Re: PERL - DBI MODULE

2020-06-04 Thread Darren Duncan
The closest thing to what you ask for that is actually useful is to install 
DBD::SQLite.  While this is a DBD, it includes the database engine, so you have 
full DBI functionality without needing a separate database server. -- 
Darren Duncan
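A minimal self-contained session in that spirit (assuming DBD::SQLite has been installed from CPAN; the table and column names are just for illustration):

```perl
use strict;
use warnings;
use DBI;

# An in-memory SQLite database: full DBI functionality, no server process.
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
    { RaiseError => 1, AutoCommit => 1 });

$dbh->do("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)");
$dbh->do("INSERT INTO person (name) VALUES (?)", undef, "Alice");

my ($name) = $dbh->selectrow_array(
    "SELECT name FROM person WHERE id = ?", undef, 1);
print "$name\n";    # prints "Alice"

$dbh->disconnect;
```

Use `dbi:SQLite:dbname=some_file.db` instead of `:memory:` to persist the data to disk.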


On 2020-06-02 9:35 p.m., Pramod Mv wrote:

Hello team

                Could you please let me know how to use the DBI module without a 
DBD module installed?  Can you suggest any alternative?


--
Regards,
Pramod M V


Re: Generic way of fetching a list of databases

2018-11-16 Thread Darren Duncan

What you seek is not possible, in any general sense.

The concept of "what is a database" varies too greatly between different DBMS 
products, even if you restrict yourself to SQL DBMSs.  Loosely the question is 
like asking for a "list of websites".


If what you want is to have a product or service that is DBMS-agnostic, you need 
to more precisely define the criteria for what you want a list of.


Here are some more specific definitions of a database:

- A DBMS process service endpoint or embedded library that you can connect to as 
a client.


- A file or file collection on disk that is a logical database.

- A collection of database table variables that can be addressed within a common 
database query.


- A collection of database table variables that can be collectively subject to a 
common ACID transaction, so that changes to the collection succeed or roll back as a whole.


- A logical namespace for database table variables.

For some DBMS engines several of the above things are one and the same, while 
with others they are not.


-- Darren Duncan

On 2018-11-15 6:09 PM, Daniel Kasak wrote:

Hi all.

I have a project that has to support pretty much every major database around, 
plus a number of more esoteric "big data" platforms as well. Until now, I've had 
a bunch of methods, implementing fetching databases, schemas, tables, etc, per 
database, with database-specific queries ( eg against information_schema ).


Some of the newer databases I'm trying to support have very little 
documentation, and in some cases no apparent way of fetching in the schema via 
SQL. I've had a conversation with one of the tech support people for a DB 
product who said that there were generic ODBC functions we can call for this 
kind of thing. Is this the case?


I've done quite a bit of searching, but can't find any docs that mention fetching 
*databases* - either in ODBC docs or in Perl/DBI docs. The closest I've found 
that *might* have worked was DBI's tables() method:

https://metacpan.org/pod/DBI#tables
   ... but:

- this doesn't work in cases where there is a separation between hierarchies at 
the database level ( eg postgres only lists schemas and tables in the current 
database )

- this isn't returning *anything* for me with any of the ODBC drivers I've tried

So is it possible to retrieve a list of databases in a generic way? Failing 
that, assuming that there *is* some ODBC call ( as suggested by one DB vendor ) 
that I can use, is there some way of calling it from Perl with DBD::ODBC?


Thanks :)

Dan
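As a side note on the ODBC angle: the DBI spec does document one generic hook for this. When `table_info()` is called with a catalog of `'%'` and empty schema and table arguments, a conforming driver is supposed to return one row per catalog (i.e. per database). Whether a given driver actually implements it is another matter, so the sketch below treats an empty result as "unsupported" (the DSN and credentials are placeholders):

```perl
use strict;
use warnings;
use DBI;

# Placeholder DSN and credentials; substitute your own.
my ($user, $pass) = @ENV{qw(DB_USER DB_PASS)};
my $dbh = DBI->connect("dbi:ODBC:MyDSN", $user, $pass, { RaiseError => 1 });

# Special case documented in DBI: catalog '%' with empty schema/table
# asks the driver for its list of catalogs ("databases").
my $sth  = $dbh->table_info('%', '', '');
my $cats = $sth->fetchall_arrayref({});

if (@$cats) {
    print "$_->{TABLE_CAT}\n" for @$cats;
} else {
    warn "driver did not return a catalog list; fall back to a\n"
       . "database-specific query (e.g. against information_schema)\n";
}
```

Under the hood this corresponds to the ODBC `SQLTables` call with the `SQL_ALL_CATALOGS` (`'%'`) convention, which is presumably the "generic ODBC function" the vendor meant.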


Re: DBD::mysql next steps

2017-11-10 Thread Darren Duncan

On 2017-11-10 5:58 PM, Dan Book wrote:

On Fri, Nov 10, 2017 at 8:43 PM, Noel Butler wrote:
Given that things are only ever going to move forward with my/maria-sql
would it not be better to enable this by default, and have a "disable"
setting for those who want to run something of antiques?

Because in years to come there will only ever be the modern versions and
you'll have to change the default behaviour then anyway, but since that time is
mostly here now, it makes more sense to enable it by default.

No, this whole issue came about because existing code requires modification to
continue working. Requiring a new option to be set to enable old behavior would
cause the same problem. Backcompat is "forever".


Yes, exactly.

However, other things we CAN do include:

1.  Prominently document the new switch with the strong recommendation that 
people flip it on by default in new code just as part of good housekeeping, 
basically in the same vein as recommending `use strict` or `use warnings` without 
turning them on by default.  This also means that all code examples for 
DBD::mysql should make sure to include setting the flag on.
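For example, a documentation-style connect snippet in that vein.  The `mysql_enable_proper_unicode` attribute is the flag name proposed in this thread, shown here as a hypothetical, not a shipped DBD::mysql option:

```perl
use strict;
use warnings;
use DBI;

my ($user, $password) = @ENV{qw(DB_USER DB_PASS)};

my $dbh = DBI->connect(
    "dbi:mysql:database=test;host=localhost",
    $user, $password,
    {
        RaiseError                  => 1,
        # Proposed behavior switch from this thread (hypothetical name):
        # flip it on in all new code as part of good housekeeping.
        mysql_enable_proper_unicode => 1,
    },
);
```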


2.  After a reasonable but relatively short delay following the first version 
with this switch, make DBD::mysql issue warnings when it is used without turning 
that switch on.  That should draw the attention of people that they need to 
update their code to work more properly but without actually breaking anything 
(unless someone uses warnings with FATAL, but then they asked for it).  I say 
have a delay so that people who pay attention and would fix their code soon 
don't have reams of warnings appearing in their server logs.  Although in 
general to keep the warning count down it could probably just be issued the 
first time a connection handle is used without the flag.
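A sketch of the warn-once idea in point 2, as it might look inside the driver.  All names here are assumptions for illustration, including the proposed flag:

```perl
use strict;
use warnings;

# Warn at most once per connection handle when the proposed flag is off.
my %warned;

sub maybe_warn_legacy_unicode {
    my ($dbh) = @_;
    return if $dbh->{mysql_enable_proper_unicode};  # opted in, nothing to say
    return if $warned{"$dbh"}++;                    # already warned this handle
    warn "DBD::mysql: mysql_enable_proper_unicode is not set; "
       . "legacy Unicode/BLOB behavior is in effect. See the docs.\n";
}
```

This keeps log noise down while still nudging people; warnings-with-FATAL users would, as noted above, get what they asked for.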


3.  Make sure that at least the public or known Perl code that uses DBD::mysql 
and has reasonable control over use of the handle turns the flag on.


So the DarkPAN is handled by #1 and #2, as much as it can be.

-- Darren Duncan


Re: DBD::mysql next steps

2017-11-10 Thread Darren Duncan

I agree in principle with Patrick's plan.

My strong recommendation for continuing development under a different module 
name was based on the assumption (but not the knowledge) that the Unicode/Blob 
problems were rooted in the DBD::mysql codebase in such a way that they blocked 
the ability to use the newer MySQL/MariaDB/etc client libraries properly, and 
that maintaining user-configurable support for both behaviors would be too 
difficult to do without making bugs worse.


However, I also believe that Patrick's proposal (a single DBD::mysql under that 
name where the incompatible new behavior is toggled on a per-connection switch) 
is actually the best and most elegant solution for satisfying all parties under 
the assumption that there are savvy developers who fully understand the problem 
and are able and willing to support such a more-complicated codebase.


-- Darren Duncan

On 2017-11-10 7:13 AM, Patrick M. Galbraith wrote:

Greetings all!

Michiel and I have been talking, weighing options of what course to take in
moving forward -- with the goal of offering both stability
and the choice to have the latest functionality and bug fixes, as well as
giving contributors the opportunity to be part of overall improvements to the
driver.

What we are going to do is:

Add to the connection the ability to turn on proper UTF handling with
'mysql_enable_proper_unicode'. This gives the user the option to knowingly
toggle whether they want the new functionality, understanding that
data structures returned or accepted by the driver might be different
than without this setting.

The other options had their merits, but we think this will solve the issue
while keeping the driver unified and preventing divergence.

Thank you for your input over the last couple months-- we look forward to
moving ahead!

Patrick and Michiel



Re: DBD::mysql path forward

2017-11-09 Thread Darren Duncan

On 2017-11-09 10:50 PM, Alexander Hartmaier wrote:

What about DBD::MariaDB as that‘s the name of the OpenSource version these days?


No, MariaDB is simply a fork of MySQL.  Both MariaDB and MySQL are Open Source. 
Saying only MariaDB is open source is wrong.


I also disagree with using the MariaDB name as that sends the wrong message. 
Keeping the MySQL name says it's for all the (API-compatible) forks, while using 
the MariaDB name implies that the MariaDB fork is more special than the others.


-- Darren Duncan


Re: DBD::mysql path forward

2017-11-09 Thread Darren Duncan
Michael, why can't you accept moving forward under a new module name?  Why does 
it have to be under the old name?  When people purposefully want to upgrade they 
purposefully choose the new module name in order to do so.  What is the actual 
problem in that? -- Darren Duncan


On 2017-11-09 10:59 PM, Michiel Beijen wrote:

On Fri, Nov 10, 2017 at 7:16 AM, Darren Duncan <dar...@darrenduncan.net> wrote:

I agree with everything Dan said here.  It's what I proposed, in fewer words.
Do all new development under a new name, including all of Pali's work, and
leave the current name for a product with no further effort applied to
develop it. -- Darren Duncan


This is NOT an option to me - it simply can't be, because the world moves
forward and because of bitrot. The 'old' version - the version that
works for most people, the current version of DBD::mysql, the one
which would then receive no more maintenance - no longer
compiles with the latest version of libmysqlclient, and does not
compile with libmariadb. This will only get worse in the future.

I'll stick with my earlier proposal - I'll propose to go back to the
*current* latest DBD::mysql release which does not break backcompat
for our users; add the patches that we discarded when we rolled back
one by one, such as testing on many different lib/db options, memory
leaks and so on, and make a new release so we can be on the move
again.

If possible I'd want to add back the *breaking* unicode changes that
were introduced but they should be either in a separate namespace OR
under a specific configuration option.
Currently this whole thing has cost us MONTHS of lost progress and
then MONTHS of nothing, and that is simply not good.

Patrick: let me know if you're OK with this and then let's get started again!


Re: DBD::mysql path forward

2017-11-09 Thread Darren Duncan

On 2017-11-09 8:32 AM, Dan Book wrote:

It seems to me like the remaining option that can make everyone "happy" is the
previously-suggested option of maintaining a legacy branch and doing new
development (reinstating 4.042) in another branch which will be released as a
new distribution, like DBD::mysql2, by the same maintainers. (I would not favor
DBD::MariaDB as a name, since this new distribution would also be the favored
way to connect to MySQL.) After doing so DBD::mysql can be deprecated, with
migration instructions added to the docs, and only receive critical security
fixes if anything. If done this way the community should not be fractured; the
current module will simply be abandoned for future development, all support and
maintenance will move to the new one. Patrick and Michiel, what do you think?


I agree with everything Dan said here.  It's what I proposed, in fewer words.  Do 
all new development under a new name, including all of Pali's work, and leave 
the current name for a product with no further effort applied to develop it. -- 
Darren Duncan


Re: DBD::mysql path forward

2017-11-09 Thread Darren Duncan
Pali, there's a very simple solution to what you said.  The old DBD::mysql does 
not get further maintenance at all.  It is simply frozen at the 4.041/3 state 
forever.  This serves the primary reason for that to exist, which is that people 
whose package managers automatically upgrade to the highest number of a 
namespace don't introduce new corruption due to a higher version changing 
behavior being relied on.  All development can be under the new namespace full 
stop, assuming what you say is true that no one would want to backport anything 
to the otherwise frozen version. -- Darren Duncan


On 2017-11-09 12:54 AM, p...@cpan.org wrote:

On Tuesday 07 November 2017 13:19:23 Darren Duncan wrote:

A maintenance branch would exist starting from the 4.041/3 release on which
further stable releases named DBD::mysql 4.x would be made, and the primary
goal of this branch is to not break/corrupt anything that relied on 4.041
behavior. So people with legacy projects that just use DBD::mysql and made
particular assumptions on its handling of Unicode or Blobs etc won't have
corruption introduced because DBD::mysql changed its handling of those
things.


As those people or projects misuse some internals of Perl, we cannot
guarantee anything; such code may be broken or changed by updating
any module from CPAN or an external source not related to DBD::mysql.

Maintaining such a thing is something I believe nobody wants. I certainly
don't.

As already stated several times, if there is some code which depends on
some internals, it should stay conserved at a tested version, as it can
break by updating anything (even something unrelated). There is no other
way to ensure that such code stays working and does not corrupt something
else.


People without knowledge of CPAN or the development process will
get something that just continues to work for them and doesn't corrupt.


As explained above (and also in the past), it is not possible to guarantee that.


For
all intents and purposes this branch would be frozen but could have select
security or bug fixes that comply with the mandate and don't involve
changing Unicode etc.


Are you going to maintain that branch, with all the problems it brings?

Or is there anybody else who wants to maintain such crap?

If not, then it does not make sense to talk about this, because nobody
has expressed willingness to do such a thing in six months, which is a
really large amount of time.


Re: DBD::mysql path forward

2017-11-09 Thread Darren Duncan

On 2017-11-09 12:54 AM, p...@cpan.org wrote:

On Tuesday 07 November 2017 13:19:23 Darren Duncan wrote:

The whole discussion on the mailing lists that I recall and participated in
seemed to consensus on branching DBD::sqlite in order to best satisfy the

   ~~
   DBD::mysql

I hope you are talking about DBD::mysql there...


Yes, that was a typo, I was talking exclusively about DBD::mysql here.

That being said, as an aside, DBD::SQLite (and yes I meant that) is still a 
valuable case study in that they also branched, back 14 years ago when SQLite 3 
came out: in that case DBD::SQLite2 bundled SQLite version 2, while the 
DBD::SQLite name was repurposed, bumping from version 0.33 to 1.0, to bundle 
the incompatible SQLite 3.  In their case, it was a break for people that 
simply upgraded in the usual CPAN manner, and the fork was for people that still 
needed to use SQLite 2 in Perl.  DBD::mysql doesn't want to follow their example.


-- Darren Duncan


Re: DBD::mysql path forward

2017-11-07 Thread Darren Duncan
Patrick and Pali, each of you please respond to the lists to confirm that what I 
say below is what you also understand is the primary plan, and if not, then say 
why not; in prior discussion I recall you agreed with it.


Assuming they agree, the concerns of Night Light and those he is speaking for 
should be handily satisfied.


The whole discussion on the mailing lists that I recall and participated in 
seemed to consensus on branching DBD::sqlite in order to best satisfy the needs 
of all stakeholders.


There would be one version control with 2 active release branches.

A maintenance branch would exist starting from the 4.041/3 release on which 
further stable releases named DBD::mysql 4.x would be made, and the primary goal 
of this branch is to not break/corrupt anything that relied on 4.041 behavior. 
So people with legacy projects that just use DBD::mysql and made particular 
assumptions on its handling of Unicode or Blobs etc won't have corruption 
introduced because DBD::mysql changed its handling of those things.  People 
without knowledge of CPAN or the development process will get something that 
just continues to work for them and doesn't corrupt.  For all intents and 
purposes this branch would be frozen but could have select security or bug fixes 
that comply with the mandate and don't involve changing Unicode etc.


The master branch would then have all the short term breaking changes including 
the 4.042 Unicode fixes and would have all the significant feature changes 
including compatibility with newer MySQL major versions.  All of its releases 
would be under a new namespace such as DBD::mysql2 and start at version 5.0 to 
signify that large changes happened which might possibly break code or cause 
corruption if user code doesn't specifically account for the differences.  Being 
a different name this is strictly opt-in, so the only ones using DBD::mysql2 are 
those that explicitly opted in to it, and by doing so they also opted in to 
fixing anything with their code that needed fixing to not corrupt data while 
using it.


The concept of having a single driver with toggled behavior eg a 
mysql_enable_proper_unicode flag was already rejected as unwieldy to implement, 
especially for such deep rooted changes as properly handling Unicode.


Note that I expect that most other higher-level projects that use MySQL would 
NOT branch like this, instead having internal logic or their own toggle to work 
with DBD::mysql vs DBD::mysql2, or a few may branch, but the key thing is that 
branching DBD::mysql does NOT necessitate doing so in the rest of the ecosystem.


-- Darren Duncan

On 2017-11-07 12:07 PM, Night Light wrote:

Proposed, but no decisions were posted afterwards.

The last proposal is to re-commit the rejected 4.042 changes into the 4.043
master branch and only work on fixes that came after June.

The GitHub issue regarding improper encoding of BLOBs was opened on April 6th
(hence me sending the message to prevent a recurring cycle).

https://github.com/perl5-dbi/DBD-mysql/issues/117

On Tue, Nov 7, 2017 at 8:57 PM, Michiel Beijen wrote:

To me, the only real option is to make a new option in DBD::mysql;
mysql_enable_proper_unicode or the like which you would knowingly set
in your application code, which will expose the new behaviour. I
understand this is difficult, but I really think it's the only way.

If in the short term this is not feasible, it *could* be possible, in
my eyes, to release a DBD::mysql2 or similar that does *correct*
behaviour. Also in that case this is something the application
developer should set explicitly in his connection string.

This DBD::mysql2 or similar could live in another git repository, but
preferably in the same repo, and even in the same CPAN distribution as
DBD::mysql, and eventually the goal should be that they re-unite and
using DBD::mysql2 would really be the same as to use the
'mysql_enable_proper_unicode' option in DBD::mysql.

--
Michiel

On Tue, Nov 7, 2017 at 7:41 PM, Darren Duncan <dar...@darrenduncan.net
<mailto:dar...@darrenduncan.net>> wrote:
> My understanding from the last discussion on this matter is that it was
> agreed DBD::mysql would be forked such that the existing DBD::mysql name
> would be frozen at 4.041/3 indefinitely and that the 4.042 changes plus 
any
> further feature development would take place under a new name such as
> DBD::mysql2 version 5.0.


Re: DBD::mysql path forward

2017-11-07 Thread Darren Duncan
My understanding from the last discussion on this matter is that it was agreed 
DBD::mysql would be forked such that the existing DBD::mysql name would be 
frozen at 4.041/3 indefinitely and that the 4.042 changes plus any further 
feature development would take place under a new name such as DBD::mysql2 
version 5.0.


So those people using "DBD::mysql" would not experience any changes in behavior 
from what they were used to, and BLOBs etc. should not break.  Whereas people that 
want the Unicode fixes and other features would use DBD::mysql2 and fix 
their code to be compatible with its breaking changes.


I thought this was settled, and takes care of your concerns, so it was just a 
matter of "make it so".


(Anyone replying to this message/thread, please do NOT include my personal email 
address as an explicit recipient, only send it to the list addresses, I will get 
those.)


-- Darren Duncan

On 2017-11-07 3:41 AM, Night Light wrote:

For the reason of "silence":
I've spoken to other users to hear that they have passively withdrawn from this
discussion as they are not motivated to be part of a release where concerns
about backwards compatibility are ignored. One of the users wrote earlier
(replace the word "sector" with "release"):

"When you're dealing with software that's purpose in life is to not corrupt
data, and have data there tomorrow, you go out of your way not to break that
promise. There's no point in being involved in this sector if you don't care to
deliver on that promise."

Re-releasing the 4.042 changes will break the contract of a long-standing
interface and corrupt all BLOB data when using "do()". These changes
therefore do more harm than good.
Putting these utf8 changes in the freezer until a suggestion is made that will
add the functionality instead of replacing it is not a sin. The Pg driver, for
instance, also has a similar issue open without plans to fix it anytime soon.
https://rt.cpan.org/Public/Bug/Display.html?id=122991

What is your objection against using the current 4.043 branch, working on
outstanding fixes, doing a pre-release, allowing a period of time for people
to test, and then releasing?


Re: Fork DBD::mysql

2017-10-11 Thread Darren Duncan

On 2017-10-11 10:05 AM, Vincent Veyron wrote:

On Wed, 11 Oct 2017 15:09:49 +
Steven Lembark <lemb...@wrkhors.com> wrote:


I really am curious to see any example of something in your database that
can be handled more gracefully in MySQL than well-designed Pg.


Seconded, I was wondering the same thing.


While we're on that wagon, thirded.

-- Darren Duncan


Re: DBD::mysql path forward

2017-09-19 Thread Darren Duncan
What Night Light's post says to me is that there is high risk of causing data 
corruption if any changes are made under the DBD::mysql name where DBD::mysql 
has not been exhaustively tested to guarantee that its behavior is backwards 
compatible.


This makes a stronger case to me that the DBD::mysql Git master (that which 
includes the 4.042 changes and any other default breaking changes) should rename 
the Perl driver package name, I suggest DBD::mysql2 version 5.0, and that any 
changes not guaranteed backwards compatible for whatever reason go there.


If the Git legacy maintenance branch 4.041/3 can have careful security patches 
applied that don't require any changes to user code to prevent breakage, it gets 
them, and otherwise only DBD::mysql2 gets any changes.


By doing what I said, we can be guaranteed that users with no control over how 
DBD::mysql gets upgraded for them will not have corruption introduced simply by upgrading.


-- Darren Duncan

On 2017-09-19 5:46 AM, Night Light wrote:

Dear Perl gurus,

This is my first post. I'm using Perl with great joy, and I'd like to express my
gratitude for all you are doing to keep Perl stable and fun to use.

I'd like to object to re-releasing this version and to discuss how to
make 4.043 backwards compatible instead.
This change will with 100% certainty corrupt all BLOB data written to the
database whenever a developer did not read the release notes before applying the
latest version of DBD::mysql (and change their code accordingly).
Knowing that sysadmins have the habit of not always reading the release notes of
each updated package, the likelihood that this will happen is therefore high.
I myself wasn't even shown the release notes as it was a dependency of an
updated package that I applied.
The exposure of this change is big as DBD::mysql affects multiple applications
and many user bases.
I believe deliberately introducing industry-wide database corruption is
something that will significantly harm people's confidence in using Perl.
I believe that not providing backwards compatibility is not in line with the
Perl policy that has been carefully put together by the community to maintain
the quality of Perl as it is today.
http://perldoc.perl.org/perlpolicy.html#BACKWARD-COMPATIBILITY-AND-DEPRECATION

I therefore believe the only solution is an upgrade that is by default backwards
compatible, where it is the user who decides when to start UTF8-encoding the
input values of a SQL request instead.
If it is too time consuming or too difficult it should be considered to park the
UTF8-encoding "fix" and release a version with the security fix first.

I have the following objections against this release:

1. the upgrade will corrupt more records than it fixes (it does more harm than
good)
2. the reason given for not providing backward compatibility ("because it was
hard to implement") is not plausible given the level of unwanted side effects,
   especially knowing that there is already a mechanism in place to signal
whether UTF8 encoding is wanted or not (mysql_enable_utf8/mysql_enable_utf8mb4).
3. it costs more resources to coordinate/discuss a "way forward" or options than
to implement a solution that addresses backwards compatibility
4. it is unreasonable to ask for changes to existing source code knowing that
dependent modules may not be actively maintained, or may be proprietary.
   It can be argued that such modules should always be maintained, but that does
not change the fact that a well-running Perl program becomes unusable
5. it does not inform the user that after upgrading, existing code will start
writing corrupt BLOB records
6. it does not inform the user that a code review of all existing
code is necessary, nor how it needs to be changed and tested
7. it does not give the user the option to decide how BLOBs should be
stored/encoded (opt in)
8. it does not provide backwards compatibility.
   By doing so it does not respect the Perl policy that has been carefully put
together by the community to maintain the quality of Perl as it is today.
   http://perldoc.perl.org/perlpolicy.html#BACKWARD-COMPATIBILITY-AND-DEPRECATION
9. it blocks users from using DBD::mysql upgrades as long as they have not
rewritten their existing code
10. not all users of DBD::mysql can be warned beforehand about the side
effects, as it is not known which private parties have code that uses DBD::mysql
11. I believe development will go faster when support for backwards
compatibility is addressed
12. having to write 1 extra line for each SQL query value is a monk's job that
will make the module less attractive to use

About forking to DBD::mariadb:
The primary reason to create such a module would be if the communication protocol of
MariaDB had become incompatible with MySQL's.
Using this namespace to fix a bug in DBD::mysql does not meet that criterion, and
causes confusion for developers and unnecessary pollution of the DBD namespace.

---

For peop

Re: DBD::mysql path forward

2017-09-14 Thread Darren Duncan

On 2017-09-14 3:01 AM, H.Merijn Brand wrote:

On Thu, 14 Sep 2017 09:44:54 +0200, p...@cpan.org wrote:


BYTE/BLOB/TEXT tests require three types of data

• Pure ASCII
• Correct UTF-8 (with complex combinations)


subtest: Correct UTF-8 TEXT with only code points in range U+00 .. U+7F (ASCII 
subset)
subtest: Correct UTF-8 TEXT with only code points in range U+00 .. U+FF (Latin1 
subset)


ASCII:U+00 .. U+7F
iso-8859-*: + U+80 .. U+FF (includes cp1252)
iso-10646:  + U+000100 .. U+0007FF
+ U+000800 .. U+00D7FF
+ U+00E000 .. U+00FFFF
utf-8 1):   + U+010000 .. U+10FFFF
+ surrogates
+ bidirectionality
+ normalization
+ collation (order by)

1) some iso-10646 implementations already support supplementary
   codepoints. Depends on the version of the standard


Regarding Unicode subtests I was going to respond to Pali's comment to say that 
there are more important ranges; H.Merijn addressed the main points I was going 
to raise, however I propose a simpler set of tests as being the main ones of 
importance for data being handled without corruption.


Note, these comments apply to ALL DBI drivers, not just DBD::mysql.

There are 8 main Unicode integer codepoint ranges of interest, representable by 
a signed 32-bit integer:


- negative integers: rejected as invalid data
- 0x000000 .. 0x00007F: ASCII subset, accepted
- 0x000080 .. 0x0000FF: non-ASCII 8-bit subset, accepted
- 0x000100 .. 0x00D7FF: middle Basic Multilingual Plane, accepted
- 0x00D800 .. 0x00DFFF: UTF-16 surrogates, rejected as invalid data
- 0x00E000 .. 0x00FFFF: upper Basic Multilingual Plane, accepted
- 0x010000 .. 0x10FFFF: the 16 supplementary planes, accepted
- 0x110000 and above: rejected as invalid data
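The ranges above can be collapsed into a small accept/reject classifier; a sketch, not actual driver code:

```perl
use strict;
use warnings;

# Returns true if a transit layer (e.g. a DBI driver) should accept the
# codepoint, false if it must reject it as invalid data.
sub codepoint_ok {
    my ($cp) = @_;
    return 0 if $cp < 0;                         # negative: invalid
    return 0 if $cp >= 0xD800 && $cp <= 0xDFFF;  # UTF-16 surrogates: invalid
    return 0 if $cp > 0x10FFFF;                  # beyond Unicode: invalid
    return 1;                                    # everything else: accepted
}

print codepoint_ok(0x41)     ? "ok" : "bad", "\n";  # ASCII letter: ok
print codepoint_ok(0xD800)   ? "ok" : "bad", "\n";  # surrogate: bad
print codepoint_ok(0x10FFFF) ? "ok" : "bad", "\n";  # last codepoint: ok
```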

I would argue strongly that a transit middleware like a DBI driver should 
strictly concern itself with the Unicode codepoint level: it shuttles 
data back and forth preserving the exact valid codepoints given, while rejecting 
invalid codepoints on both input and output, either with an error or by use of the 
Unicode replacement character U+FFFD.


While Perl itself or MySQL itself can concern itself with other matters such as 
graphemes/normalization/collation/etc, a DBI driver should NOT.


Besides being logically correct, this means that DBI drivers can avoid needing 
code for the most complicated aspects of Unicode, they can avoid 99% of the 
complexity.


-- Darren Duncan


Re: DBD::mysql path forward

2017-09-13 Thread Darren Duncan

On 2017-09-13 12:58 PM, Dan Book wrote:

On Wed, Sep 13, 2017 at 3:53 AM, Peter Rabbitson wrote:

On 09/12/2017 07:12 PM, p...@cpan.org wrote:
And here is promised script:


The script side-steps showcasing the treatment of BLOB/BYTEA columns, which
was one of the main ( albeit not the only ) reason the userbase lost data.

Please extend the script with a BLOB/BYTEA test.

I'm not sure how to usefully make such a script, since correct insertion of BLOB
data (binding with the SQL_BLOB type or similar) would work correctly both
before and after the fix.


Perhaps the requirement of the extra tests is to ensure that BLOB/BYTEA data is 
NOT mangled during input or output, that on input any strings with a true utf8 
flag are rejected and that on output any strings have a false utf8 flag.  Part 
of the idea is to give regression testing that changes regarding Unicode 
handling with text don't inadvertently break blob handling. -- Darren Duncan
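A test of that shape might look like the following sketch.  It assumes an already-connected handle `$dbh` and a `blobs` table; `SQL_BLOB` is a real DBI `:sql_types` export, the rest of the names are illustrative:

```perl
use strict;
use warnings;
use DBI qw(:sql_types);
use Test::More;

# Assumed: my $dbh = DBI->connect(...); and a table
#   CREATE TABLE blobs (id INTEGER, data BLOB)

my $bytes = pack "C*", 0x00, 0x7F, 0x80, 0xFF;   # arbitrary binary data

# Input side: bind explicitly as SQL_BLOB.
my $sth = $dbh->prepare("INSERT INTO blobs (id, data) VALUES (?, ?)");
$sth->bind_param(1, 1);
$sth->bind_param(2, $bytes, SQL_BLOB);
$sth->execute;

# Output side: the fetched value must be byte-identical and must not
# come back with Perl's internal utf8 flag set.
my ($out) = $dbh->selectrow_array(
    "SELECT data FROM blobs WHERE id = ?", undef, 1);
ok !utf8::is_utf8($out), "fetched BLOB has the utf8 flag off";
is $out, $bytes,         "fetched BLOB round-trips byte-identically";

done_testing;
```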


Re: DBD::mysql path forward

2017-09-13 Thread Darren Duncan

On 2017-09-13 6:31 AM, p...@cpan.org wrote:

On Tuesday 12 September 2017 11:32:36 Darren Duncan wrote:

Regardless, following point 2, mandate that all Git pull requests are made
against the new 5.x master; the 4.x legacy branch would have no commits
except minimal back-porting.


New pull requests are by default created against "master" branch. So if
5.x development would happen in "master" and 4.x in e.g. "legacy-4.0"
then no changes is needed.

But on github it is not possible to disallow users to create new pull
requests for non-default branch. When creating pull request there is a
button which open dialog to change target branch.


When I said "mandate" I meant as a matter of project policy, not on what GitHub 
enforces.  Strictly speaking there are times where a pull request against the 
legacy branch is appropriate. -- Darren Duncan


Re: DBD::mysql path forward

2017-09-12 Thread Darren Duncan

On 2017-09-12 6:45 PM, Patrick M. Galbraith wrote:

Darren,

I agree with this as well, with the exception of 4 and 5, keeping 5.0 "pure" to
the new way of doing things.


I very much agree, pure is best.  I only suggested 4 and 5 as half-hearted 
options because I knew some projects in our community liked to do things like that.



For releases, I think I want to understand what this will mean. Sooner or later,
a release shows up as a distribution package, installed on the OS, and I want
to know what the way of communicating that to the users and vendors is, so
expectations are met. That's another question of "how do we handle that?" and
"how do we inform an OS packager/vendor what to do?"

Thank you for the great discussion!


Based on my experience, the fact the major version number is being changed from 
4 to 5 will communicate a lot by itself.


Many software projects use semantic versioning, see http://semver.org for a 
formal definition adopted by SQLite among other projects, and all the common 
package managers and package repositories understand it.


Generally speaking, if the first number in a dot-separated version number is 
incremented, it is treated as being a major version, and then any version 
numbers having that first part in common are considered minor versions of the 
same major.
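As a small runnable illustration of that ordering, using Perl's core version
module (the specific version strings are just examples):

```perl
use strict;
use warnings;
use version;

# 4.043 and 5.0 share no major version, so package managers treat
# 5.0 as a separate major release rather than an update to 4.x.
my $legacy = version->parse('4.043');
my $next   = version->parse('5.0');

print "legacy < next\n" if $legacy < $next;    # prints: legacy < next

# A 4.x bug-fix release is still a minor update within the 4 major line.
my $bugfix = version->parse('4.044');
print "bugfix stays in 4.x\n"
    if $bugfix > $legacy and $bugfix < $next;  # prints: bugfix stays in 4.x
```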


Given the default behavior of package managers, any users performing a generic 
"keep me up to date with all the patches and bug fixes" should receive the 
latest version sharing the major version they already have, and going to a new 
major version would require a more explicit "install this separate thing" on the 
user's behalf.


Tangentially, since DBD::mysql is a CPAN module, CPAN conventions can also be 
followed by the package managers.


As for giving explicit communication to repository maintainers eg Debian and 
CentOS and whatever, I think that's a matter of direct human communication. 
Perhaps have the person managing the packages for MySQL itself handle it.


This all being said, for people using the CPAN clients, I believe their rules 
are simpler and they simply always get the highest numbers, ignoring major/minor 
stuff, except for Perl itself.


For the sake of CPAN clients, another possible alternative is to fork the 
namespace.  The current DBD::mysql name either means the old legacy version or 
the new better version, and then you pick a new name for the complement.


A semi-relevant example of this is DBD::SQLite back in 2003 when SQLite 3 came 
out.  In that case, SQLite 3 was a complete break from 2, neither could read the 
others' files.  They renamed the old DBD::SQLite to DBD::SQLite2 (or some such) 
which was the one for SQLite 2, and the plain DBD::SQLite name then meant SQLite 
3.  That being said, all the SQLite 2 ones were 0.x versions, and then version 
1.0 was the first SQLite 3 version.  So the fact that, prior to the break, the 
0.x semantics (anything might break) applied made the decision easier.


I think the decision then of which MySQL client keeps the old name depends on 
what you want to happen when users simply "get the latest" without testing.  If 
the latest features and fixes stay with the old name, then the name for the 
legacy fork could be DBD::mysql::LegacyUnicode 4.044 AND DBD::mysql jumps to 
5.0.  If the latest features go to a new name, then you could have eg 
DBD::mysql2 version 1.0; it would be like a fork, but it's official.  The former 
approach will mean users with broken code will have to update their code to the 
new module name but won't have to fix their Unicode.  The latter approach means 
people wanting the new stuff change the driver name they call.


Another option that lets you keep the same name for both but requires other 
parts of the ecosystem to at least update some code, is you stick with the 
originally discussed 4-5 move, and other middleware modules add explicit 
internal code paths for "if DBD::mysql version >= 5 then expect correct Unicode 
behavior and otherwise expect and compensate for broken Unicode"; I expect 
DBIx::Class and friends will all add such checks anyway, short of hard requiring 
the new version.
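For illustration, such a gate could look like the following sketch (the inline
stub package stands in for a real installed DBD::mysql 5.x so the snippet is
self-contained; real middleware would omit the stub and just run the check):

```perl
use strict;
use warnings;

# Stub standing in for an installed DBD::mysql 5.x (assumption made
# only so this sketch runs without the real driver).
{ package DBD::mysql; our $VERSION = '5.001'; }

# Returns true when the loaded driver is 5.x or newer, i.e. when
# correct Unicode behavior can be expected rather than compensated for.
sub mysql_unicode_is_fixed {
    return eval { DBD::mysql->VERSION(5); 1 } ? 1 : 0;
}

if (mysql_unicode_is_fixed()) {
    print "expect correct Unicode behavior\n";   # taken with the 5.x stub
} else {
    print "compensate for broken Unicode\n";
}
```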


Basically, it's all about tradeoffs: who you want to do what work, and who you 
want to be able to do zero work.


-- Darren Duncan


Re: DBD::mysql path forward

2017-09-12 Thread Darren Duncan

On 2017-09-12 11:05 AM, p...@cpan.org wrote:

On Tuesday 12 September 2017 19:00:59 Darren Duncan wrote:

I strongly recommend that another thing happen, which is
re-versioning DBD::mysql to 5.0.

1. From now on, DBD::mysql versions 4.x would essentially be frozen
at 4.041/4.043.

2. From now on, DBD::mysql versions 5.x and above would be where all
active development occurs, would be the only place where Unicode
handling is fixed and new MySQL versions and features are supported,
and where other features are added.


I'm fine with it. Basically it means reverting to the pre-4.043 code and
continuing *git* development there with the version increased to 5.x. And once
the code is ready it can be released to CPAN as a normal (non-engineering) 5.x
release.


Yes, as you say.  With respect to Git I propose doing this immediately:

1. Create a Git tag/branch off of 4.043 which is the 4.x legacy support branch.

2. Revert Git master to the pre-4.043 code and then follow that with a commit to 
master that changes the DBD::mysql version to 5.0.


Optionally, a DBD::mysql version 5.0 that matches 4.042 in content could then be 
released from the new master; or an additional round of planning or public 
consultation could happen beforehand, in case any other breaking changes should 
be introduced first.


Regardless, following point 2, mandate that all Git pull requests are made 
against the new 5.x master; the 4.x legacy branch would have no commits except 
minimal back-porting.


-- Darren Duncan


Re: DBD::mysql path forward

2017-09-12 Thread Darren Duncan

On 2017-09-12 8:54 AM, Dan Book wrote:

On Tue, Sep 12, 2017 at 11:04 AM, Patrick M. Galbraith wrote:
Pali,
Yes, I agree, we'll have to create a fork pre-revert and stop accepting PRs.
How might we allow people time to test the fixes? Just have them use the
fork, I would assume?

To be clear, this sounds like a branch, not a fork. If your plan is to reinstate
the mysql_enable_utf8 behavior as in 4.042 rather than adding a new option for
this behavior, then branching from 4.042 seems reasonable to me; but you should
be very clear if this is your intended approach, as this is what led to many
people corrupting data when they sent blobs to mysql with the same
mysql_enable_utf8 option and expected them to (accidentally) not get encoded.


Assuming that broken Unicode handling has been in DBD::mysql for a long time and 
that users expect this broken behavior and that fixing DBD::mysql may break user 
code making those assumptions...


I strongly recommend that another thing happen, which is re-versioning 
DBD::mysql to 5.0.


A declared major version change would signify to a lot of people that there are 
significant changes from what came before and that they should be paying closer 
attention and expecting the possibility that their code might break unless they 
make some adjustments.  Without the major version change they can more easily 
and reasonably expect not having compatibility breaks.


Part and parcel with this is that only DBD::mysql 5.0 would have the other 
changes required for compatibility with newer MySQL versions or features that 
would be the other more normal-sounding reasons to use it.


So I specifically propose the following:

1. From now on, DBD::mysql versions 4.x would essentially be frozen at 
4.041/4.043.  They would expressly never receive any breaking changes (but see 
point 3) and in particular no Unicode handling changes.  They would expressly 
never receive any new features.  This is the option for people whose codebases 
and environments work now and want to leave it alone.


2. From now on, DBD::mysql versions 5.x and above would be where all active 
development occurs, would be the only place where Unicode handling is fixed and 
new MySQL versions and features are supported, and where other features are 
added.  Version 5.0 specifically would have all of the backwards-breaking 
changes at once that are currently planned or anticipated in the short term, in 
particular fixing the Unicode.  Anyone who is already making changes to their 
environment by moving to a newer MySQL version or want newer feature support 
will have to use DBD::mysql 5, and in the process they will have to go through 
their Perl codebase and fix anything that assumes Unicode is broken.


3. As an exception to the complete freeze otherwise mentioned, there could be 
one or more DBD::mysql 4.x releases whose sole purpose is to back-port security 
fixes or select other bug fixes from 5.x.  Also 4.x could gain minimalist 
documentation changes to indicate its new legacy status and point to 5.x.


4. Optional bonus 1:  If it is reasonably possible to support both correct 
Unicode handling as well as the old broken handling in the same codebase, I 
suggest DBD::mysql 5.0 could also have a user config option where one could 
explicitly set it to make DBD::mysql use the old broken behavior, while the 
default would be to have the correct behavior.  This would be analogous to 
Perl's "use experimental" feature.  The idea is that it would provide a 
temporary stopgap for users migrating from 4.x where they could just make the 
minimal change of enabling that option but they otherwise wouldn't yet have to 
go through their codebase to fix all the assumptions of wrongness.  Using this 
option would cause deprecation warnings to be emitted.  Perhaps the feature 
could be lexical or handle-specific if appropriate to help support piecemeal 
migration to correct assumptions.  Alternately it may be best to not have 
this option at all and people simply have to fix their code on moving to 5.x. 
This matter is probably its own discussion, but the key part is it only applies 
to 5.x, and meanwhile the 5.x default is correct Unicode while 4.x freezes the 
old bad Unicode behavior.


5. Optional bonus 2:  A user config option could be turned on whose only effect 
is to emit warnings in situations where the Unicode behavior was fixed, to prod 
users to check the particular spots in their code that may have been impacted; 
this would be more like a typical "use warnings".  This is just a tool to help 
users convert their code and find impacted areas.


Note, 4.x versions have already been in use for a full decade, so there 
shouldn't be new version fatigue.


So Patrick and Pali and others, what do you think of my proposal?

-- Darren Duncan


Re: Fork DBD::mysql

2017-08-28 Thread Darren Duncan

On 2017-08-28 12:00 PM, Alexander Foken wrote:

On 28.08.2017 18:19, Darren Duncan wrote:

While a fork may be the best short term fix, as keeping up with security
issues is important, ...


Not being able to connect to MySQL / MariaDB from Perl is not acceptable. Having
tons of known bugs, especially security-related ones, is as bad. Being forced to
take the long detour through DBD::ODBC, an ODBC driver, and a MySQL / MariaDB
ODBC driver is not pretty, and it won't be faster than the native DBD::mysql
driver.

Intentionally breaking DBD::mysql to drive people away from MySQL / MariaDB won't
work. Instead, people will search for another language that does support
connecting to MySQL / MariaDB, like PHP, Python or Java. And so, they will use
MySQL / MariaDB without Perl.


I agree with what you said, and I said as much myself (quoted above).

The way I look at it is that keeping a good quality DBD::mysql is important but 
that it would be conceptually a LEGACY SUPPORT feature.


This is similar to how Perl supports, and should support, talking with any tools 
and systems that people have, so people always have the option to use Perl, even 
when the things they're working with aren't the best choices, it's still what 
they have.


And Perl's support for MySQL should remain as high quality as possible for as 
long as it exists.


But I believe that's mainly a short term solution, and the longer term solution 
is to migrate to better DBMSs.


-- Darren Duncan


Re: Fork DBD::mysql

2017-08-28 Thread Darren Duncan

On 2017-08-28 8:55 AM, p...@cpan.org wrote:

Because of the current situation we in the GoodData company are thinking
about forking DBD::mysql. We use DBI not only with DBD::mysql, but also
with DBD::Pg. The annoying bugs in DBD::mysql still require hacks and
workarounds, while similar bugs were fixed in DBD::Pg years ago. Also,
inability to use the new version of MySQL or MariaDB is a problem. Same
for the open security issues.

I would like to ask, whether somebody else is interested in DBD::mysql
fork? Either as a user or as a developer? Maintaining a fork isn't a
simple task, but with supporting users and contributing developers it
should be easier.


While a fork may be the best short term fix, as keeping up with security issues 
is important, I honestly believe that the best fix is to migrate to Postgres as 
soon as you can.  You already use Pg so the experience in your company is there. 
 I'd say move all your MySQL projects to Pg and then you won't have to think 
about MySQL anymore.  It's not just about the Perl driver, but the differences in 
quality/features/etc of the DBMSs themselves. -- Darren Duncan


Re: suppress quoting in prepared sql

2016-04-05 Thread Darren Duncan

Replying to myself...

Postgres can do what I speak of directly, see 
http://search.cpan.org/~turnstep/DBD-Pg/Pg.pm#Array_support but if MySQL doesn't 
support that or something similar, there's another of many reasons for you to 
use Postgres instead.


-- Darren Duncan


Re: suppress quoting in prepared sql

2016-04-05 Thread Darren Duncan
The most elegant solution is to pass a single array-typed value from Perl to SQL 
somehow AS a single array-typed value, so that a single placeholder is appropriate.


Given the limitations of some SQL DBMSs to pass arrays (or tables of a single 
column), Chris Howard's solution is probably the best one in regard to 
maintaining data types and such.


Another alternative is to use an encoding/decoding process for transport, so you 
can fake passing an array value.


How this works is that on the Perl side you join the list of array elements into 
a single string value with some kind of separator, which could be a comma or a 
vertical bar or whatever you like.


On the SQL side you have a prepared statement with just the 1 placeholder for a 
string value, and this is an argument to a SQL function that splits the elements 
into a SQL ARRAY or TABLE or ROW and then that is the input to the IN.


The second solution may require a more advanced DBMS rather than a less advanced 
one but it is functionally closest to what you wanted to do, aside from having 
actual array-valued parameters.  The temporary table solution would work on more 
SQL DBMSs but it means there's the separate loading operation.  Either way 
though, the main query only has to be PREPAREd once.
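The Perl side of that encode/decode transport can be sketched as follows (the
SQL in the comment assumes a DBMS with a split function, eg Postgres's
string_to_array; the '|' separator is an arbitrary choice):

```perl
use strict;
use warnings;

# Join the Perl list into one scalar so a single ? placeholder carries it.
my @ids    = ('01161', '01162', '01262');
my $packed = join '|', @ids;

# On a DBMS with a split function, the statement then needs only one
# placeholder, eg (Postgres):
#   SELECT ... WHERE value = ANY (string_to_array(?, '|'))
# and is PREPAREd once, regardless of how many elements are passed:
#   $sth->execute($packed);

# Round-trip check of the transport encoding itself:
my @unpacked = split /\|/, $packed;
print scalar(@unpacked), " elements\n";   # prints: 3 elements
```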


-- Darren Duncan

On 2016-04-05 10:44 AM, Howard, Chris wrote:

Insert "in" values in a table.
Run the query with a sub-select or join against the table.



-Original Message-
From: Paul DuBois [mailto:p...@snake.net]
Sent: Tuesday, April 05, 2016 11:37 AM
To: Vaughan, Mark
Cc: Bruce Ferrell; dbi-users@perl.org
Subject: Re: suppress quoting in prepared sql



On Apr 5, 2016, at 12:29 PM, Vaughan, Mark <mark.vaug...@neustar.biz> wrote:

This works if the number of elements remains static. You'd have to run the 
prepare again if the number of elements changes.


Sure. But that's true no matter how you construct your statement to be prepared.



Mark Vaughan
Neustar, Inc. / Lead Consulting Services Consultant, Professional Services
8532 Concord Center Drive, Englewood, CO 80112, USA
Office: +1.303.802.1308  Fax: +1.303.802.1350  /  mark.vaug...@neustar.biz


-Original Message-
From: Paul DuBois [mailto:p...@snake.net]
Sent: Tuesday, April 05, 2016 11:25 AM
To: Bruce Ferrell <bferr...@baywinds.org>
Cc: dbi-users@perl.org
Subject: Re: suppress quoting in prepared sql



On Apr 5, 2016, at 11:55 AM, Bruce Ferrell <bferr...@baywinds.org> wrote:

Ick!

ok, I have to dynamically build the IN clause of the prepare as a
static sql statement


Yep. This is how I do it for a given array of values:

# Create a string of placeholder characters, with one ? character
# per element in an array of values.

my @values = (1, 2, 3, 4, 5);

my $str = join (",", ("?") x @values);

Then interpolate $str into your query string.



On 4/5/16 9:32 AM, Vaughan, Mark wrote:

From the DBI documentation
(https://metacpan.org/pod/DBI#Placeholders-and-Bind-Values):


Also, placeholders can only represent single scalar values. For example, the 
following statement won't work as expected for more than one value:

"SELECT name, age FROM people WHERE name IN (?)"# wrong
"SELECT name, age FROM people WHERE name IN (?,?)"  # two names

You may have to prepare the query each time unless you have a fixed number of 
elements in the IN clause.

HTH,
Mark Vaughan
Neustar, Inc. / Lead Consulting Services Consultant, Professional
Services
8532 Concord Center Drive, Englewood, CO 80112, USA
Office: +1.303.802.1308  Fax: +1.303.802.1350  /
mark.vaug...@neustar.biz


-Original Message-
From: Bruce Ferrell [mailto:bferr...@baywinds.org]
Sent: Tuesday, April 05, 2016 10:24 AM
To: dbi-users@perl.org
Subject: suppress quoting in prepared sql

I'm generating a sql statement like this:

my $sth = $mysql_dbh->prepare(
"select sum(column) as columnSum from table where value in ( ? ) and
row_date between cast( ? as date) and cast( ? as date) ");

$sth->execute( $ValueIDs, $week_start_date, $week_end_date);

$ValueIDs is a series of unquoted values:

01161,01162,01262,01147,01034,01125,01125,01017,01125,01278,01204,01164

When observed at the mysql server, the sql appears as follows:

select sum(column) as columnSum where value in (
'01161,01162,01262,01147,01034,01125,01125,01017,01125,01278,01204,01164' )
and row_date between cast( '2016-03-29' as date) and cast(
'2016-04-05' as date)

resulting in no data being returned.

When the sql is manually entered as follows:

select sum(column) as columnSum where value in (
01161,01162,01262,01147,01034,01125,01125,01017,01125,01278,01204,01164 )
and row_date between cast( '2016-03-29' as date) and cast(
'2016-04-05' as date)

The correct values are returned.

How can I suppress the quoting for the IN clause?
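The dynamic-placeholder technique shown earlier in the thread, applied to this
query as a self-contained sketch (the table and column names are Bruce's own
placeholders; the prepare/execute calls need a live $mysql_dbh handle, so they
appear only in comments):

```perl
use strict;
use warnings;

my @value_ids = qw(01161 01162 01262 01147 01034 01125 01017 01278 01204 01164);

# One ? per element, so each value is bound individually and never
# collapses into a single quoted string.
my $in = join ',', ('?') x @value_ids;

my $sql = "select sum(column) as columnSum from table "
        . "where value in ( $in ) "
        . "and row_date between cast( ? as date) and cast( ? as date)";

print "$sql\n";
# With a live handle this becomes (not run here):
#   my $sth = $mysql_dbh->prepare($sql);
#   $sth->execute(@value_ids, $week_start_date, $week_end_date);
```

Note the statement must be re-prepared only when the element count changes,
as discussed above.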













Re: (Fwd) DBI Dilemma

2015-08-18 Thread Darren Duncan
Blake, something you need to clarify is what you are querying against, is it the 
SQL database or the .csv file?  Either the .csv thing is a red herring or it 
sounds like you're using DBD::CSV, so which is it?  The solution depends on 
this. -- Darren Duncan


On 2015-08-17 2:42 PM, tim.bu...@pobox.com wrote:

- Forwarded message from Adkins, Blake blake.adk...@intel.com -

Date: Mon, 17 Aug 2015 17:51:41 +
From: Adkins, Blake blake.adk...@intel.com
To: tim.bu...@pobox.com tim.bu...@pobox.com
Subject: DBI Dilemma

Tim,

I've been using your module to enable people in my group to do searches on 
a database that is regularly
backed up as a .csv file. The problem here is with a particular column 
name. Of the 140 columns in the
database, one is named DESC, short for description. This was established 
well before my time at the
company and I believe the name comes from GE who makes the Cimplicity 
product. If I try to do a SELECT
using that column, the script dies, or quietly passes DESC in the column 
header and all the rows. I've
tried to figure out how to get around it without success. Do you have any 
suggestions aside from
renaming the column? (I was thinking along  the lines of escaping the name)

Thanks,

Blake Adkins

- End forwarded message -





Re: (Fwd) DBI Dilemma

2015-08-18 Thread Darren Duncan

Thank you Blake,

So it turns out that a lot of other responders on dbi-users had falsely assumed 
you were using a standard SQL-native DBMS and so a lot of details about eg 
quoting with double-quotes or backticks etc are not necessarily applicable to you.


What actually matters is that your problem is internal to a Perl module itself, 
eg DBD::CSV or a related dependency (shipped with it or with DBI) that DBD::CSV 
uses to parse whatever SQL you're writing.


You need to look at the documentation for what SQL DBD::CSV can handle, or maybe 
it has a bug and you've exposed it, eg that quoting DESC is supposed to work.


One thing I recommend is look for case-sensitivity issues.  In standard SQL, 
quoting an identifier makes it case-sensitive while unquoted is not, so you may 
have to check what case is being compared with.  You may have also found a bug.


If for some reason this problem becomes intractable (it shouldn't be), if you 
can at least read in each CSV file to a Perl array of Perl hashrefs, a typical 
DBI select-all convention, you could try using my Set::Relation::V2 module to do 
SQL-esque munging/queries of those.


-- Darren Duncan

On 2015-08-18 7:21 AM, Adkins, Blake wrote:

Let me do a better job of explaining the situation. We use GE's Cimplicity to monitor and 
control a SCADA system. GE provides an interface to the database but it's very limited. 
The easiest way is to export to a .csv file and operate on that and then import changes. 
My script works on the .csv file using dbi:CSV:f_dir and pointing to different files for 
different projects. The problem comes when I try to include DESC in SELECT or WHERE. I 
have tried 'DESC', DESC and `DESC`. The last two fail at run-time. The first 
one doesn't fail but it returns something like
PT_ID,DESC
PT_1,DESC
PT_2,DESC
...
PT_N,DESC.
I'm using the DBI module in Perl, would this be better with the other modules 
mentioned?

Blake








Re: Perl 6 and DBI

2015-02-04 Thread Darren Duncan

On 2015-02-04 4:28 PM, David E. Wheeler wrote:

On Feb 4, 2015, at 4:23 PM, Greg Sabino Mullane g...@turnstep.com wrote:


As you may have heard, Larry Wall gave a speech recently declaring
the goal of releasing Perl 6 this year, 2015. Honestly, there
is little chance of me using Perl 6 until it has a good, working
DBI.


This goes for me, too, and many, many other people too, I’ve little doubt.


And in that case, there would/should be a discussion about what form this DBI 
for Perl 6 should take.


Personally I think one of the most important design changes DBI should make is 
that DBI becomes an API spec that drivers/engines must conform to in order to be 
certifiably compatible.  Part of the API is that user code can query the 
driver/engine to ask, "do you implement DBI" or "do you implement this version 
of the DBI API", etc.


DBI should not be an actual code layer that applications have to go through in 
order to talk with the driver/engine, as it is now.


However, there can exist Perl 6 roles or other shared libraries that a 
driver/engine may optionally use to help it implement the API rather than 
rolling its own.


Think of the kind of revolution that PSGI brought by working this way, just 
defining an API or protocol to conform to, rather than being a module that 
everything had to use.  DBI should do the same thing.


The actual details of the API / common interface are an orthogonal matter.  The 
key thing I'm looking for is no mandatory shared code between engines/drivers.
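As a hypothetical sketch of what such conformance querying could look like in
Perl 5 terms (the package and method names here are invented for illustration,
not any proposed API):

```perl
use strict;
use warnings;

# A hypothetical engine advertises which API versions it implements;
# there is no shared base class, only an agreed-upon protocol.
{
    package My::Engine;
    sub new { bless {}, shift }
    sub dbi_api_versions { return ('1.0', '1.1') }   # invented method name
}

# User code queries the engine instead of going through a mandatory layer.
my $engine = My::Engine->new;
my $ok = grep { $_ eq '1.1' } $engine->dbi_api_versions;
print $ok ? "engine speaks DBI API 1.1\n" : "unsupported engine\n";
# prints: engine speaks DBI API 1.1
```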


-- Darren Duncan



Re: Perl 6 and DBI

2015-02-04 Thread Darren Duncan

Replying to myself ...

I also believe that what I said, the no-mandatory-shared-code approach, would 
work just as well in any language, including a major new Perl 5 version.


-- Darren Duncan





Re: Perl 6 and DBI

2015-02-04 Thread Darren Duncan
I think it should be discussed, however I suggest dbi_dev is a more appropriate 
place than dbi_users.  I can repost what I said there. -- Darren Duncan


On 2015-02-04 7:52 PM, David Nicol wrote:


Does this mean the floor is open for brainstorming? I'd like to see more
transparent integration, so p6+dbi would be like pl/sql or pro*C or whatever
that language Peoplesoft scripts used to be in that I was working with when I
wrote DBIx::bind_param_inline.

http://perlbuzz.com/2008/12/database-access-in-perl-6-is-coming-along-nicely.html

http://www.mail-archive.com/dbdi-dev@perl.org/maillist.html
doesn't have anything new since 2011.


And in that case, there would/should be a discussion about what form
this DBI
for Perl 6 should take.





Re: Perl 6 and DBI

2015-02-04 Thread Darren Duncan
I have now just copied this thread to dbi-dev in a single post with what 
everyone said so far.  I recommend continuing any discussion about what form 
DBI/etc for Perl 6 should take over there. -- Darren Duncan










Re: Escaping placeholders, take 2

2014-12-28 Thread Darren Duncan
Thank you for this post Tim, it seemed to lay out the issues well and make a lot 
of things clear.


I'm now inclined to support your backslash-escape proposal at the DBI driver 
level in principle.


(I also agree that the doubling method is nasty.)

Or alternately I suggest a variation on that proposal that brings in shades of the 
vendor escape clauses.


I suggest a variant where instead of a single backslash character indicating an 
escape, we have some multi-character thing to indicate it, ideally involving 
delimiters so it is more clear on how far the effect of the new escape feature 
is supposed to go.


For example, using \{?} rather than \? or \{:foo} rather than \:foo.

One benefit of that is if you have some SQL that contains ? or : numerous times, 
you only need to surround the whole fragment with \{} and not individually 
escape each occurrence, making for neater code.


As to dealing with, say, literal { or } in the SQL, as I could see reasonably 
happening, the quoting mechanism could be made more generic like Perl's q// etc, 
so for example it would take the form \X...X where whatever character appears 
after the \ is what is matched, and it could be a multiplicity if necessary 
within reason; eg \{{{...}}} would just work.  For that matter, there are 
heredocs, or the quoted-printable feature you can see in email messages and 
such, eg you have \foo...foo or some such.


I'm speaking in principle here rather than proposing a specific feature set, but 
both Perl's q// and Perl 6's related quoting mechanisms are useful for guidance, 
and I think something involving delimiters is best.


But if some of that sounds unduly complicated, I have a better idea.

I propose that the DBI include an API for users to tell the driver what possible 
escape delimiters they are using.  For example, doable at least at a statement 
level (and optionally on a connection/etc level for defaulting if that makes sense).


The API could involve an 'attr' given when preparing a SQL statement or other 
appropriate places.


  placeholder_escape_delimiters => [ '\{', '}' ]
  placeholder_escape_delimiters => [ '\{', '}', '\[', ']' ]
  placeholder_escape_delimiters => [ '\{{{', '}}}' ]
  placeholder_escape_delimiters => [ '{{{', '}}}' ]

In this way, the backslash is no longer special, or necessary, though I 
anticipate it would often still be used for the mnemonics.


Rather, when the driver is parsing the SQL for placeholders, if it encounters 
any left delimiter string, it will leave the following SQL unaltered until it 
encounters the corresponding right delimiter string, and then it looks for 
placeholders again.
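As a rough sketch of what that driver-side scanning could look like (this is hypothetical illustration only, not DBI or DBD code; the delimiter pair and the `find_placeholders` name are my inventions):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: count ? placeholders in a SQL string while
# leaving any text between user-supplied escape delimiters untouched.
sub find_placeholders {
    my ($sql, $left, $right) = @_;
    my $count = 0;
    my $pos   = 0;
    while ($pos < length $sql) {
        my $l = index $sql, $left, $pos;
        # Count placeholders in the region before the next left delimiter
        # (or to the end of the string if there is none).
        my $plain = substr $sql, $pos, ($l == -1 ? length $sql : $l) - $pos;
        $count += () = $plain =~ /\?/g;
        last if $l == -1;
        my $r = index $sql, $right, $l + length $left;
        die "unbalanced escape delimiters" if $r == -1;
        $pos = $r + length $right;    # skip the escaped region entirely
    }
    return $count;
}

# A ? inside \{...} is not treated as a placeholder, so this prints 1:
print find_placeholders(
    q[SELECT * FROM t WHERE a = ? AND b \{?} 'x'], '\{', '}'
), "\n";
```

A real driver would of course also have to respect quoted string literals and comments; the point here is only the skip-between-delimiters control flow.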


(As to numbered placeholders, which are effectively a special case of named 
placeholders, not being directly composable in SQL::Abstract, I see that as 
being a problem itself.  It would be a great help to developers in principle if 
the native way for working with parameters was named rather than positionally. 
However, that is really a separate matter to deal with and I think it is a good 
idea for Tim's proposal in some form to happen regardless of dealing with this 
separate matter.)


-- Darren Duncan



Re: Escaping placeholders

2014-12-21 Thread Darren Duncan

I agree with Greg's counter-proposal, from which I derive my own words here.

1.  I propose that there be no modification to the DBI spec related to new 
escaping whether optional or not, so leave things the way they are here, SQL 
continues to be interpreted the way it long has by default.


2.  When users want to use operators in PostgreSQL that contain literal ? in 
their names, then they enable DBD::Pg's pg_placeholder_dollaronly so that ? are 
no longer treated as placeholders.  Likewise, pg_placeholder_nocolons can be 
enabled when they don't want literal : to indicate a placeholder either.  Users 
would either do this directly if they're using DBI/DBD::Pg directly, or 
indirectly using their over-top framework of choice.  When users aren't using 
the ? operators et al, they can leave things alone which will work as normal.
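For instance, a minimal sketch of that switch in use (this assumes DBD::Pg and a reachable server; the `pages` table, its hstore column, and the credentials are illustrative, not from the original post):

```perl
use strict;
use warnings;
use DBI;

my ($user, $pass) = ('someuser', 'somepass');    # placeholder credentials

my $dbh = DBI->connect('dbi:Pg:dbname=test', $user, $pass,
                       { RaiseError => 1 });

# Tell DBD::Pg to treat only $1-style markers as placeholders, so the
# literal ? below passes through to PostgreSQL as the hstore operator.
$dbh->{pg_placeholder_dollaronly} = 1;

my $sth = $dbh->prepare(
    q{SELECT id FROM pages WHERE attrs ? $1}     # hstore "contains key"
);
$sth->execute('author');
```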


3.  SQL::Abstract et al, those tools external to DBI/DBDs, are the correct and 
proper places to modify where users of said want to use the operators with ? 
names and such.  These tools already have special knowledge of individual DBMS 
systems to work with them effectively, and the ? operators is just one more of 
those things.  The users of said tools may have to flip a configuration switch 
possibly so $1 etc are used behind the scenes, if necessary, but that's okay 
because the use of ? operators only happens when the users choose to make a 
change to use them anyway.


In summary, now is not the time or place to be introducing backslashing, doubled 
or otherwise, in DBI such as discussed; that's a poor solution, and it's better 
to save such risky changes for when there's a more compelling case for them.


I should also mention I feel it is perfectly reasonable for each DBMS to have 
operators composed of any characters they want where doing so makes sense within 
the context of the SQL/etc itself.  See also that Perl itself has both ? and : 
and so on as operator names, Perl 6 even more so, and I don't see anyone thinking 
that's a bad idea.  So I have no problem with PostgreSQL having ? in operator 
names such as it did.  It's not like the SQL standard reserves ? or whatever for 
prepared statement parameters; that's defined to be implementation dependent I 
believe (WD 9075-2:200w(E) 20.6 prepare statement).


-- Darren Duncan

On 2014-12-21 7:17 AM, Greg Sabino Mullane (the tenth man) wrote:

Tim Bunce wrote:


For code not using DBIx::Class the pg_placeholder_dollaronly attribute
might work, see https://metacpan.org/pod/DBD::Pg#Placeholders


Yes, this is the current canonical solution. It's a real shame
that ? was used as an operator, but that horse has left the barn.


For code using DBIx::Class the problem is more tricky. I'm pretty sure
that SQL::Abstract and thus DBIx::Class only support question mark
placeholders. That means it probably impossible to use expressions
containing a question mark operator with SQL::Abstract/DBIx::Class.
(Though I'd be delighted to be proven wrong.)

So I think the DBI spec for placeholders needs to be extended to allow a
way to 'escape' a question mark that the driver would otherwise treat as
a placeholder.

The obvious and natural approach would be to use a backslash before a
question mark. The backslash would be removed by the driver before the
statement is passed to the backend.


I'm going to play devil's advocate a bit here. There are some problems with
this approach. First, it will require that the user know if the underlying
DBD supports backslashes. Which likely means that SQL::Abstract and/or
DBIx::Class will need to know as well. (Unless they expose the DBD directly
to the user, which ruins the point a bit). Since we will thus need to patch
those other modules, so why not fix them to do the right thing? (see below
for a counter proposal).

Another problem is that we have now extended the SQL syntax for our own
purposes. While one could argue that placeholders already do so, their
current use is consistent, widespread (e.g. not just DBI), and in part
used by the underlying RDBMSs themselves (e.g. Postgres uses dollar-number
placeholders). So we will have both escaped and unescaped versions of SQL
floating around, subject to the whims of whether or not your particular
DBD supports it (and in which version). All of which seems like an awful
lot of work to fix SQL::Abstract. Why bother patching every DBD in
existence when we can simply patch SQL::Abstract?

Which leads to my counter-proposal: have SQL::Abstract accept dollar-number
placeholders. It can pass pg_placeholder_dollaronly down the stack as
needed. This neatly puts the onus back onto the frameworks, rather than
having the DBDs selectively remove backslashes before passing to the
RDBMS (ick). DBIx::Class and friends could even map dollar signs back to
a format supported by the underlying DBDs, if they don't support dollar
signs (that is one of their grand purposes after all - abstracting out
details and taking care of things in the background).




Re: Table-Valued Parameters to MS SQL Server stored procedure

2013-12-30 Thread Darren Duncan

On 2013.12.30 5:23 AM, Jürgen Wenzel wrote:

Using temporary tables, or any other database workaround, isn't an option for
me, so unless there's a way to use TVPs with stored procedures I'll probably
have to abandon Perl (which, except for being extremely sad for me and very
satisfying for the non-Perl believers, will be extremely time consuming).


So do you not have any influence on things stored in the database?  How many 
stored procedures are we talking about with TVPs?


While a kludge, one thing you could try is having the SQL the Perl invokes not 
just invoke the stored procedure normally but also inline a TABLE value literal; 
eg, I think rather than calling foo(?) you could say foo(VALUES 
(?,?),(?,?),...).  Though this wouldn't scale very well, it may handle your 
thousands of rows.
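A sketch of building such an inline literal (the procedure name `foo` and the two-column row shape are hypothetical, and whether your server accepts a bare VALUES list in place of a TVP argument is exactly the uncertain part):

```perl
use strict;
use warnings;

# Build a VALUES literal with one (?,?) group per row, then flatten
# the rows into a single bind list.
my @rows   = ( [1, 'a'], [2, 'b'], [3, 'c'] );
my $values = join ',', ('(?,?)') x @rows;
my $sql    = "EXEC foo (VALUES $values)";    # hypothetical procedure
my @binds  = map { @$_ } @rows;

# $dbh->do($sql, undef, @binds);   # commented out: needs a live handle
print "$sql\n";    # EXEC foo (VALUES (?,?),(?,?),(?,?))
```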


Does your TVP stored procedure work piecemeal?  Eg, if you were normally calling 
it with a certain 100 rows, could you call it 5 times with 20 rows each and the 
proper behavior would happen?  Or are there constraints such that it won't work 
unless the 100 are provided all at once?


If you can piecemeal, that should simplify things for you, at the Perl end you 
can call it in a loop with say 1 row at a time with a 1-row TABLE literal.
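If piecemeal calls are acceptable, the loop itself is straightforward (again a sketch: `$dbh`, `@rows`, and the procedure `foo` with a one-row TABLE literal are assumed, not taken from the original post):

```perl
# Sketch: invoke the procedure once per row rather than once per batch,
# reusing a single prepared statement with a one-row TABLE literal.
my $sth = $dbh->prepare('EXEC foo (VALUES (?,?))');    # hypothetical
for my $row (@rows) {
    $sth->execute(@$row);    # one row's worth of columns per call
}
```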


-- Darren Duncan



Re: Table-Valued Parameters to MS SQL Server stored procedure

2013-12-29 Thread Darren Duncan
Something I'm wondering, in the general case, are the table values being passed 
on these parameters large or small?  For example, is it a list of line items for 
an invoice (small) or could it say be millions of records in one call?  I say 
this regardless of whether these values are being passed in or out or both, 
though you can answer that too.


If the tables are large then we probably want the implementation such that the 
client-side bind variable on that parameter would be like a cursor handle so you 
can pull or push rows one at a time; if the tables are small than an 
implementation that behind the scenes serializes the table into a scalar value 
for transit may do the job.


As to the actual answer to your question, what support exists now, I don't know; 
I'm just saying how it could be made to work.


Note that an ordinary SELECT or INSERT statement could be generalized as a 
procedure with a table-valued OUT or IN parameter respectively, and so an 
appropriate DBI design could handle things as if a SELECT/INSERT were just 
special cases of the procedure.  DBI's ? bind params and stored procedure params 
would be direct analogies to each other, or say that the former is an example of 
the latter.


-- Darren Duncan

On 2013.12.29 2:20 PM, Jürgen Wenzel wrote:

Hello,

I'm adapting a Windows perl program to work with a new MS SQL Server database
and it involves making calls to stored procedures that takes Table-Valued
Parameters. Is this possible with DBD::ODBC or any other DBI? Or at all with
(strawberry) perl?

Would really appreciate some help -- answers and perhaps even a short example --
since trying to figure this out is really starting to wear on me.

Thanks in advance!

JW





Re: cross database queries?

2013-06-26 Thread Darren Duncan

On 2013.06.26 12:47 PM, John R Pierce wrote:

On 6/26/2013 12:29 PM, Andrew Snyder wrote:

Thank you all for your replies.  I am indeed talking about working with
multiple back ends (oracle, mysql, or any valid DBI module) in the same query.


That's never going to happen.  DBI is not a SQL processor, it's just an
interface; the SQL is passed directly to the database server you're connected to


Never say never!  I'm working on a Perl-native engine right now, and it will not 
only be functionally complete internally but will also have the option to federate 
to multiple back-ends, so then you can do what you want.



some databases have support for 'foreign data wrappers', where you can
explicitly setup a connection to another database then use tables on that
foreign database, so if you connected to a DB like this and got the FDW stuff
all working, you could maybe achieve what you want, but it's almost never optimal
as the planner for the 'primary' database has no way to optimize JOIN operations
etc involving other databases.


In the short term that might be a good bet though, try using an existing SQL 
database's FDW capabilities.


-- Darren Duncan



Re: Creating new database handles with mod_perl

2012-05-19 Thread Darren Duncan
Instead of using Apache::DBI directly, I recommend using the CPAN module 
http://search.cpan.org/dist/DBIx-Connector/ instead, which is focused on 
handling your use case effectively. -- Darren Duncan
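A minimal sketch of the DBIx::Connector pattern (the DSN, credentials, and table name are placeholders); its fixup mode verifies or re-establishes the connection per process, which addresses the forked-children failure mode described in the quoted message:

```perl
use strict;
use warnings;
use DBIx::Connector;

my $conn = DBIx::Connector->new(
    'dbi:mysql:database=app', 'user', 'password',    # placeholder DSN
    { RaiseError => 1, AutoCommit => 1 },
);

# run() hands the callback a handle (in $_) that is checked, and
# remade if dead, for the current process, so forked mod_perl
# children never silently share one connection.
my $rows = $conn->run(fixup => sub {
    $_->selectall_arrayref('SELECT id, name FROM widgets');
});
```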


Schultz, Len wrote:

I've run into an issue when stress testing mod_perl that the database
connections are going away. I suspect that processes are sharing
database connections, causing the issue.

But I've followed all instructions for Apache::DBI, and can't figure
this out.

I'm making the connections in the child process and not in startup.pl.
But when I examine the $dbh returned by each child from the
DBI-connnect, the address is the same for every httpd process. Firstly,
if this is working properly and reconnecting for each process, should
the address returned by DBI-connect be different for each child
process? I've assumed so, but as far as I can tell the core C code in
DBI (dbih_setup_handle) is managing this and is returning the same
address. So maybe I'm not understanding what it means to reconnect in
the child.

Am I reconnecting properly if the $dbh handles are the same?


Re: Maintainers of uwinnipeg repository?

2011-05-01 Thread Darren Duncan

Praveen wrote:

Does anybody now who is the current maintainer of the winnipeg
university repository ( http://theoryx5.uwinnipeg.ca/ppms/ )? Randy
Kobes is the current maintainer. But seems like Randy is no longer the
maintainer, I have tried sending mails to get the latest DBD::DB2
driver (1.80) for windows, but have received no response.


Randy Kobes passed away last September I believe, so if he is still listed 
somewhere as a current maintainer then that is a mistake. -- Darren Duncan


Re: anti-join with SQL::Statement?

2010-10-05 Thread Darren Duncan

Reinier Post wrote:

2 in one, 4 in all.
Removing them produces another error during 'make test':

  t/Set_Relation_51_Database_in_Depth_Example.t .. 1/? # is_identical
  # restriction
  # projection
  Argument city isn't numeric in numeric ne (!=) at 
/var/cache/cpan/build/Set-Relation-0.12.7-8CX6n6/blib/lib/Set/Relation/V1.pm line 1844


The offending line is this, abbreviated:

confess qq{$rtn_nm(): Bad $arg_nm arg;}
. q{ it specifies a list of}
. q{ attr names with at least one duplicated name.}
if (uniq @{$atnms}) != @{$atnms};

What is supposed to happen here is that the presence of the scalar operator != 
will put both of its arguments into scalar context and so it should be comparing 
the count of elements of the lists on each side.


Your Perl seems to be behaving strangely in that it is instead flattening the 
lists and attempting a comparison on individual list elements.


That would suggest a bug in your Perl or some other strange thing about your 
system.

At least CPAN Testers is showing 127 passes and zero fails for Set::Relation 
0.12.7, for many Perls from 5.8.x, 5.10.x, 5.12.x; only one of those is Cygwin, 
and it is Perl 5.8.8, which passes.


Maybe as an experiment you could try putting the word "scalar" before both 
arguments on that line and see if the tests get further.


General question:  Is Perl supposed to take the arguments on both sides of a != 
in scalar context or was I relying on undocumented behavior that just happened 
to be the case everywhere else?  I thought it was documented actually.



Perl 5.10.0 is also known to have some significant bugs in it; I
would use a newer version anyway if you can, 5.12.2 being the best.


Thanks.  I'll try that when I have time (it takes a lot of time to
recompile the CPAN modules and some don't take 'yes' for an answer).


The nature of your seemingly unique problems with my module suggests you would 
have problems with various other CPAN modules too.


Since you're using Windows, I'd suggest trying Strawberry Perl and see if that 
works.  It is a Windows-savvy Perl distro designed to work just like Unix Perl.



I've always been puzzled by the description of relational algebra
as an implementation language for relational calculus or SQL.
I mostly think in algebra, it feels more natural to me.


Well relational algebra and calculus are actually all defined in maths and 
logics etc, and SQL is a bastardized quasi-implementation of them, so people who 
think SQL equals relational (it doesn't) would have a rather distorted 
impression of the latter.  I can understand some of what you're thinking.


Set::Relation is part of a wider body of work that I'm making to provide the 
actual relational model to people that they can use in everyday work.


I invite you to contact me privately to further discuss these matters.

-- Darren Duncan


Re: anti-join with SQL::Statement?

2010-10-04 Thread Darren Duncan

Reinier Post wrote:

  Dear fellow DBI users,

Recently I discovered (by asking on #perl, as usual) that SQL::Statement
supports joins.  I like this, because it allows me to compute the
relational join of two CSV tables and store the result as a new CSV table.

Actually, I'd like to apply *all* relational algebra operations.
So the next thing I tried was the anti-join (see

  http://en.wikipedia.org/wiki/Relational_algebra#Antijoin

) and here I failed: it does not appear to be supported.
SQL offers various ways to express it, but none of them I can think
of are supported in SQL::Statement:

 - no EXCEPT (or UNION, for that matter)
 - no NOT IN (or nested selects in general)
 - no functions as column specifiers (to be exact: only in the parser)

Although I developed a workaround (and #perl told me it was probably
the best I could do) this left me with two questions:

 + Have I overlooked a way of expressing the anti-join in SQL::Statement?

 + Is there any interest (besides my own) in adding such functionality
   to SQL::Statement if it isn't there?


I have an alternative solution to suggest:

You can use my Set::Relation module on CPAN.

  http://search.cpan.org/dist/Set-Relation/

That module will directly give you all of the relational algebra directly and in 
an easy to use form.


It just provides in memory objects so you'd use it together with your CSV module 
like this:


1.  Populate 2 lexical variables $rs1, $rs2 to contain your rowsets as DBI would 
return them, as an array of hashes say; each one the contents of your CSV files.


2.  Here's some example code that assumes the columns you match on have the same 
names and all other columns have unique names:


use Set::Relation::V2;
sub relation { return Set::Relation::V2->new( @_ ); }

my $r1 = relation( $rs1 );
my $r2 = relation( $rs2 );

my $r3 = $r1->antijoin( $r2 );

my $rs3 = $r3->members();

3.  Then $rs3 has the result rowsets in array-of-hashes form you can then use 
your CSV module to write a file with.


If your source files don't have appropriately named columns for antijoin() to do 
the right thing then use a rename() on either or both arguments first.


This module should work and has been tested to some degree, but it is officially 
alpha quality so if something doesn't work then let me know and I'll fix it.


Limitations are that all your rowsets have to fit in memory at once and it would 
probably be slower than a native SQL::Statement solution.  But for batch tasks 
or quick-get-it-done tasks that shouldn't be a problem.  Requires Perl 5.8+.


-- Darren Duncan


Re: anti-join with SQL::Statement?

2010-10-04 Thread Darren Duncan
Reinier, I don't know if this was your workaround, but here's another solution 
that just uses SQL::Statement ...


Ludwig, Michael wrote:

So the next thing I tried was the anti-join (see

  http://en.wikipedia.org/wiki/Relational_algebra#Antijoin


First time I hear of this antijoin.

The example given on the Wiki page doesn't make much sense
to me. Isn't it simply an OUTER JOIN where you're looking
for NULL on the right-hand side?

SELECT E.Name, D.DeptName
FROM Employee AS E
LEFT JOIN Dept AS D ON E.DeptName = D.DeptName
WHERE D.DeptName IS NULL


An antijoin, aka anti-semijoin, aka semidifference will filter one rowset 
to retain just the rows that don't have matching rows in the other one.  The 
closest SQL analogy is


SELECT ... FROM source WHERE ... NOT IN (SELECT ... FROM filter)

But yes, Michael's solution is basically correct, and here's a closer variant of 
what antijoin does:


  SELECT E.*
  FROM Employee AS E
    LEFT JOIN Dept AS D USING (DeptName)
  WHERE D.DeptName IS NULL

The key difference here is that the result of an antijoin is all of the columns 
of the source table and only those.


Reinier, you're probably better off to just use this SQL solution if you can.

Set::Relation is superfluous for this task actually.

-- Darren Duncan


Re: anti-join with SQL::Statement?

2010-10-04 Thread Darren Duncan

Reinier Post wrote:

  SELECT E.*
  FROM Employee AS E
    LEFT JOIN Dept AS D USING (DeptName)
  WHERE D.DeptName IS NULL


Thanks, I didn't know about USING.  Works fine, too:


That's good to hear.


Set::Relation is superfluous for this task actually.


I'd like to try it anyway,
but it won't install on my (Cygwin 1.7) system:

#   Failed test 'use Set::Relation;'
#   at t/Set_Relation_00_Compile.t line 9.
# Tried to use 'Set::Relation'.
# Error:  Can't use an undefined value as a HASH reference at
# /usr/lib/perl5/site_perl/5.10/namespace/autoclean.pm


That looks more like a namespace::autoclean problem, actually.  Are you using 
the latest version?


Strictly speaking, you could remove all references to namespace::autoclean and 
Set::Relation would still work, I believe; namespace::autoclean is more of a 
housekeeping utility.  There's just a line each in 3 files I believe.


Perl 5.10.0 is also known to have some significant bugs in it; I would use a 
newer version anyway if you can, 5.12.2 being the best.


But try removing the "use namespace::autoclean;" lines from the source and see 
if the problem goes away, if updating to the newest version first doesn't work.


-- Darren Duncan


test PostgreSQL 9.0 rc1

2010-08-31 Thread Darren Duncan

All,

The next major release of Postgres is imminent, 9.0, but first a release 
candidate has come out today, and anyone who can should download and test it. 
Details in the forwarded email below.


(Also, coincidentally, a release candidate for Perl 5.12.2 has also come out 
today on CPAN and needs testing, this being a minor update.)


-- Darren Duncan

 Original Message 
Subject: [ANNOUNCE] PostgreSQL 9.0 Release Candidate 1
Date: Tue, 31 Aug 2010 09:05:01 -0700
From: Josh Berkus j...@postgresql.org
To: pgsql-annou...@postgresql.org

The first release candidate for PostgreSQL 9.0 is now available. Please
download and test immediately so that we can move rapidly towards final
release. All known bugs should be fixed, so users should promptly report
any bugs which they find.

Note that, due to a system catalog change, an initdb and database reload
will be required for upgrading from 9.0 beta versions. We encourage
users to use this opportunity to test pg_upgrade. Please report your
results.

If you are able to help with testing, please see the testing page:
http://wiki.postgresql.org/wiki/HowToBetaTest

No changes in commands, interfaces or APIs are expected between this
release candidate and the final version. Applications which will deploy
on 9.0 can and should test against 9.0rc1. Depending on bug reports,
there may or may not be more release candidates before the final release.

Source code, as well as binary installers for many platforms, is
available from the PostgreSQL Web Site:
* Source:
  http://www.postgresql.org/ftp/source/v9.0rc1
* One-Click Installer, including Win64 binaries:
  http://www.enterprisedb.com/products/pgdownload.do
* Binaries for other platforms:
  http://www.postgresql.org/ftp/binary/v9.0rc1
* Release Notes:
  http://developer.postgresql.org/pgdocs/postgres/release-9-0.html
* Participating in Testing:
  http://www.postgresql.org/developer/beta

---(end of broadcast)---
-To unsubscribe from this list, send an email to:

  pgsql-announce-unsubscr...@postgresql.org



Re: How to use perl dbi to create a database

2010-06-29 Thread Darren Duncan

Owen wrote:

On Mon, 28 Jun 2010 04:51:45 -0400
Sharma, Sumit sumit.sha...@aeroflex.com wrote:


I am trying to use Perl to identify if a given database exists or not
and if it doesn't create the database and then connect to it.

Is there any way using Perl DBI to first identify whether a given
database exists or not if it doesn't create the database?


Well I am not sure what you are trying to do, but lets say your
database has the name my_data.db

Check that my_data.db exists

   if (-e "my_data.db") { connect } else { connect and create }

See how you go from there


Creating a database is one of the tasks that varies greatly depending on what 
DBMS you are using, and the SQL standard also punts and leaves it to the 
implementations.


If your DBMS is SQLite, then your database is a file and you can use your 
standard -e file test to see if it exists; if it doesn't, then simply connect() 
using DBI with the file name as per an existing one, and SQLite defaults to 
creating said nonexisting database.


If your DBMS is Postgres, what you do depends on whether or not there already 
exists a database cluster and associated server to connect to.  If there is, 
then you can connect() to it using DBI as the 'postgres' user (or a default 
user) and then use a SQL command to test the existence of or create a database 
in the cluster.  If there is a cluster but not a server, you could invoke the 
'postgres' program to start a server to use that cluster as its data, then 
procede to the previous sentence.  Or if there is no cluster, then you could 
invoke the 'initdb' program to create one, then start the server, and connect, 
etc.  All of this you can do within Perl of course.
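A sketch of the connect-then-test-then-create step for Postgres (assuming a running server, DBD::Pg, and sufficient privileges; the database name and password are placeholders):

```perl
use strict;
use warnings;
use DBI;

my $pass = 'secret';    # placeholder credential

# Connect to the always-present maintenance database as 'postgres'.
my $dbh = DBI->connect('dbi:Pg:dbname=postgres', 'postgres', $pass,
                       { RaiseError => 1, AutoCommit => 1 });

# Test for the target database in the system catalog.
my ($exists) = $dbh->selectrow_array(
    'SELECT 1 FROM pg_database WHERE datname = ?', undef, 'myapp');

# CREATE DATABASE cannot run inside a transaction block, hence
# AutoCommit above; the name cannot be a placeholder, so it is literal.
$dbh->do('CREATE DATABASE myapp') unless $exists;
```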


If your DBMS is something else, then I don't know the answer offhand.

-- Darren Duncan


ANNOUNCE - Muldis D version 0.129.1

2010-05-19 Thread Darren Duncan
 commercial support.

And http://github.com/muldis/ is its main public GIT version control repository.

Thank you in advance for any interest in or support for this project that you
can give.  Any kind of support is welcome, from trying to update your own
projects to use Muldis D, or from contributing your time and expertise to
improving Muldis D or its implementations or ancillary projects, or promoting
this project in various media and forums.  Support is welcome in providing
significant financial sponsorship towards my further work, in which case you
have more of a say in its direction and priorities.  But mainly I want to see it
get used to enrich projects and their users and developers.

This project and ancillary projects are a serious endeavor that I intend to
commercially support over the long term, and others can do likewise.

Good day. -- Darren Duncan




Re: spammer on dbi-users

2010-03-25 Thread Darren Duncan
Another option is to moderate the list, maybe by putting all new subscribers on 
moderation by default and then taking them off after their first post, or 
something.  I seem to recall that many other Perl lists are moderated, but that 
it is done so well you hardly notice.  I've basically never seen a spam on a 
Perl list save for this handful. -- Darren Duncan


Neil Beddoe wrote:

They'll only use another address if you do.  There are a lot of these 
originating from Hotmail accounts at the moment.

-Original Message-
From: jeff [mailto:j...@roqc.no] 
Sent: 25 March 2010 16:57

To: dbi-users@perl.org
Subject: spammer on dbi-users

Who's ever the dbi-users group monitor, please shut this annoying person
off:

Kenneth Webber mp34...@hotmail.com




Re: Which SQLite DBD To Use

2010-03-18 Thread Darren Duncan

Will Rutherdale (rutherw) wrote:

-Original Message-
From: Peter J. Holzer [mailto:h...@wsr.ac.at] 
On 2010-03-16 17:14:32 -0500, Will Rutherdale (rutherw) wrote:

Is DBD-SQLite the right one to get?

Yes.

No. The driver includes an SQLite version and will ignore any other
version which may already be installed on the system.


Interesting.  In embedded systems there is often a lot of fighting over
flash space, and if people are using SQLite then it has already been
installed somewhere.

Is there any way to get the driver to use the existing SQLite and to
avoid the cost of installing the extra one that comes with DBD-SQLite?


Yes, you can.

Look at the source of the Makefile.PL, 
http://cpansearch.perl.org/src/ADAMK/DBD-SQLite-1.30_01/Makefile.PL , for 
information on how to do that.


The ability to use a system-installed SQLite is present, but disabled by default 
for casual users, because you are more likely to run into problems due to 
version incompatibilities or other build-system complexities.


You enable it by setting the USE_LOCAL_SQLITE and optionally the SQLITE_LOCATION 
environment variables.  Moreover, you have to change the indicated if(0) to 
if(1) in the Makefile.PL.


-- Darren Duncan


ANN - DBD::SQLite 1.27 - test it!

2009-11-23 Thread Darren Duncan

All,

I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a Perl DBI
Driver) version 1.27 has been released on CPAN (by Adam Kennedy).

  http://search.cpan.org/~adamk/DBD-SQLite-1.27/

This release is the newest one intended for production use and has no known 
serious bugs.  The previous version for production was 1.25, which was released 
on 2009 April 23.


There were many improvements and changes between these 2 versions, and many bugs 
fixed; see http://cpansearch.perl.org/src/ADAMK/DBD-SQLite-1.27/Changes for a 
complete list.


Just a small number of these are, since 1.25:

- The bundled SQLite version is now 3.6.20, up from 3.6.13 (both were 
the Amalgamation).


- Foreign key constraints are now supported and enforceable by SQLite. 
However, to aid backwards compatibility and give you a transition period to 
ensure your applications work with them, this feature is not enabled by default. 
 You enable (or disable) foreign key enforcement by issuing a pragma.


- Read the Changes file linked above, especially the sections that say 
changes which may possibly break your old applications.


As usual, testing of this release is appreciated and recommended.

If you use referential stuff in your schema (which SQLite ignores by default 
now), you should do extensive testing to ensure that it will work when you issue 
PRAGMA foreign_keys = ON.  It is anticipated that foreign keys will be enabled 
by default within 1 or 2 production releases, and you will have to cope with it.
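A sketch of that opt-in and a quick check of its effect (assuming DBD::SQLite 1.27+; the file name is illustrative):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=test.db', '', '',
                       { RaiseError => 1 });

# Opt in to foreign key enforcement for this connection, then verify.
$dbh->do('PRAGMA foreign_keys = ON');
my ($on) = $dbh->selectrow_array('PRAGMA foreign_keys');

# $on should now be true; schemas with invalid references will start
# failing on INSERT/UPDATE/DELETE once this pragma is enabled.
```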


If you want in to DBD::SQLite development, then join the following email/IRC
forums which MST created (the mailing list, I am administrating):

  http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite

  #dbd-sqlite on irc.perl.org

And the canonical version control is at:

  http://svn.ali.as/cpan/trunk/DBD-SQLite/

Patches welcome.  Ideas welcome.  Testing welcome.  Whining is also welcome!

If you feel that a bug you find is in SQLite itself rather than the Perl DBI
driver for it, the main users email forum for SQLite in general is at:

  http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

... where you can report it as an appropriate list post (the SQLite issue
tracking system is no longer updateable by the public; posting in the list can
cause an update there by a registered SQLite developer).

Please do not reply to me directly with your responses.  Instead send them to
the forums or file with RT as is appropriate.

Thank you. -- Darren Duncan



Re: DBI for Perl 5.6.1

2009-11-05 Thread Darren Duncan

Patrick Mischler wrote:

Which version of DBI (and DBD::mysql) does work with Perl 5.6.1?


Look in the Changes files for each of those modules; it would say when a higher 
Perl started being required.  For example, DBI 1.607 was the first version to 
require Perl 5.8.1, so DBI 1.605 would probably support Perl 5.6. -- Darren Duncan


test DBD::SQLite 1.26_06 please

2009-10-28 Thread Darren Duncan

All,

I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a Perl DBI
Driver) version 1.26_06 has been released on CPAN (by Adam Kennedy).

  http://search.cpan.org/~adamk/DBD-SQLite-1.26_06/

TESTING NEEDED!

Please bash the hell out of the latest DBD::SQLite and report any outstanding
bugs on RT.  Test your dependent or compatible projects with it, which includes
any DBMS-wrapping or object persistence modules, and applications.

This developer release includes both several changes which *might break your 
applications* if not accounted for, and it has a lot of code refactoring.


This release should also fix the known problem with full-text search (FTS3) that 
was reported in the 1.26_05 release but had existed in many prior versions; the 
included test for that problem now passes.


From the Changes file:

*** CHANGES THAT MAY POSSIBLY BREAK YOUR OLD APPLICATIONS ***
- Removed undocumented (and most probably unused) reset method
  from a statement handle (which was only accessible via func().)
  Simply use $sth->finish instead. (ISHIGAKI)
- Now DBD::SQLite supports foreign key constraints by default.
  Long-ignored foreign keys (typically written for other DB
  engines) will start working. If you don't want this feature,
  issue a pragma to disable foreign keys. (ISHIGAKI)
- Renamed unicode attribute to sqlite_unicode for integrity.
  Old unicode attribute is still accessible but will be
  deprecated in the near future. (ISHIGAKI)

- You can see file/line info while tracing even if you compile
  with a non-gcc compiler. (ISHIGAKI)
- Major code refactoring. (ISHIGAKI)
- Pod reorganized, and some of the missing bits (including
  pragma) are added. (ISHIGAKI)

The bundled SQLite version (3.6.19) is unchanged from last time.
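For example, a sketch of opting in to the renamed attribute at connect time (assuming this 1.26_06 release or later):

```perl
use strict;
use warnings;
use DBI;

# The bare "unicode" attribute still works but is slated for deprecation;
# new code should pass sqlite_unicode instead.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '', {
    RaiseError     => 1,
    sqlite_unicode => 1,   # decode TEXT columns into Perl character strings
});

$dbh->do('CREATE TABLE t (s TEXT)');
$dbh->do('INSERT INTO t (s) VALUES (?)', undef, "caf\x{e9}");

my ($s) = $dbh->selectrow_array('SELECT s FROM t');
print length($s), "\n";   # 4 characters (not 5 bytes)
```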

If you want in to DBD::SQLite development, then join the following email/IRC
forums which MST created (the mailing list, I am administrating):

  http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite

  #dbd-sqlite on irc.perl.org

And the canonical version control is at:

  http://svn.ali.as/cpan/trunk/DBD-SQLite/

Patches welcome.  Ideas welcome.  Testing welcome.  Whining is also welcome!

If you feel that a bug you find is in SQLite itself rather than the Perl DBI
driver for it, the main users email forum for SQLite in general is at:

  http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

... where you can report it as an appropriate list post (the SQLite issue
tracking system is no longer updateable by the public; posting in the list can
cause an update there by a registered SQLite developer).

Please do not reply to me directly with your responses.  Instead send them to
the forums or file with RT as is appropriate.

Thank you. -- Darren Duncan



test DBD::SQLite 1.26_05 - foreign keys!

2009-10-15 Thread Darren Duncan

All,

I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a Perl DBI 
Driver) version 1.26_05 has been released on CPAN (by Adam Kennedy).


  http://search.cpan.org/~adamk/DBD-SQLite-1.26_05/

This developer release bundles the brand-new SQLite version 3.6.19, which adds 
support for enforcing SQL foreign keys.  See http://sqlite.org/foreignkeys.html 
for the details of the foreign key support that SQLite now has.


Also be sure to look at the section http://sqlite.org/foreignkeys.html#fk_enable 
, because you have to enable a pragma on each connect to use the foreign keys 
feature; it isn't yet on by default for backwards compatibility purposes.


As I imagine many of you have been pining away for SQLite to support this 
feature for a long while, you'll want to dig in right away.


TESTING NEEDED!

Please bash the hell out of the latest DBD::SQLite and report any outstanding
bugs on RT.  Test your dependent or compatible projects with it, which includes
any DBMS-wrapping or object persistence modules, and applications.

And especially try actually using foreign keys with SQLite.

As the official release announcement says:  This release has been extensively 
tested (we still have 100% branch test coverage).  [The SQLite developers] 
consider this release to be production ready.  Nevertheless, testing can only 
prove the presence of bugs, not their absence.  So if you encounter problems, 
please let us know.


See also http://www.sqlite.org/changes.html for a list of everything else that 
changed in SQLite itself over the last few months.


If you want in to DBD::SQLite development, then join the following email/IRC
forums which MST created (the mailing list, I am administrating):

  http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite

  #dbd-sqlite on irc.perl.org

And the canonical version control is at:

  http://svn.ali.as/cpan/trunk/DBD-SQLite/

Patches welcome.  Ideas welcome.  Testing welcome.  Whining to /dev/null.

If you feel that a bug you find is in SQLite itself rather than the Perl DBI 
driver for it, the main users email forum for SQLite in general is at:


  http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

... where you can report it as an appropriate list post (the SQLite issue 
tracking system is no longer updateable by the public; posting in the list can 
cause an update there by a registered SQLite developer).


Please do not reply to me directly with your responses.  Instead send them to
the forums or file with RT as is appropriate.

Thank you. -- Darren Duncan

P.S.  DBD::SQLite has at least 1 known bug, also in version 1.25, with regard to 
full-text search (FTS3); there is an included new failing test, which currently 
is set to skip so the CPAN testers don't issue fails, but the issue behind it 
should hopefully be fixed before the next DBD::SQLite release.  We decided that 
shipping DBD::SQLite now with the skipping test was preferable to waiting for 
that fix so you could get the new foreign keys feature the soonest.




ANN : new SQL builder module starting development

2009-08-23 Thread Darren Duncan
, though if contributors insist otherwise I can 
instead make it the Artistic v2+, which is the same license as Parrot and 
Rakudo, and which is the modern 'same version as Perl' license.


Thank you. -- Darren Duncan



Re: Perl 6 DBI API ideas

2009-07-31 Thread Darren Duncan
 $iter = $dbh.get( :$sql );

while ( my $rec = $iter.next ) {
my $foo = $rec.foo;
my $bar = $rec.value('bar');
my $q1  = $rec.quux; # exception as there is no quux selected
my $q2  = $rec.value('quux'); # another exception
}


That might work.

Personally I think the whole concept of duplicate or missing column names is a 
terrible idea, but that's more of a SQL problem than a DBI problem; SQL 
should be cleaner in that regard but it isn't.



I added something like this in my Text::RecordParser module:

http://search.cpan.org/dist/Text-RecordParser/lib/Text/RecordParser/Object.pm 


That's all I can think of for now.

ky


-- Darren Duncan


Re: Support for XA

2009-06-18 Thread Darren Duncan

Tim Bunce wrote:

On Wed, Jun 17, 2009 at 01:53:36PM -0500, David Dooling wrote:

If any of the SQL fails, then all the DB transactions are rolled back.


If two databases are being used, and the commit to the first succeeds
and the commit to the second fails, how does it roll back the first commit?


Depending on the reason for failure, it could still be done when using something 
like a two-phase commit protocol.  This isn't a 100% solution, but for the 
situations where it works you don't need to undo the succeeding databases. -- 
Darren Duncan
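A best-effort sketch of the window being discussed; this is plain sequential commit, not a real XA/two-phase protocol, and two throwaway SQLite handles stand in for the two databases:

```perl
use strict;
use warnings;
use DBI;

# Do all the work first, then commit in sequence.  If the second commit
# fails, the first is already durable -- exactly the window a true
# two-phase commit protocol is meant to close.
my $dbh1 = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1, AutoCommit => 0 });
my $dbh2 = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1, AutoCommit => 0 });

my $ok = eval {
    $dbh1->do('CREATE TABLE a (x INTEGER)');
    $dbh1->do('INSERT INTO a VALUES (1)');
    $dbh2->do('CREATE TABLE b (y INTEGER)');
    $dbh2->do('INSERT INTO b VALUES (2)');
    $dbh1->commit;
    $dbh2->commit;
    1;
};
unless ($ok) {
    # Roll back whatever has not committed yet.
    eval { $dbh1->rollback };
    eval { $dbh2->rollback };
}
print $ok ? "both committed\n" : "rolled back\n";
```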


ANN - DBD::SQLite version 1.24_01 - amalgamation

2009-04-22 Thread Darren Duncan

All,

I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a DBI Driver)
version 1.24_01 has been released on CPAN (by Adam Kennedy).

  http://search.cpan.org/~adamk/DBD-SQLite-1.24_01/

The main feature of this release is that DBD::SQLite now uses the amalgamated 
source recommended at sqlite.org, meaning that the entire C source code of the 
SQLite library itself is now contained in a single file rather than being spread 
over several dozen files.  Some advantages of this change include better 
performance due to cross-file optimization, and also an easier compilation on 
platforms with more limited make systems.


The last DBD::SQLite release that doesn't use the amalgamated source is version 
1.23, which was released 2 days earlier.


Also the bundled SQLite library with both 1.23 and 1.24_01 has been updated to 
v3.6.13 from v3.6.12 that 1.20 had.


Further improvements in 1.24_01 over 1.20 involve mainly a significant 
modernization of the whole test suite, so it uses Test::More, and also there 
were more bugs fixed, minor enhancements made, and RT items addressed.


See http://cpansearch.perl.org/src/ADAMK/DBD-SQLite-1.24_01/Changes as well as
http://sqlite.org/changes.html for details.

Given that the switch to amalgamated SQLite sources is arguably a very large 
change (or arguably a very small change), mainly in subtle ways that might 
affect build/compile systems (though actual SQLite semantics should be 
identical), ...


Please bash the hell out of the latest DBD::SQLite and report any outstanding
bugs on RT.  Test your dependent or compatible projects with it, which includes
any DBMS-wrapping or object persistence modules, and applications.

If you want in to DBD::SQLite development, then join the following email/IRC
forums which MST created (the mailing list, I am administrating):

  http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite

  #dbd-sqlite on irc.perl.org

And the canonical version control is at:

  http://svn.ali.as/cpan/trunk/DBD-SQLite/

Patches welcome.  Ideas welcome.  Testing welcome.  Whining to /dev/null.

Note that today's switch to amalgamated sources is the last major short term 
change to DBD::SQLite that I personally expected would happen (sans updates to 
the bundled SQLite library itself), but other developers probably have their own 
ideas for what directions the development will go next.


Please do not reply to me directly with your responses.  Instead send them to
the forums or file with RT as is appropriate.

Thank you. -- Darren Duncan



[Fwd: [ANNOUNCE] PostgreSQL 8.4 Beta Released]

2009-04-15 Thread Darren Duncan
Great news!  Lots of nice PostgreSQL features, sure to greatly assist 
database-using applications and toolkits the world over.  Also a new DBD::Pg 2.13.0 was 
just released the other day, so presumably things are all compatible at the Perl 
end; though you may not need that DBD upgrade to use this. -- Darren Duncan


 Original Message 
Subject: [ANNOUNCE] PostgreSQL 8.4 Beta Released
Date: Wed, 15 Apr 2009 13:41:08 -0300 (ADT)
From: Marc G. Fournier scra...@postgresql.org
To: pgsql-annou...@postgresql.org


The first beta of Version 8.4 of the world's most advanced open source
database, PostgreSQL, was just released by the PostgreSQL Global
Development Group. After fourteen months of development, 8.4 is ready for
testing by PostgreSQL users all over the world in order to make it our
best release ever.

8.4 includes hundreds of patches and dozens of new features. Among them:

   * Windowing Functions
   * Common Table Expressions & Recursive Joins
   * Default & Variadic parameters for functions
   * Parallel Restore
   * Column Permissions
   * Per-database locale settings
   * Improved hash indexes
   * Improved join performance for EXISTS and NOT EXISTS queries
   * Easier-to-use Warm Standby
   * Free Space Map auto-tuning
   * Visibility Map (reduces vacuum overhead)
   * Version-aware psql (backslash commands)
   * Support SSL certs for user authentication
   * Per-function runtime statistics
   * Easy editing of functions in psql
   * New contrib modules: pg_stat_statements,
 auto_explain, citext, btree_gin

Please download and install version 8.4 and test out these new features as
soon as you can! You can especially help the PostgreSQL developers by
trying new features in combination, and by doing a test port of your
production applications to version 8.4. Performance comparisons with 8.3
are also very helpful. Tell us what you find!

Beta Information Page:
  http://www.postgresql.org/developer/beta

Release Notes:
  http://www.postgresql.org/docs/8.4/static/release-8-4.html

Source Code:
  http://www.postgresql.org/ftp/source/v8.4beta/

Windows Installers and Binaries: see the Beta Information Page

Translation teams are at this time asked to begin completing the message
translations for the 8.4 release. The PostgreSQL project always needs more
translators! If you are bilingual please consider joining a translation
team to help translate PostgreSQL:

Translation home page: http://babel.postgresql.org/

Translators mailing list: pgtranslation-translat...@pgfoundry.org

The usual duration of the beta testing period for a new version of
PostgreSQL is 6 weeks. However, that time can vary significantly depending
on what bugs our testing discovers. The PostgreSQL project does not
release until all significant known bugs are fixed.




Re: FYI - modern Muldis D code examples

2009-04-09 Thread Darren Duncan
; procedures are not), and how it represents 
nested distinct lexical scopes/blocks, or conceptually inline-defined closures 
etc, as well as how it supports internal recursion, as well as one way to keep 
the grammar/parsers/generators simpler.  So when you see "inner routine", think 
"inlined closure" or "nested block" or "pure section".  Now, you don't actually 
have to use inner routines; those items could be declared as full routines 
instead, but inners save you from polluting public namespaces, and really are 
the closest analogy to languages with actual inlining.


5.  Muldis D can infer types, but it still uses explicit type declarations 
in code, partly to aid type checking of code in isolation and partly to aid 
self-documentation.  Types like Relation and Tuple are actually quite broad, 
and in practice you may often declare with explicit subset types of those.


6.  Muldis D code *is* declarative.  While routines might look sometimes like an 
ordered sequence of steps to follow, they really are just a description of what 
result is desired.  In particular, Muldis D is designed such that most code 
would be forced into pure sections, and so a DBMS is very much empowered to 
optimize it.


7.  As a general explanation for why the Muldis D examples looks the way they 
do ...

One of Muldis D's main features is that it is homoiconic, meaning that its code 
is also data, and you can introspect or alter code at runtime.  This manifests 
mainly in the special global variables called the system catalog, which serve 
the same function as SQL's INFORMATION_SCHEMA, except that mine breaks everything 
down and doesn't just store routines etc. as code strings.


Now I designed Muldis D starting at the AST level, meaning as it is represented 
in the system catalog, figuring out how to effectively represent all types and 
data and routines as an abstract syntax tree, which can also be the target 
format of parsers and the source format for code generators.


Using this syntax tree form as a starting point, I have then been creating a 
concrete Muldis D syntax, PTMD_Tiny that can map to the AST / system catalog 
more or less 1:1, and this concrete syntax is what the code examples are in.


So the concrete syntax is really just a layer of sugar over the AST.  Over time 
I will continue to add more sugar to make the concrete syntax sufficiently terse 
and easier to use.


Meanwhile, what you see is how far along that road of adding sugar I got to.

8.  Keep in mind that my example code is meant to be a more literal translation 
of what the SQL is saying.  If the SQL spelled out columns to return, so does my 
example.  If the SQL said return all columns, then mine doesn't spell them out.


Anyway, thank you for your time.

-- Darren Duncan

==

SQL:

SELECT 1

Muldis D Text:

function func1 (NNInt -->) {
    1;
}

or:

function func1 (Relation -->) {
    Relation:{ { attr1 => 1 } };
}

==

SQL:

SELECT NOW() AS time FROM dual AS duality

Muldis D Text:

procedure proc1 (Instant $time) {
    fetch_current_instant( $time );
}

or:

procedure proc1 (Relation $r) {
    main {
        var Instant $now;
        fetch_current_instant( $now );
        inn.wrapup( r => $r, now => $now );
    }
    inner_updater wrapup (Relation $r, Instant $now) {
        $r := Relation:{ { time => $now } };
    }
}

==

SQL:

SELECT 1 FROM foo LEFT OUTER JOIN bar ON ( foo.col1 = bar.col2 )

Muldis D Text:

# Note: This example assumes neither of foo|bar have a col named colJ,
  and moreover that foo|bar have no other col names in common. #

procedure proc1 (Relation $result) {
    main {
        inn.query( result => $result, db => $fed.data.db1 );
    }
    inner_updater query (Relation $result, Database $db) {
        $outer_join = outer_join_with_maybes(
            primary => ($db.foo{col1=>colJ}),
            secondary => ($db.bar{col2=>colJ}) );
        $result := static_extension(
            topic => ($outer_join{}),
            attrs => Tuple:{ attr1 => 1 } );
    }
}

or, if we want the result of the outer join itself:

procedure proc1 (Relation $result) {
    main {
        inn.query( result => $result, db => $fed.data.db1 );
    }
    inner_updater query (Relation $result, Database $db) {
        $result := outer_join_with_maybes(
            primary => ($db.foo{col1=>colJ}),
            secondary => ($db.bar{col2=>colJ}) );
    }
}

or, if we simply want to know if there were matching rows in foo and bar:

procedure proc1 (Bool $result) {
    main {
        inn.query( result => $result, db => $fed.data.db1 );
    }
    inner_updater query (Bool $result, Database $db) {
        $result := is_not_empty(
            (($db.foo{col1=>colJ}) join ($db.bar{col2=>colJ})) );
    }
}

==

SQL:

SELECT * FROM foo WHERE

FYI - modern Muldis D code examples

2009-04-08 Thread Darren Duncan

Hello,

If any of you have been confused in regards to understanding my periodically
touted Muldis D language, which is probably nearly everyone, I've got something
here today that may make understanding it a lot easier: some realistic example
code, of which I've got about two dozen examples translated from SQL.

To recap, Muldis D is my new programming language part of whose purpose is to
eventually succeed SQL as the query+DDL language of choice for relational DBMSs,
in the same manner that Perl 6 is intended to eventually supplant Perl 5.   (And
yes, I am being pragmatic and realize it won't happen overnight.)  The design of
Muldis D is like a cross between SQL and Perl 6 and it should be easy to learn.

Muldis D is also intended for use today as a source of design ideas for building
toolkits that interface between a DBMS and an application, particularly ones
that want to go beyond current solutions and offer portable parsing and
generation of SQL stored routines and schemas including such, and Muldis D can
be a primary inspiration for the design of intermediate representations of both
SQL data and code that are widely portable.  Muldis D is rigorous enough that
one could just implement it literally (which I intend to do separately), but you
don't actually have to understand even most of it to glean useful ideas from it
and save yourself from reinventing the wheel in inferior ways.

For context, http://search.cpan.org/dist/Muldis-D/ is where I've published the
rigorous definition of the language, which you can use as a reference later.  I
consider the language spec to be maybe 90% done, and the single largest
remaining thing to rigorously define is its concrete syntax for everything
except value literals, which are done.

So most of this email consists of a number of SQL examples taken from the new
manual for the SQL::Abstract 2.0 Perl module, and for each a translation into
concrete PTMD_Tiny dialect Muldis D code (currently going beyond the scope of
what is rigorously defined).  For the original SQL examples, go to
http://github.com/ashb/sql-abstract/tree/0f93e5f7e64a55f293efd70c2769d1fdbe22da38/lib/SQL/Abstract/Manual 


and look at Examples.pod.

As a simple point of explanation, at the top level all Muldis D code takes the
form of either a value literal or an invocation of a routine or a definition of
a routine.  So if you were using the Perl DBI module to perform a query or run
transient DML code, passed to its prepare() method, what you pass would be
typically a procedure definition, and DBI's bind parameters for the query would
bind to the declared parameters of that procedure, which are typically named
(rather than positional question marks), like some SQL DBMSs spell out :param or
@param etc.  Another detail is that for a routine representing a SELECT, the
result comes back via a subject-to-update/INOUT parameter; it does not come back
out of band, and so is more like how a normal programming language works.
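As a concrete Perl 5 analogy for the named-parameter binding described above, DBD::SQLite already supports named placeholders through DBI (the table here is hypothetical):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1 });
$dbh->do('CREATE TABLE person (name TEXT, age INTEGER)');
$dbh->do('INSERT INTO person VALUES (?, ?)', undef, 'Ann', 42);

# The placeholder is named, not a positional question mark.
my $sth = $dbh->prepare('SELECT name FROM person WHERE age = :age');
$sth->bind_param(':age', 42);
$sth->execute;
my ($name) = $sth->fetchrow_array;
print "$name\n";
```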

Now this syntax isn't fully complete, but I figure it is maybe 80% like what the
final version would be.

Also let me emphasize that these examples are in canonical character string
form, same as SQL code.  While the Muldis D spec also specifies a version in
terms of Perl structures ala SQL::Abstract et al, that isn't illustrated here;
however that version would closely resemble a concrete syntax tree from parsing
the version below.  The true AST of Muldis D, which is fully (or 95+%) specced,
is its system catalog, analogous to the SQL INFORMATION_SCHEMA but that all code
is decomposed, including routine and view etc definitions.

That said, the canonical Muldis D code intentionally has very simple grammar, so
it should be easy to parse or generate; it should also resemble its lower level
AST form enough that details like named expression nodes and the separation of
pure from impure code shows up often.

-- Darren Duncan

==

SQL:

SELECT 1

Muldis D Text:

function func1 (NNInt -->) {
    main {
        1;
    }
}

or:

function func1 (Relation -->) {
    main {
        Relation:{ { attr1 => 1 } };
    }
}

==

SQL:

SELECT NOW() AS time FROM dual AS duality

Muldis D Text:

procedure proc1 (Instant $time) {
    main {
        fetch_current_instant( target => $time );
    }
}

or:

procedure proc1 (Relation $r) {
    main {
        var Instant $now;
        fetch_current_instant( target => $now );
        inn.wrapup( r => $r, now => $now );
    }
    inner_updater wrapup (Relation $r, Instant $now) {
        $r := Relation:{ { time => $now } };
    }
}

==

SQL:

SELECT 1 FROM foo LEFT OUTER JOIN bar ON ( foo.col1 = bar.col2 )

Muldis D Text:

# Note: This example assumes neither of foo|bar have col named colJ. #

procedure proc1 (Relation $result) {
    main {
        inn.query( result => $result, db => $fed.data.db1

ANN - DBD::SQLite version 1.20

2009-04-07 Thread Darren Duncan

All,

I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a DBI Driver) 
version 1.20 has been released on CPAN.


  http://search.cpan.org/dist/DBD-SQLite/

This follows on the heels of 10 developer releases released starting 2009 March 
27th (Adam "Alias" Kennedy has been doing release management).  The previous 
production release of DBD::SQLite was version 1.14, about 18 months ago.


Improvements in 1.20 over 1.14 include:

  * Updated the bundled SQLite library from v3.4.2 to v3.6.12, which carries 
many new features as well as bug fixes.

  * Added support for user-defined collations.
  * Added ->column_info().
  * Resolved all but a handful of the 60+ RT items.
  * Many bug fixes and minor enhancements.
  * Added more tests, large refactoring of tests.
  * Minimum dependencies are now Perl 5.006 and DBI 1.57.

See http://cpansearch.perl.org/src/ADAMK/DBD-SQLite-1.20/Changes as well as 
http://sqlite.org/changes.html for details.
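A short sketch of the newly added ->column_info(), a standard DBI metadata method (the table is hypothetical):

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1 });
$dbh->do('CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT)');

# column_info(catalog, schema, table, column) returns a statement handle
# of metadata rows; '%' matches all columns of the named table.
my $sth = $dbh->column_info(undef, undef, 'widgets', '%');
for my $col (@{ $sth->fetchall_arrayref({}) }) {
    printf "%s %s\n", $col->{COLUMN_NAME}, $col->{TYPE_NAME};
}
```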


Now it is especially important, since automatic updates from CPAN such as with 
the CPAN/CPANPLUS utilities, would now be pulling this new 1.20 by default, ...


Please bash the hell out of the latest DBD::SQLite and report any outstanding
bugs on RT.  Test your dependent or compatible projects with it, which includes
any DBMS-wrapping or object persistence modules, and applications.

If you want in to DBD::SQLite development, then join the following email/IRC
forums which MST created (the mailing list, I am administrating):

  http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite

  #dbd-sqlite on irc.perl.org

And the canonical version control is at:

  http://svn.ali.as/cpan/trunk/DBD-SQLite/

Patches welcome.  Ideas welcome.  Testing welcome.  Whining to /dev/null.

Regarding near future plans:  Now, the current 1.20 uses the pristine 
several-dozen SQLite library source files, same as 1.14 did.  While reality may 
be different, I believe that the next major planned change to DBD::SQLite is to 
substitute in the amalgamation version, which combines all the SQLite source 
files into a single file; the amalgamation is the recommended form for users 
according to the SQLite core developers.  See http://sqlite.org/download.html 
for a description of that.  Meanwhile there should be another stable release 
with any bug fixes for 1.20 to come out first.  Any other major changes or 
features for DBD::SQLite are expected to come out separately from and after the 
stabilized switch to the amalgamation sources.


Please do not reply to me directly with your responses.  Instead send them to
the forums or file with RT as is appropriate.

Thank you. -- Darren Duncan



[Fwd: [ANNOUNCE] == Postgres Weekly News - April 01 2009 ==]

2009-04-01 Thread Darren Duncan
This is not Perl specific, but probably something any current or possible users 
of Postgres should know as they plan their futures. -- Darren Duncan


 Original Message 
Subject: [ANNOUNCE] == Postgres Weekly News - April 01 2009 ==
Date: Wed, 1 Apr 2009 01:44:32 -0700
From: David Fetter da...@fetter.org
To: PostgreSQL Announce pgsql-annou...@postgresql.org

== Postgres Weekly News - April 01 2009 ==

Following MySQL's lead as usual, the PostgreSQL project is dividing
into several lines of development:

- Shizzle: High-performance and Feature-Free

- MaryMary: Compiled with libhaltingproblem

- Narcona: Painless installation and setup

- OurThing: Lots of sources, based in Sicily

- XPostgres: Everybody who's ever worked on Postgres code, back to UC
  Berkeley and Illustra.

- Moon/PostgreSQL: Corporate support, as long as it lasts.

The PostgreSQL Global Development Group will be shutting down the
mailing lists as of today.  Future communication for the project will
be through Twitter, the bug tracker, and the Bitkeeper source code
management system.



ANN - DBD::SQLite version

2009-03-27 Thread Darren Duncan

All,

I am pleased to announce that DBD::SQLite (Self Contained SQLite RDBMS in a DBI 
Driver) version 1.19_01 has been released on CPAN.


  http://search.cpan.org/~adamk/DBD-SQLite-1.19_01/

This is the first CPAN release of DBD::SQLite since version 1.14 about 18 months 
ago.  This is the change summary since 1.14:


  1.19_01 Fri 27 Mar 2009
- Updated to SQLite 3.6.10, and bumped up the version
  requirement for installed sqlite3 to 3.6.0 as 3.6.x
  has backward incompatibility (ISHIGAKI)
- fixed closing dbh with active statement handles issue
  with a patch by TOKUHIROM. (ISHIGAKI)
- skip 70schemachange test for Windows users. (ISHIGAKI)
- applied RT patches including #29497, #32723, #30558,
  #34408, #36467, #37215, #41047. (ISHIGAKI)
- added TODO to show which issues are to be fixed. (ISHIGAKI)
- license and configure_requires in Makefile.PL and META.yml (Alexandr 
Ciornii)
- Spelling check for SQLite.pm (Alexandr Ciornii)
- Adding arbitrary Perl 5.005 minimum

Right now, DBD::SQLite has a new development team with Matt Sergeant's blessing, 
 which is working to keep it updated and fix any outstanding bugs.  Multiple 
people have made commits to it since Jan 24th.  I am serving a role as project 
advocate among other things.


So please bash the hell out of the latest DBD::SQLite and report any outstanding 
bugs on RT.  Test your dependent or compatible projects with it, which includes 
any DBMS-wrapping or object persistence modules, and applications.


And yes we are aware that 3.6.10 isn't the latest; that will be fixed soon.

If you want in to DBD::SQLite development, then join the following email/IRC 
forums which MST created (the mailing list, I am administrating):


  http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite

  #dbd-sqlite on irc.perl.org

Some discussion has also taken place in the dbi-dev list and there is also a 
general DBI related IRC channel, but the above DBD-SQLite forums were just 
created last night.


The version control is the one Adam "Alias" Kennedy set up around January 24th, 
which is a Subversion repo.  Here are some change log and browse urls:


  http://fisheye2.atlassian.com/changelog/cpan/trunk/DBD-SQLite

  http://svn.ali.as/cpan/trunk/DBD-SQLite/

Patches welcome.  Ideas welcome.  Testing welcome.  Whining to /dev/null.

In particular, we can use more people with C savvy, as we are somewhat bereft of 
that among the current team.  For one thing, apparently using the amalgamation 
file from sqlite.org is incompatible with the XS code that talks to the 
multiplicity of original SQLite source code files, so some savvy is needed to 
patch it for that migration.


Please do not reply to me directly with your responses.  Instead send them to 
the forums or file with RT as is appropriate.


Thank you. -- Darren Duncan


ANNOUNCE - Set::Relation versions 0.8.0 and 0.9.0 for Perl 5

2009-03-25 Thread Darren Duncan

All,

I am pleased to announce that Set::Relation (relational types and operators for
Perl) versions 0.8.0 and 0.9.0 for Perl 5 have been released on CPAN.

  http://search.cpan.org/dist/Set-Relation/

First, to briefly repeat the change information for last month's release 0.7.0
(from 0.6.0 which was the first initial-feature-complete release):
Set::Relation is now a role (or actually 2 roles, one immutable, one mutable),
so it is easier for user code to just say they accept any objects doing that
role, and the initial implementation was separated into another file called V1.
 It is now also easier for third parties to make distinct implementations if
they wish.  And yes, this role is in terms of an actual Moose role.

Now the main change in version 0.8.0 from version 0.7.0 is the addition of a
second bundled implementation of the Set::Relation role, called V2.
Performance-wise, V2 should be considerably faster than V1 in general,
and especially for some use cases.  V2 also does strictly immutable objects
while V1 does mutable objects.  Design-wise, both V1 and V2 are similar in many
respects, including the use of a Perl Hash internally to represent each relation
tuple.  Where they differ mainly is that V1 does eager tuple hashing and duplicate
elimination while V2 does lazy tuple hashing and duplicate elimination.  Another
feature change in 0.8.0 from 0.7.0 is that users have the option on some
Set::Relation method calls to say they accept less accuracy in results, for
example giving permission for duplicate elimination to be skipped entirely;
what they gain from this conceptually is an even greater speed increase.
The kinds of operations where you would likely see the greatest speed
increase with V2 automatically are [new(), union(), insertion()], as they
are done with zero hashing or elimination; to a lesser extent join() etc.,
since it only indexes the common attributes of the join; and optionally
cardinality() and members extraction, as they also avoid all hashing when
explicitly asked to, the class thereby effectively doing multisets rather than
sets.  In contrast, V2 operations like [intersection(), difference(),
deletion()] still incur indexing / duplicate elimination in order to make sure
all copies of a filtered-out tuple are eliminated.  Version 0.8.0 does not
introduce any major API changes or capability features.  Note that I recommend
V2 over V1 for actual use, where either would be useful; I don't consider them
on equal footing.
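To make the eager-versus-lazy tradeoff concrete, here is an illustrative sketch.  It is not verified against the module's POD: the method names come from the announcement itself, but the constructor arguments and the arrayref calling convention are assumptions, so check the actual Set::Relation documentation before copying it.

```perl
use strict;
use warnings;
use Set::Relation::V2;

# Hypothetical relation values; each tuple is a Perl Hash, as described above.
# The members => [...] constructor argument is an assumption.
my $r1 = Set::Relation::V2->new( members => [ { x => 1 }, { x => 2 } ] );
my $r2 = Set::Relation::V2->new( members => [ { x => 2 }, { x => 3 } ] );

# Cheap in V2: union() is done with zero hashing or duplicate elimination,
# so the result may carry internal duplicates for now.
my $u = $r1->union( [ $r2 ] );

# This forces the deferred work: tuples must be hashed and duplicates
# eliminated before an accurate count can be returned (unless you explicitly
# accept less accuracy in exchange for speed).
my $n = $u->cardinality();
```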

And the main change in version 0.9.0 from version 0.8.0 is the addition of
support for specifying candidate keys or (unique) key constraints on
Set::Relation objects at object creation time, and introspecting for candidate
keys on the objects after creation.  Version 0.9.0 was just a first draft of the
'keys' feature and it isn't very well integrated.  What is stated above is about
as far as it goes.  Still TODO is to actually exploit knowledge of keys to speed
up the relational operators, since you can do indexing or duplicate elimination
much less expensively if you know you only have to examine a subset of the
attributes' values.

On a related matter, today I updated
http://www.perlfoundation.org/perl5/index.cgi?gsoc_2009_projects to add a
student project idea to implement Perl 6 savvy relational types and operators,
starting with porting Set::Relation to Perl 6.  Now said port was something I
otherwise would have done within the next few months, and the Perl 5 module was
designed from day one with this in mind, but I figured this was something it
would be nice to have a fresh outlook on, since someone else, say a student,
may come up with the functionality in much more idiomatic Perl 6 than I
would have.  And it's not like any prerequisites they might need from me aren't
already done now; the existing Perl 5 version already illustrates the desired
features and behavior, which they are of course free to improve on.  Now in the
likely event that no student picks this up, I welcome input from you / the
community on how to do better than a straight port and provide better Perl 6
savviness.

Thank you and have a good day. -- Darren Duncan




ANNOUNCE - Set::Relation version 0.6.0 for Perl 5

2009-02-10 Thread Darren Duncan
 language itself to adapt distinct Set::Relation
features into Perl 6 itself as if a relation were just another generic
collection type (which it is)?

Note that the work done on Set::Relation and in improving it and testing it will
later feed back into implementing Muldis::Rosetta, whose design overlaps.  It is
very helpful to me if Set::Relation can be made the best it can be, as soon as
possible, so to make said feedback more timely.

Thank you and have a good day. -- Darren Duncan





ANN - SQLite 3.6.8 adds nested transactions

2009-01-12 Thread Darren Duncan

Good news everyone!

The SQLite DBMS, as of the new version 3.6.8, now has nested transaction 
support.

See http://www.sqlite.org/releaselog/3_6_8.html for the release announcement, 
and http://www.sqlite.org/lang_savepoint.html for documentation of the feature.


As far as I was concerned, and as I had argued in the past, that was the single 
most important piece of missing functionality: something whose presence can 
make reliable database development an order of magnitude easier.  Unlike some 
add-on features which could be done in wrappers, nested transactions 
definitely needed to be implemented at the same low level as normal 
transactions in the DBMS.  It's about code being able to declare, and have 
enforced at any level of granularity, that a set of operations is atomic, as a 
transaction, without worrying about what calling code is doing with 
transactions, and making it easier to do error handling and retry/abort etc.
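As a minimal illustration of the feature from Perl, here is a hedged sketch using DBI with DBD::SQLite.  It assumes a DBD::SQLite build that bundles SQLite 3.6.8 or later; the table, column, and savepoint names are made up.

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1, AutoCommit => 1 });
$dbh->do('CREATE TABLE t (x INTEGER)');

$dbh->begin_work;                              # outer transaction
$dbh->do('INSERT INTO t (x) VALUES (1)');

$dbh->do('SAVEPOINT sp1');                     # inner, nested scope
my $ok = eval {
    $dbh->do('INSERT INTO t (x) VALUES (2)');
    die "simulated failure\n";                 # pretend the inner unit failed
};
if (!$ok) {
    # Undo only the inner work; the outer transaction stays open.
    $dbh->do('ROLLBACK TO SAVEPOINT sp1');
}
$dbh->do('RELEASE SAVEPOINT sp1');

$dbh->commit;                                  # the row with x=1 persists
```

The point is that the inner unit can abort and retry without knowing or caring whether its caller already opened a transaction.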


I'm letting you know about this update today as encouragement to make use of the 
feature in any of your applications or DBMS toolkits that do or want to support 
SQLite.  It is now also easier to implement TTM's concept of a D language 
(which cares about reliability and ease of use) over SQLite.


Note that I'm promoting this under the assumption the new feature works as 
advertised and is efficient; if not, presumably any deficiencies will be soon 
fixed; I will be testing it myself before too long.


Similarly, you should exploit any native nested transaction support of any other 
DBMS you are using.


-- Darren Duncan


RFC: trimming my Muldis D talk for OSCON, PGDay next week

2008-07-16 Thread Darren Duncan

All,

I've nearly finished writing my first talk for an international 
conference, a double-header of OSCON and PGDay, and I am now seeking your 
advance feedback to make the talk better, especially on how best to trim 
it to fit in the 45 minutes of time I have.  Feedback within the next 2 days 
or so is important; sorry for the rush.


The talk is titled "Muldis D - Portable Databases At Full Power", and it is 
slated to be given twice within a few days, once on July 20th at PGDay, and 
again on July 23rd at OSCON.

The purpose of this talk is to introduce my new database programming 
language, Muldis D, in such a way that attendees would become interested in 
it and want to learn more, such as by looking at the official spec 
published on CPAN, and then ideally so they would want to try using it.


In a nutshell, Muldis D is effectively a generalization of all the dialects 
of SQL out there, as well as a modernization of SQL that isn't hobbled by 
ill-conceived or out-dated aspects and features.  Muldis D is intended to

serve as an intermediate language for translating any kind of SQL code from
one dialect to another, or to/from non-SQL languages, while exactly 
preserving all of its semantics.  Muldis D is intended to be used as a 
toolkit for building better database wrappers, persistence tools, 
abstraction frameworks, migration tools, ORMs, etc, in any general purpose 
language.


You can see a copy of the talk now at 
http://www.muldis.com/slideshows/Muldis_D_PDAFP_200807.xul .


This is a single text file that contains all the slides, and it is designed 
to run as a slideshow in a Firefox browser (click anywhere on the page to 
advance next, or you can navigate using a hiding slider at the top).  If 
you don't have Firefox or want to see the source, well it is plain text 
with essentially no markup; you can view it in a text editor.


I would greatly appreciate it if several of you could walk through the 
slideshow and give me feedback on it.  The talk and the slideshow are 
essentially one and the same.


Mainly I want to know how I can best chop it down in size so it will fit, 
along with audience questions, in the 45 minutes that I have to give it.


Currently, from my test readings, I estimate that it would take about 50-60
minutes to simply present all of it, never mind extra time taken up by 
audience questions.  Also, about the last 10%, introducing my reference 
implementation, is still to be done, though the outline and main points for 
that are included in the slides.  So this means I probably have to chop 
about half of what is there now to make it fit, targeting about 30-35 
minutes without questions.


So the kind of feedback I'd like includes what portions you think are the 
most expendable, as well as what portions seem the best and that should be 
kept.  I want to keep what will get people interested to learn more and try 
it out, so your own opinions of what details were the most/least attractive 
will be very helpful to me.


Also, if you can make suggestions on editing my text so it communicates 
the same ideas using fewer words (less is more), that is appreciated too.


I also need to know if you think I omitted any important things; what 
should I be adding?


If you are inclined, you can actually edit the file to conform how you 
think it would be better, and send me your copy; I can diff it against mine 
to see what you changed.


As I get feedback, I'll regularly be updating the live copy at 
http://www.muldis.com/slideshows/Muldis_D_PDAFP_200807.xul so if people 
regularly look back there, they don't have to worry about suggesting 
changes that were already made.


Similarly, this talk is in version control at 
http://utsl.gen.nz/gitweb/?p=Muldis-D so you can easily see *what* changes 
I'm making to it over time, at least every few hours.  (Note that the 
server may occasionally display error 500, but that should probably go away 
in a few minutes or an hour when you can try again.)


While email is one main way to submit feedback, I also plan to lurk on some 
semi-related IRC channels for the next few days where I can take feedback 
also as discussion (I don't have my own channel yet); if this applies to 
you, then you probably already hang out there, or you can ask me where.


Thank you in advance for any quick help; it is greatly appreciated.

-- Darren Duncan



help wanted - example database schemas and queries

2007-10-21 Thread Darren Duncan

All,

At this point in time, the design work on Muldis D + Muldis DB is 
transitioning to be implementation driven rather than theoretical 
scenario driven.  That is, the fundamental design work is more or 
less done, and I'm now moving on to filling in cracks in the design 
that would get revealed by actually trying to write code that uses 
Muldis DB and implement the latter so that the former runs.


For context, you can see 
http://search.cpan.org/dist/Language-MuldisD/ and 
http://search.cpan.org/dist/Muldis-DB/ , though the versions on CPAN 
aren't the absolute newest versions.


While I am good at thinking of things in more abstract or generic 
terms, it is great drudgery for me to try to come up with specific 
examples of use, such as actual tasks being solved, actual database 
schemas, reasonable sample data, and actual queries to run.


I am now requesting the help of any interested parties to generate 
and/or collect database use case examples that I would then translate 
into Perl Hosted Muldis D code, thereby demonstrating how one would 
use Muldis DB to implement solutions to the use cases.


Please provide your examples with as much detail as reasonably 
possible, since I will basically be translating them literally, and I 
can't really fill in blanks; so too little detail means a less 
interesting and/or non-instructive result.


For example, specify the components, relationships, and constraints 
or business rules for data that you want a database schema to 
implement, as well as some specific queries or data manipulation 
tasks you would want to do with it.


If you want, you can specify these in the form of SQL DDL and 
DML+SELECT statements.


Or you can use pseudo-code of some other well known programming language.

Bonus points for providing some example (small) data sets for inputs 
as well as describing the expected outputs.


Note that generic questions/examples like "how do I join 2 rowsets", 
"how do I filter records", or "how do I do subqueries" are not very 
helpful, since the actual way to code this can change depending on the 
reason you want to do it and the structure of the tables.


Keep in mind that Muldis DB is better considered, in the context of 
it being used as a wrapper over DBI et al, as a relational database 
abstraction layer, and not as a Perl object persistence layer.  So 
please format the use cases in consideration of this paradigm.


Please send them directly to [EMAIL PROTECTED], soon as you 
can, and even partial is better than none.


I will collect the input and make the initial version of a Muldis DB 
cookbook from this source material plus the implementing code in 
Muldis DB, as well as use them as examples for live presentations I 
give (the first being on the 23rd).


I also need to make a test suite sooner rather than later, and these 
examples could also help with that to some extent.


Thank you in advance. -- Darren Duncan


RE: Please take a moment...

2007-08-06 Thread Darren Duncan

At 5:55 AM -0500 8/6/07, Pat Sheehan wrote:

The link wont connect ?

-Original Message-

Give back a little of the time that perl has saved you:

http://perlsurvey.org/survey/


The link worked when I tried it just now.

Also, I filled out the survey on July 29th, and it is well worth doing.

-- Darren Duncan


most common kinds of cross-database queries

2007-08-01 Thread Darren Duncan

At 9:03 AM +1000 8/2/07, Daniel Kasak wrote:

Nice suggestion ( we do this from Excel sometimes ), but I'm actually
planning on *competing* with Access, and not using it to do the heavy
lifting for me. ie this is for a generic query engine, and not for a
specific task I have in mind.


As a tangent to that other cross-database thread, I'm hoping for some 
input that can help me determine priorities in my own implementation 
under Muldis DB.


What do you think would be the most common scenarios of a cross-database query?

Or why would data be in multiple databases, and what kinds of ways 
are most likely for it to be brought together in a common query?


For example, is it more common for the multiple databases with the 
same schema but different data, such as to implement partitioning or 
clusters, or are they all full replication for redundancy and 
performance, or do they tend to be all different from each other and 
just associate at their edges?


For example, what relational operations tend to be the most common 
between databases, eg: unions, joins to display data from both, using 
one to filter data from the other.


In common scenarios, is the same DBMS product normally used for all 
the databases, or are they more likely to be different products 
driving each one?


For those replying, please just answer the question in a generic 
sense for now, without regard for particulars of my project.


Thank you. -- Darren Duncan


Re: Cross-Database query engine?

2007-07-24 Thread Darren Duncan

At 9:43 AM +1000 7/25/07, Daniel Kasak wrote:

We've been stuck for quite a while between SQL Server and MySQL, and
while doing cross-database queries is drop-dead simple in MS Access,
it's relatively painful in Perl.

Are there any solutions at present for running queries across different
database servers? If not, I may well write one ...


Please give more detail of what you actually want to do, perhaps with 
an example, so it is easier to answer the question. -- Darren Duncan


Re: Cross-Database query engine?

2007-07-24 Thread Darren Duncan

At 10:44 PM -0500 7/24/07, David Nicol wrote:

So you're saying that Access abstracts the handles to the multiple databases
in such a way that they appear to you as a single database, and you 
can use the

tools the way you would use on a single database on the combination of the
multiple databases.


If that's the case, making several dbs look like one, then some DBMSs 
can already do that internally.  At the very least, Oracle, MySQL, 
and SQLite can all do that, AFAIK.


Since MySQL is what you say you want to use, I recommend looking into 
the Federated storage engine that MySQL offers, which is in the 
current 5.0.x series and later; tables of that engine are proxies for 
tables under some other server.  Not all SQL features are supported 
with them, but the ones that are may be enough.


  http://dev.mysql.com/tech-resources/articles/mysql-federated-storage.html
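As a hedged sketch of what setting that up looks like from Perl: the CONNECTION string format follows the article linked above, but the hosts, credentials, and table names here are made up for illustration.

```perl
use strict;
use warnings;
use DBI;

# Connect to the local MySQL 5.0+ server that will hold the proxy table.
my $dbh = DBI->connect(
    'dbi:mysql:database=local_db;host=server_a', 'me', 'secret',
    { RaiseError => 1 },
);

# Create a FEDERATED table that proxies a table on another server.
$dbh->do(q{
    CREATE TABLE remote_orders (
        id    INT NOT NULL,
        total DECIMAL(10,2),
        PRIMARY KEY (id)
    ) ENGINE=FEDERATED
      CONNECTION='mysql://app_user:app_pass@server_b:3306/remote_db/orders'
});

# From here on, remote_orders can be joined against local tables in
# ordinary queries, subject to the engine's SQL feature limitations.
```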

-- Darren Duncan


Re: Cross-Database query engine?

2007-07-24 Thread Darren Duncan

At 9:43 AM +1000 7/25/07, Daniel Kasak wrote:

Are there any solutions at present for running queries across different
database servers? If not, I may well write one ...


Another thing I'll say is that my Muldis::DB project, currently on 
CPAN, has what you're talking about as a standard feature-to-be, by 
way of its native paradigm that makes all data stores look like 
they're all in one query environment, which is analogous to Perl's 
tie mechanism making foreign variables look like native local 
variables. -- Darren Duncan


ANNOUNCE - Muldis::DB v0.0.0 released, in p5+p6

2007-06-21 Thread Darren Duncan
, but instead I see it more like Muldis::DB 
should provide a more solid skeleton on which to build the solutions 
to all your database ills than are any other solutions.  I will 
contribute to the solutions as best I can.


Several public email-based forums for Muldis::DB now exist (with 
public archives), all of which you can reach via 
http://mm.DarrenDuncan.net/mailman/listinfo (note that Mailman's 
monthly password reminder email is turned off).  There are currently 
3, for 'announce', 'devel', 'users', modelled after the main lists 
for DBI.


I invite you to join one or more of these lists (if you haven't 
already), to better facilitate discussion, support, and group 
development.


As of this version-zero release, Muldis-DB is officially in pre-alpha 
development status.  A lot of documentation and functionality is 
present, but a lot isn't.  What is mostly done is the higher level 
documentation plus an alpha-quality but fundamentally stable public 
API implementation.  What is mostly undone is the reference engine 
implementation, the test suite, and documentation of the API details. 
What is already present should be sufficient to begin study of 
Muldis-DB such that it can actually be put to use within the next few 
weeks or months as Muldis-DB is fleshed out.  Also, it should be 
possible now to start writing code that uses or extends it.


To conclude, I would be very grateful for any and all kinds of 
assistance with the Muldis::DB framework that you can provide, which 
includes building or sponsoring features and extensions.


Probably the very first kind of help that I could use the most is 
with writing cookbook-type examples of using Muldis::DB and/or FAQ 
documents.  This both helps people learn how to use it in an 
effective manner, and also helps flesh out deficiencies in 
Muldis::DB.  Both the "how do I define this kind of database" variety 
and the "how do I express this SQL in Muldis D" variety are helpful. 
These examples would be released as a Muldis::DB::Cookbook 
distribution.


I also welcome general feedback, constructive criticism, suggestions, 
corrections, and questions.


Note that I prefer any responses to happen on-list, or that you sign 
up to and post to a muldis-db list (only list subscribers may post), 
so that the post and any replies can be for the group's benefit, and to 
save me from repeating the same answers ad nauseam.  Unless the 
response is not suitable for public discourse, in which case, sure, 
go private email; ditto if you're not sure about appropriateness.


Thank you in advance.

-- Darren Duncan


Re: ANNOUNCE: DBI 1.55 RC3

2007-05-02 Thread Darren Duncan

At 10:32 PM +0100 5/1/07, Tim Bunce wrote:

You can download it from
http://homepage.mac.com/tim.bunce/.Public/perl/DBI-1.55-RC3.tar.gz


All DBI tests successful or skipped, under Perl 5.8.8 with no 
threads, GCC 4.0.1, under Mac OS X 10.4.9 PPC. -- Darren Duncan


Re: ANNOUNCE: Apache-Status-DBI-v1.0.0.tar.gz

2007-03-30 Thread Darren Duncan

At 12:14 PM +0100 3/30/07, Tim Bunce wrote:

I've changed the version from '1.0.1' to '1.01' as I'd rather avoid
three-part numbers if we're not using version.pm.
Also I needed to change the elsif to else in the mod_perl2 module
loading code to get it to work for my mod_perl1.


Tim, the best thing to do then is make it version 1.000001, which is 
equal to 1.0.1.  That way, in the future when version.pm is 
ubiquitous and you want to switch back, it is seamless.  Or if that 
is too long, then use 1.001 instead, which is equivalent to 1.1.0. 
But using 1.01, equal to 1.10.0, is just wrong and should be avoided, 
especially for new projects that don't have prior releases in 1.xx 
format. -- Darren Duncan


Re: ANNOUNCE: DBI 1.54 RC5 - including cool new DBD::Gofer stateless proxy

2007-02-18 Thread Darren Duncan

At 11:06 PM + 2/18/07, Tim Bunce wrote:

On Sun, Feb 18, 2007 at 03:38:40PM +, Tim Bunce wrote:


 I've released an RC5 which uses $^X and moves the PERL5LIB setting to
 t/85gofer.t, along with some other more minor changes:

  http://homepage.mac.com/tim.bunce/.Public/perl/DBI-1.54-RC4.tar.gz


I'd greatly appreciate some feedback from Windows users. (That's all that's
holding up announcing a release candidate to dbi-users for wider testing.)

Tim.


I think you meant to spell that:

  http://homepage.mac.com/tim.bunce/.Public/perl/DBI-1.54-RC5.tar.gz

... as the other spelling just gets you RC4 again.

-- Darren Duncan


ANNOUNCE: first email forums for QDRDBMS are active

2007-01-21 Thread Darren Duncan

Hello,

This email is a follow-up to my post to this forum of 2 days ago of 
subject RFC: host for new m-l for a DBMS module.


See http://www.nntp.perl.org/group/perl.dbi.users/30734 for a 
web-archived copy of that message, for context.


Despite what I said in that email, I was able to self-host my mailing 
lists after all, so a semblance of the long-term plan became the 
short-term reality.


If you go to http://mm.darrenduncan.net/mailman/listinfo , you can 
see the 3 mailing lists that I created for QDRDBMS.  You can view the 
web archive of existing messages, and subscribe to the list if you 
want to post your own, or get them in your inbox.  These are the 
lists:


--

 * [EMAIL PROTECTED]

This low-volume list is mainly for official announcements from the
QDRDBMS developers, though developers of QDRDBMS extensions can also
post their announcements here.  This is not a discussion list.

 * [EMAIL PROTECTED]

This list is for general discussion among people who are using QDRDBMS,
which is not concerned with the implementation of QDRDBMS itself.  This
is the best place to ask for basic help in getting QDRDBMS installed on
your machine or to make it do what you want.  You could also submit
feature requests or report perceived bugs here, if you don't want to
use CPAN's RT system.

 * [EMAIL PROTECTED]

This list is for discussion among people who are designing or
implementing the QDRDBMS core API (including QDRDBMS D language
design), or who are implementing QDRDBMS Engines, or who are writing
core documentation, tests, or examples.  It is not the place for
non-implementers to get help in using said.

--

(Note: While these lists are run using mailman, I did turn off the 
feature to send subscribers their password every month, in case that 
was the bane of many.)


If you are interested in QDRDBMS but don't know which list or lists 
are most applicable to you, then perhaps look at the similarly-named 
3 mailing lists for DBI or Bricolage or other projects, especially 
DBI; the above 3 are intended for analogous purposes.  Also like the 
DBI lists, any list member can post to any list, though non-members 
can not post to them.


Note that I anticipate the lists will have no messaging activity for 
the first few weeks, but that should start about when I make my first 
CPAN release of QDRDBMS, expected in about a month or so.


But I am telling you about the lists now, so that when someone wants 
to make a post, there would be a fair number of people in position to 
read it.


Thank you in advance for any interest you may have in QDRDBMS.

-- Darren Duncan


RFC: host for new m-l for a DBMS module

2007-01-18 Thread Darren Duncan

Hello,

This email concerns an unreleased new Perl DBMS 
module/framework/engine that has separate Perl 5 and Perl 6 versions; 
the Perl 5 version is named QDRDBMS, and serves as a prototype for 
the Perl 6 version, which will have a different name, though the Perl 
5 version is expected to be used soon in production for several 
years.  This is a rewritten successor to my Rosetta project, whose 
name won't be used anymore.


QDRDBMS has not yet been released on CPAN, and I don't intend to put 
it there until it is complete enough to support a simple working 
demo, though I hope that will happen some time in February. 
Meanwhile, its various pieces can be seen at 
http://darrenduncan.net/QDRDBMS/ as a work in progress.


QDRDBMS is a self-contained fully-featured RDBMS implemented in Perl, 
a fact which is strongly emphasized.  (FYI, see also Genezzo on 
CPAN.)  But it has swappable engines (as DBI has swappable drivers), 
and with an appropriate engine substitution, the QDRDBMS API can 
alternately be used as a front for some other DBMS, such as what DBI 
and its wrappers do, and its API can be targeted by other wrappers. 
Or its query language / AST can be used as an interchange language.


QDRDBMS is not in managed version control yet, though I do manually 
make daily archives ... I plan to actually move it into version 
control a few days before the first CPAN release, where it would be 
subsequently maintained.  Most likely, the version control system I 
will use is that which Rosetta is currently in, which is the 
utsl.gen.nz SVN server graciously provided by Sam Vilain.


My main reason for writing this email is that I would like to have a 
reliable email discussion list setup for developers/users of QDRDBMS 
prior to the first CPAN release, so I can refer to it in my 
documentation as a place where to get help or make suggestions, and 
so discussion among co-developers can be centered there rather than 
being divided up among a variety of other mailing lists.  This list 
will be specific to the Perl 5 version, not the Perl 6 one (though 
they would crossover at times).


I would prefer to use a developer-focused server as is common already 
in the Perl community, and not use a general purpose server like 
Yahoo or Google.  From what I have seen with existing database or 
Perl related lists, they tend to be either at perl.org, or at 
sourceforge, or at an owner-specific domain.  While in the long term 
I will probably use an owner-specific domain (likely named after 
whatever the Perl 6 version will be called), during the first several 
years I can't do that.


I would like suggestions to where I may be best to apply for a 
mailing list at an existing managed host.


Currently, a perl.org address looks the most attractive to me, as 
that is where lists about DBI and Perl 6 are, and they have a lot of 
other lists besides.  Since I have no intention of hosting my project 
at Sourceforge, I don't know if it possible to get a mailing list 
with them in isolation from a project, or maybe that isn't actually a 
problem with using them.


Ideally, people would be able to view an archive of all list posts on 
the web, some how, and that it would be easy enough to transfer all 
the historical posts, with original email headers to a new host later 
if desired.


Any feedback as to where or who I should best ask for a list host, or 
offers for said, are appreciated.


Thank you in advance.

-- Darren Duncan

P.S.  I'll also want to set up a designated IRC channel for the 
project, though I anticipate that will be trivial in comparison, and 
I can concern myself with it later.  At least both freenode and 
perl.org seem to be set up so that you can go to any channel name 
and the channel will just exist while people are there.


Re: DBD::mysql 4.00 released!

2006-12-24 Thread Darren Duncan

At 6:36 PM -0500 12/24/06, Patrick Galbraith wrote:

Dear users and developers,
I'm pleased to announce the release of DBD::mysql 4.00! This is the 
long awaited version 4.00 release which is the result of stabilising 
the former 3.000N_n tree.


That's great to see, and I'll put it to work!

 As you'll notice, the version has been changed to conform to best 
practices for perl versioning from the 3.000N to 4.00. This means 
future versions would be 4.01, 4.02, etc.


On that note, I have a strong suggestion for before your next releases ...

While version numbers like X.YY were best practice for Perl modules 
many years ago, the current best practice is X.YYYZZZ, which is 
forwards-compatible with the more modern X.Y.Z scheme: the scheme 
Perl 5 has used for itself since version 5.0 (5.008007 today), the 
Perl 6 default for versions of all kinds of entities, the 
recommendation for Perl 5 modules, and the same as a majority of 
other programming projects out there, AFAIK.


So while the difference is inconsequential for 4-point-oh, please 
increment either the 3rd or 6th fractional digit from now on, 
releasing e.g. 4.1.0/4.001 or 4.0.1/4.000001.
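The decimal-to-dotted mapping in question can be checked with the core version module; a minimal sketch:

```perl
use strict;
use warnings;
use version;

# The X.YYYZZZ scheme: each group of three fractional digits becomes one
# dotted-decimal component, so these decimal forms round-trip cleanly.
print version->parse('4.001')->normal,    "\n";  # v4.1.0
print version->parse('4.000001')->normal, "\n";  # v4.0.1
```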


Now is the *perfect* time for you to move along to that path.

-- Darren Duncan


Re: MySQL upgrade caused Perl coredumps...

2006-11-04 Thread Darren Duncan

At 10:35 PM -0600 11/3/06, NIPP, SCOTT V \(SBCSI\) wrote:

I recently upgraded MySQL to version 5.0.4.  After the upgrade,
several of my Perl scripts that access the MySQL database began to fail
with Memory fault coredump messages.  Anyone have any suggestions?
I'm hoping a simple DBI::MySQL upgrade will fix this issue.  Here is the
Perl information...


Before you go about with that, you should move up to a production 
release of MySQL, such as 5.0.2x ... MySQL 5.0.4 was an alpha or beta 
release, and is well over a year old already.  Unless you mistyped 
and meant to say 5.0.24.


Also, if you're using MySQL v4.1 or higher, you should be using the 
DBD::mysql v3.x series of drivers.  If nothing else, because the 2.9x 
series will fail to connect due to MySQL user password hashes having 
changed in length between 4.0 and 4.1, which affects database users 
created after the switch.


-- Darren Duncan


RFC: proposed QDRDBMS temporary module

2006-09-15 Thread Darren Duncan
 for defining stored procedures or triggers.

10.  Various other missing or unsupported or poorly supported things.

11.  It will require Perl 5.8.x or later, possibly 5.8.8+ to keep it simple.

12.  It will not come with a shell.

13.  All SQL identifiers will be generated as case-insensitive 
barewords, rather than as quoted case-sensitive delimited identifiers, 
as Rosetta will do.


14.  If a table is defined in the schema that includes attributes 
that are table as well, this will be implemented as a split 
one-to-many table, even if the underlying DBMS supports actual 
multi-valued attributes.


Now, the QDRDBMS module is intended to be temporary, providing 
functionality that I want to use now and that you may find useful 
now.  While it can be further evolved by myself or other interested 
parties that want to use it now, ultimately it will be deprecated in 
favor of Rosetta and Perl 6, and so may be deleted from CPAN when 
Rosetta is stable and can do everything it does.


(Of course, by then, Rosetta may not be named Rosetta anymore, but 
something else which is better for long term use; I do plan to rename 
Rosetta anyway.)


Regarding the RFC nature of this email, feel free to say whatever 
you want now, or feel free to wait until I actually release this. 
Or, perhaps the best piece of feedback I can get short term is what 
to name the module, knowing that it is meant to be temporary.  I 
thought "quick and dirty" would work well enough for the purpose, 
like it did for QDOS back in the '80s.


Thank you.

-- Darren Duncan


a new Rosetta snapshot is released (v0.722.0)

2006-03-20 Thread Darren Duncan

All,

This isn't a formal announcement, but I would still like to alert you 
to the newest Rosetta release v0.722.0 ( 
http://search.cpan.org/dist/Rosetta/ ).


My own conception of the project has changed a lot in the last couple 
months, and now a lot of the main documentation reflects this, so 
there is now something reasonably up to date for you to look at and 
get an idea where I'm going with this.


Mainly I draw your attention to the DESCRIPTION of Rosetta.pm ( 
http://search.cpan.org/dist/Rosetta/lib/Rosetta.pm ), and to the top 
half of Language.pod ( 
http://search.cpan.org/dist/Rosetta/lib/Rosetta/Language.pod ), 
specifically the parts that *don't* say "older".  All of that was 
written since the previous release, and contains a lot of significant 
ideas.


At the same time, all of this is still (part of) the 10,000 mile 
view, which is good for setting a context to understand the details 
and code examples which are not yet present but will come later.


Also, the included new application shell.pl does run, though its 
functionality is bare bones; it will be fleshed out soon.


As usual, also pay attention to the public Subversion repositories, 
whose urls are in the README, as they can be several weeks newer than 
CPAN releases.


Enjoy!

-- Darren Duncan


ANNOUNCE: first post-rewrite Rosetta release (v0.720.0)

2006-02-02 Thread Darren Duncan

2006-02-01   Darren Duncan [EMAIL PROTECTED]
--

I am pleased to announce the first CPAN release of the second major 
code base (started on 2005-10) of the Rosetta database access 
framework, v0.720.0, which is available now in synchronized native 
Perl 5 and Perl 6 versions.


This is a complete rewrite, including very different detail designs, 
implementations, and documentations, though it still retains the same 
high level design and purpose.




The Perl 5 version is composed of these 2 distributions (more come later):

 * Rosetta-v0.720.0.tar.gz
 * Rosetta-Engine-Native-v0.1.0.tar.gz

These have Locale-KeyedText-v1.72.1.tar.gz (released at the same 
time) as an external dependency.


The Perl 6 versions of all 3 of the above items are bundled with 
Perl6-Pugs-6.2.11.tar.gz (released a half-day earlier) in its ext/ 
subdirectory.


The Perl 6 versions don't depend on anything outside the Perl6-Pugs 
distro that they live in.  But the Perl 5 versions also have external 
dependencies on Perl 5.8.1+ and these Perl 5 packages, which add 
features that Perl 6 and Pugs already have built-in: 'version', 
'only', 'Readonly', Class::Std, Class::Std::Utils, Scalar::Util, 
Test::More; the latter 2 are bundled with Perl 5.




Following is both a reintroduction to the remade Rosetta as it is and 
will soon be, and a summary of the main changes from before the 
rewrite (first major code base of 2002 thru 2005-09).


For various reasons, some of which will be laid bare below, it 
should be more apparent than ever that Rosetta is not just another 
DBI wrapper; it really stands out as something different from any 
existing tools on CPAN.


Note that many of these details aren't yet in Rosetta's own 
documentation (they will be later), so they are distinct to this 
email.


* Locale::KeyedText is officially not part of the Rosetta framework 
anymore, being a distinct external dependency instead of its 
localization component.


* Anything that was in the SQL::Routine name space has been renamed 
into the 'Rosetta' name space.


* Briefly comparing DBI to Rosetta, DBI provides users with database 
driver independence; Rosetta provides them with database language 
independence, which is a higher abstraction, but it should still work 
quickly.


* Rosetta is now officially a federated relational database of its 
own that just happens to be good with cross-database-manager 
portability issues, and be good as a toolkit on which to build ORMs 
and persistence tools, rather than being mainly about portable SQL 
generation.


* The native query and schema design language of Rosetta is now based 
mainly on Tutorial D (by Christopher J. Date and Hugh Darwen) and 
closely resembles relational algebra, rather than being based on SQL 
as it was before (note that some current documentation suggests 
otherwise, but that will be rewritten).


* Note, see http://www.oreilly.com/catalog/databaseid/ , the book by 
Date named "Database in Depth", which is one of the best references 
on database design I have ever seen.  Everyone who works with 
databases should read it.  It's not dry, and it has practical stuff 
you can apply right now.  I am.


* The native language of Rosetta is presently called Intermediate 
Relational Language (IRL, pronounced "earl", or "girl" without the 
"g"); it is inspired by Pugs' PIL, which serves a similar purpose 
for Perl 6 to what IRL does for Tutorial D and SQL and other 
languages.


* IRL is strongly typed, where every value and container is of a 
single type, and permits user data type definitions to be arbitrarily 
complex (such as temporal and spatial data) but non-recursive.  Aside 
from forbidding references, it includes the features of so-called 
"object-relational" databases which are actually part of the true 
plain relational data model.  Values of each distinct data type 
cannot be substituted as operator arguments for others, or stored in 
containers for others, but they can be explicitly cross-converted in 
some circumstances (eg num to str or str to num).


* Despite actually being strongly typed, IRL has facilities to 
simulate weak data types over strong ones; for example, you can 
define an SV type that has numerical and character string components. 
More broadly speaking, you can define multi-part disjunctive types, 
each member of a different type, where only one member holds a 
significant value at once, and the others hold their type's concept 
of an empty value; actually, these have a single extra member that 
says which of the others holds the significant value.
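
A rough sketch of that idea (hypothetical Python, not IRL itself): a 
tag records which member is significant, and an "Unknown" member lets 
strictly two-valued host logic simulate SQL-style three-valued 
outcomes:

```python
# Hypothetical sketch of a disjunctive value: only one member is
# significant at a time, and a tag records which one.
UNKNOWN = object()  # stands in for a system-defined "Unknown" type

def tri_and(a, b):
    # Every test below is plain 2-valued logic; the "unknown"
    # outcome is just another tag, not a third boolean value.
    if a is False or b is False:
        return False
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    return True

print(tri_and(False, UNKNOWN))            # False wins regardless
print(tri_and(True, UNKNOWN) is UNKNOWN)  # otherwise unknown propagates
```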


* IRL natively uses 2-valued-logic (2VL) like Tutorial D, and not 
3-valued-logic (3VL) like SQL, so every boolean valued expression 
always evaluates to true or false, not true or false or unknown (a 
SQL NULL).  But it does simulate 3-valued-logic using disjunctive 
data types, one of whose members is the system defined Unknown 
strong data type, which can only ever hold

Re: Column names have spaces in them!!!

2006-02-01 Thread Darren Duncan

At 9:44 AM +0100 2/1/06, Alexander Foken wrote:
You could slowly migrate your system, a first step would be to make 
sure all table and column names match /^[A-Za-z][A-Za-z0-9_]+$/. 
Next, move the tables and the business logic to a real database. 
Then finally, get rid of ODBC drivers and Access on the clients and 
switch to a web frontend using a native driver like DBD::Oracle or 
DBD::Pg.


Any real database can use all of the same identifiers that Access 
can, and in fact you can use any character at all in one.  You just 
wrap them in double-quotes everywhere they are referenced, at least 
with databases conforming to the SQL standard's delimited 
identifiers; some products may use alternate delimiters.  Chopping 
out the spaces and such only cripples you. -- Darren Duncan
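
The point can be demonstrated directly (a Python sqlite3 sketch for 
self-containment; from Perl the same SQL works through DBI with the 
identifiers quoted the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Double-quoted (delimited) identifiers may contain spaces and other
# arbitrary characters, per the SQL standard; SQLite honors this.
conn.execute('CREATE TABLE "Person" ("Home Telephone" TEXT)')
conn.execute('INSERT INTO "Person" ("Home Telephone") VALUES (?)',
             ('555-0100',))
row = conn.execute('SELECT "Home Telephone" FROM "Person"').fetchone()
print(row[0])
```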


Re: Column names have spaces in them!!!

2006-02-01 Thread Darren Duncan

At 9:30 AM +1100 2/2/06, Ron Savage wrote:

On Wed, 01 Feb 2006 13:36:41 +0100, Alexander Foken wrote:


 Right. But using a restrictive set of characters for table and
 column names makes things easier. The column for the number of


Just as I raved about back in 2003 :-):


Now, I understand the arguments for limiting names to certain 
characters that don't need delimiting, and that's fine for some 
contexts.  But I also wanted to make clear that there are some 
contexts where such limiting is counter productive, and so it is good 
for database engines to be flexible, regardless of how users decide 
to design databases.


I considered it important to be able to support spaces and other 
arbitrary characters, because I see this support as making things 
easier to use from a non-programming user's point of view.


Say we're making a consumer database application that is of the 
point-and-click GUI variety and has little to no structural 
abstraction (such as with Filemaker Pro or MS Access or perhaps 
Oracle Forms), where users can define their own tables and columns 
and such in the same ways they can define category names or folder or 
file names, where they just type the name into a text input box. 
These users would never see anything resembling programming code, 
such as SQL, but just know they're making a database.


It would be quite natural for such users to make identifiers like 
'Person' and 'Home Address' and 'Home Telephone' and 'Work Telephone' 
and so on; it isn't natural for them to say 'Home_Telephone' and such.


So if we're making this consumer application and it is implemented on 
top of a generic SQL database, and assuming there is very little 
abstraction involved, we would probably be generating SQL code that 
lines up the names the users chose with identical table and column 
and such names, including the spaces.  There would be no translation 
necessary since the identifiers in the database are the same as what 
the users see.


Sure, we could impose naming restrictions on the users, so we can 
generate SQL without delimited identifiers, but then that would be 
making things more difficult for the users to make it easier for 
ourselves.  And it isn't even that much extra work to support this, 
or it may be less work.


From users' point of view, I see allowing this flexibility to be like 
allowing them to name their file system files anything they want, and 
often they like to put spaces in file names.  The users aren't 
creating programming code; they're creating boxes to put their data 
in, conceptually not far off from folders and files.


I'm also not suggesting that identifiers have to be arbitrarily long, 
like whole sentences; rather, just about the same lengths as are 
valid for folder or file names; just they should be allowed to 
contain spaces and such.


-- Darren Duncan


Re: Future versions of DBI to require perl = 5.8

2006-01-27 Thread Darren Duncan

At 12:03 PM + 1/25/06, Tim Bunce wrote:

FYI I'm planning on making the next release (1.51) be the last that
officially supports perl 5.6.

This is partly to make it easier to implement changes in future
releases that improve performance with threaded perls. This will benefit
ActiveState perl users, people using DBI with mod_perl2, and users
of O/S distributions that ship perl pre-built with threads enabled.

Tim.


I have no problem with that (and in fact suggested so a year ago).

Moreover, I suggest you go a bit further and say that 5.8.1/5.008001 
is the minimum version, rather than 5.8.0; no one should actually be 
using 5.8.0 given all the bugs it has, and the 5.8.1 delta fixed more 
than any subsequent release, I think.


Moreover, with this move, you can count on Perl supporting Unicode 4, 
rather than just 3.2 (in 5.8.0).


As far as how this affects people using the system perl, I can speak 
at least for Mac OS X: the first version bundling 5.8.x, Panther, 
bundled 5.8.1 (rc3), so something requiring 5.8.1 will run on it.


I currently use 5.8.1 as the declared minimum in my modules.

-- Darren Duncan


Re: Future versions of DBI to require perl = 5.8

2006-01-27 Thread Darren Duncan

At 10:51 AM +1100 1/28/06, Ron Savage wrote:

On Fri, 27 Jan 2006 14:20:31 -0800, Darren Duncan wrote:
  Moreover, I suggest you go a bit further and say that

 5.8.1/5.008001 is the minimum version, rather than 5.8.0; no one
 should actually be using 5.8.0 given all the bugs it has, and the
 5.8.1 delta fixed more than any subsequent release, I think.


This is a bit trickier than it seems at first. Where I work - Monash 
Uni in Melbourne - the nature of the contracts with Red Hat suggest, 
AFAICT, that 5.8.0 will be with us indefinitely :-(. The standard but 
simplistic reply, "install your own version of Perl", has of course 
long term maintenance problems of its own, even if it were possible. 
So I'll be sticking with the DBI compiled by the sys admin...


Yes, and people could say that about other specific versions too.

I also don't see why the Red Hat supplied distro can't be more up to 
date; in fact, I would expect any ongoing contract with them to 
include furnishing up-to-date distros.


Even if you can't move, it's not like 5.8 is becoming a hard 
dependency, rather just a soft one, as I recall.


-- Darren Duncan


ANNOUNCE: retro Rosetta for Friday the 13th

2006-01-13 Thread Darren Duncan

2006-01-13   Darren Duncan [EMAIL PROTECTED]
--

For this special day of Friday the Thirteenth of 2006, I would like 
to announce one last set of releases for the pre-rewrite Rosetta 
database access framework.


This retroactive release (which you can treat as if it came out on 
2005-09-29) is intended to help ease the transition between the first 
and second major code bases of the Rosetta database access framework, 
at least from the point of explaining what changed between them. 
(The first codebase was written mainly between 2002-12 and 2005-09, 
while the second rewritten code base was mainly between 2005-10 and 
later.)


The changes with this release are all summed up by that there is no 
longer a distinct SQL::Routine name space, and all packages that 
were in that name space are now in the Rosetta name space.  There 
are no actual functionality changes, just a shifting around of 
components.  This release makes it so that the core files of the 
pre-rewrite Rosetta framework have all of the same and/or compatible 
names and locations as what the post-rewrite replacements will be.


These are the component distributions (which should be on CPAN any hour now):

 * Rosetta-v0.71.0.tar.gz (result of 2-way merger)
- follows SQL-Routine-v0.70.3.tar.gz and Rosetta-v0.48.3.tar.gz
- SQL::Routine is now renamed to Rosetta::Model
 * Rosetta-Utility-SQLBuilder-v0.22.0.tar.gz (renamed)
- follows SQL-Routine-SQLBuilder-v0.21.2.tar.gz
 * Rosetta-Utility-SQLParser-v0.3.0.tar.gz (renamed)
- follows SQL-Routine-SQLParser-v0.2.2.tar.gz
 * Rosetta-Engine-Generic-v0.22.0.tar.gz (no name changes)
- follows Rosetta-Engine-Generic-v0.21.2.tar.gz

These have Locale-KeyedText-v1.6.2.tar.gz (actually released on 
2005-09-28) as a dependency.


As you can see, the most significant change besides the renaming is 
that the entire core of the Rosetta framework is once again in a 
single distribution (that which was SQL::Routine has been brought 
back in), which should make things easier for developers and users 
alike.


For future reference, the actual Subversion repository for the first 
major code base of the Rosetta framework is 
http://svn.utsl.gen.nz/trunk/Rosetta-old/ ; the bundled ReadMe files 
are different and out of date, for historical posterity.


Any day now, the first releases of the second major Rosetta code base 
will appear on CPAN; these will be released concurrently with Pugs 
v6.2.11, and separate announcement emails will accompany them.


Thank you and have a good day. -- Darren Duncan


Re: extra dependancies vs. testability?

2006-01-10 Thread Darren Duncan

At 7:26 PM -0800 1/10/06, Tyler MacDonald wrote:

Another option that occured to me while I was loading the dishwasher
was bundling SQLite2 (or something like it) in my t/ directory. Sure, it
will consume a bit of extra bandwidth when the package is downloaded, but it
won't leave the bloat around once it's tested, installed (or packaged up
into something like a .deb or .ppd), and the source tree is purged. But I
still find myself thinking there's got to be a better way...


Actually bundling SQLite is a waste of space for a CPAN distribution, 
partly because it is quite large relative to the size of your own 
code; like 10-100X.


As for answering your question, I suggest including a small config 
file with your distro which your test script looks at.  Users can 
change that if they are testing an existing database setup of theirs, 
and if they don't customize this file, your test script will default 
to using SQLite so that the test suite can still test your module 
itself.


That's what the current/old Rosetta::Engine::Generic on CPAN does 
with its test suite, defaulting to SQLite (3); you can look at its 
tests for ideas.
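
That configuration-with-fallback approach might look something like 
this (a hypothetical Python sketch; the file name, section, and 
option names are invented, and the actual Rosetta test suite is 
Perl):

```python
import configparser
import os
import sqlite3

def test_db_path(conf_file="t/test.conf"):
    """Return the user-configured database path, else default to SQLite."""
    if os.path.exists(conf_file):
        cfg = configparser.ConfigParser()
        cfg.read(conf_file)
        # Fall back to in-memory SQLite if the option is absent.
        return cfg.get("database", "path", fallback=":memory:")
    return ":memory:"

# With no config file present, the suite still runs against SQLite.
conn = sqlite3.connect(test_db_path())
conn.execute("CREATE TABLE smoke_test (n INTEGER)")
```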


FYI, the not yet released Rosetta rewrite will be different, and 
include its own mock database to test with (called 
Rosetta::Engine::Native), so that testing the core can be done 
completely without any third party database.


Depending on your goals, it might be worthwhile for you to have a 
mock database.


But whatever you do, don't bundle third party things like SQLite, 
unless your distribution is private/proprietary and the recipients 
aren't going to CPAN.


-- Darren Duncan


Re: solved the RFC on SQL::Routine; new direction planned

2006-01-09 Thread Darren Duncan

Thanks for the further responses from Ron Savage and Jenda Krynicky.

As a follow-up, yesterday I decided to name the language IRL 
(Intermediate Relational Language), at least for now.


It's nice and short, and it's very descriptive.  An intermediate 
language for expressing relational structures and operations.


The naming convention is similar to the PIL (Pugs Intermediate 
Language) used in the Perl 6 camp, and they serve the same sort of 
purpose.


The name IRL could also double as a short form of Intermediate 
Rosetta Language, but I won't be touting this as I may possibly 
rename Rosetta to something else in the future, though there's no 
real impetus to do so now.


-- Darren Duncan


RFC: renaming SQL::Routine out of SQL

2006-01-06 Thread Darren Duncan
 be 
reclassified on CPAN to a different category than string language 
processing; perhaps it should go wherever PPI is classified, 
though I don't know where that is, or perhaps where PDL is, or 
perhaps under database interfaces even though it only defines one 
and doesn't implement it.  (In any event, Rosetta sits under 
database interfaces and will reference it.)


Thank you in advance for the feedback.

-- Darren Duncan

P.S. The chosen names would be used for and identical in the perl 5 
and perl 6 versions of this.


solved the RFC on SQL::Routine; new direction planned

2006-01-06 Thread Darren Duncan
This message is a follow-up to the one of a few hours ago titled 
RFC: renaming SQL::Routine out of SQL.  The context of the old RFC 
has changed.


The one on-topic reply I got so far, from David Wheeler (which 
suggested renaming the SQL::Routine language to Rosetta), 
inspired me to take a more drastic step than I proposed before.


This would be to essentially turn the Rosetta framework on its head, 
where Rosetta becomes first and foremost the name of a language 
rather than a role-class for the implementation of the language.


But I think that's actually okay, since the language definition is 
actually the more important and more central part.  And since the 
definition is effectively a desugared super-set of existing/possible 
database languages, it may actually be more suited to the name 
Rosetta, maybe.


In the process, the functionality of the old SQL::Routine and 
Rosetta would be merged into a single CPAN distribution, rather 
than having them be two separate core distributions.  Having them be 
separate was good on the theory that the functionality of the old 
SQL::Routine was independently useful; while that reality hasn't and 
won't change, the odds of anyone taking advantage of it would be 
slim, so releasing the pair together simplifies my maintenance. 
People can still choose to only use part of the distribution if they 
want.  And the important separation of engines, wrappers, etc 
into separate distributions from the core is still strongly in effect.


So perhaps then the language can be named Rosetta.

Regarding the related module namespace, for now the existing 
Rosetta[|::*] modules can keep their current names and 
functionality, which are suitable, and the SQL::Routine[|::*] 
modules can be renamed like this:


  lib/SQL/Routine/Language.pod  - lib/Rosetta/Language.pod
  lib/SQL/Routine/Migration.pod - lib/Rosetta/Migration.pod
  lib/SQL/Routine.pm- lib/Rosetta/Model.pm
  lib/SQL/Routine/L/en.pm   - lib/Rosetta/Model/L/en.pm

As before, lib/Rosetta.pm will use lib/Rosetta/Model.pm, and the 
latter can be used by a third party without their using the former.


Now, while the details of its language have changed significantly, 
the overall purpose and intended situations in which the Rosetta 
framework would be used are still the same as before.


Similar to the original RFC, I wonder if Rosetta is a good name for 
the language + framework, or whether something else may be better. 
But at least the 'SQL' is gone, and so I'm not in any rush to do 
further renaming.


As always, I welcome feedback on Rosetta, but I no longer require 
renaming suggestions, though you are still welcome to propose some if 
you think they would be better than Rosetta.


Thank you. -- Darren Duncan


RE: [cgiapp] Best free DB for a web-based Perl app response results...

2005-12-01 Thread Darren Duncan

At 2:05 PM -0500 12/1/05, Jesse Erlbaum wrote:

I really don't think your SQL Lite analogy is a valid one.  Oracle,
PgSQL and MySQL are hugely popular.  SQL Lite is a skunk works with no
proven track record.

Quick Google hits check:

  2,250,000 for Oracle +rdbms
756,000 for MySQL +rdbms
384,000 for PostgreSQL +rdbms
207 for SQL Lite +rdbms

A bit of a straw man, wouldn't you say?


I just *had* to reply to this one.

You made one of the biggest argument mistakes possible, which is 
basing judgement on blatently incorrect data.


Namely, you spelled SQLite wrong, so of course Google wouldn't find it.

Given the correct spelling, you would get *101,000* hits for SQLite + 
rdbms.  While lower than the others, it's still the same order of 
magnitude as MySQL or PostgreSQL.


  2,260,000 for: Oracle rdbms
740,000 for: MySQL rdbms
431,000 for: Postgres rdbms (which also returns 'PostgreSQL' numbers)
101,000 for: SQLite rdbms

SQLite is also far from a skunk works project and has a strong proven 
track record.


It has been around for a long time and is being used in a huge number 
of applications and devices.  (Most recently, it's even built into 
Mac OS X 10.4 for its Core Data component.)  Many uses aren't even 
advertised, since it is public domain and users don't have to say 
they're using it.


It also has dozens of active developers and a very busy mailing list.

Its inventor, D. Richard Hipp, was also honored with a top award at 
OSCON this year due to SQLite being one of the strongest movers and 
benefits to the open source community.


So some criticism of SQLite is warranted, such as scalability with 
lots of writers, but not what you said.


-- Darren Duncan


Re: What's the best free DB for a web-based app?

2005-11-29 Thread Darren Duncan

At 5:02 PM -0800 11/29/05, John Armstrong wrote:

Hi - I'm soon to be doing a Perl app on the Internet, that'll need a
database. We want the db to be as free as possible, but still fully
multi-user (web-based). Would the best route be MySql on Linux? Random
access files? Something else? We want no licensing obligations (no Oracle,
SQL Server, etc.). We want to go with Perl because it's the best
programming language invented by humans.


If you want something that's trivially easy to use, try SQLite.

The database manager is embedded in the DBD::SQLite module itself, 
and you have no separate server processes to maintain, and you can 
specify a database's storage location like an ordinary file.


It's also faster than anything else for some types of usage, and 
unlike many typical MySQL setups, it is fully transactional, ACID 
compliant, and safe.
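
For instance (sketched here with Python's built-in sqlite3 for a 
self-contained demonstration; the Perl equivalent would go through 
DBD::SQLite):

```python
import os
import sqlite3
import tempfile

# The database is just an ordinary file path; no server process to run.
path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (n INTEGER)")
conn.commit()

try:
    with conn:  # transaction: commits on success, rolls back on error
        conn.execute("INSERT INTO t VALUES (1)")
        raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    pass

# The failed transaction left no partial data behind.
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)
```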


Certain kinds of usage with many writing processes may be slow, 
though, since only one active writer can access a SQLite database at 
once.


SQLite is also public domain, which is about as free as you can get.

-- Darren Duncan


Re: Announce: JDBC 0.01 (Yes, that's Java JDBC)

2005-10-05 Thread Darren Duncan

At 11:45 PM +0100 10/4/05, Tim Bunce wrote:

There's no particular forum for discussion related to this yet.
In the short term the dbi-users mailing list will do as the module
was created to help with the design of DBI v2 (see the docs).


Would the dbdi-dev@perl.org list that you created last year be an 
appropriate discussion place for this?  Or would that only be used 
after the JDBC-inspired discussion ends? -- Darren Duncan


ANNOUNCE: reboot of Rosetta/SQL-Routine development, in Perl 6

2005-09-30 Thread Darren Duncan

2005-09-30   Darren Duncan [EMAIL PROTECTED]
--

I would like to acknowledge that, despite all the good things that 
have come out of it, I have had some significant problems in regards 
to the past development of my Rosetta rigorous database portability 
framework.


(As a reminder of what this is, a terse, 5-minute introduction can be 
found at http://darrenduncan.net/OSCON/OSCON2005LightningTalk.txt ).


The main problem being that it is taking an order of magnitude too 
long to come out with a product that is actually useable for 
practical work.  I see one root cause of this being my packrat nature 
of not wanting to throw out portions of the project (a lot of that 
being documentation) that I already invested time in, since they 
could still be useful.  Another main problem was a pursuit of 
perfectionism that left me spending too long on little details that 
could better have been put off until later, and this multiplied by 
the size of a system that has a semblance of being over-engineered. 
Also, in hindsight it was a mistake to limit certain implementation 
decisions in the core due to a desire to make a C version later, 
which is a less expressive tool.


I have decided that now is an excellent time to start over and remake 
the system from the ground up.  But while this is an opportunity to 
change many things, I see the larger scale design of the previous 
version to still be sound, as well as its planned feature set, and so 
therefore the new version will look like the old one at a distance. 
The main difference will be in the details of the API and in the 
internals, plus the new documentation won't be so verbose.  Many 
valuable lessons were still learned in the first attempt to make this 
project, and will be applied in the new version.  In some respects, 
the new version will actually be akin to a large refactor of the old, 
and not so much a rewrite where the fundamental design and feature 
plan sees change.  However, all of the code and documentation is in 
fact being re-typed, so many things can be changed, even with the old 
version being a rough guideline to follow.


In this process, I also draw a lot of inspiration from Autrijus Tang 
and his Pugs project, which demonstrates ways to come up with 
something amazing in a short period of time, and spark strong 
interest and participation from the wider community, a killer app of 
development models perhaps.


So what are the practical benefits and effects of this reboot, then? 
Here are the most pertinent:


1. Development speed will be several orders of magnitude faster than 
before, measured by the rate at which actual useable features are 
made available to users, even when I'm the sole developer (as I was 
before).  You will actually be able to see progress being made, and I 
will have more personal satisfaction in my output.


2. Development will now proceed vertically first and horizontally 
second, rather than the reverse which was true before.  This means 
that at any given period of time, we will have a small number of 
features that are 100% complete and useable *now* vs a large number 
of features that are 80% complete and wholly unuseable, which is the 
case in the older system.


3. Since people will be able to actually use the system for 
something, whether work or play, it will be a lot easier for them to 
get excited about it, then turn around and participate in its further 
development.


4. Perl 6 is now the primary native platform of the Rosetta 
framework, and Rosetta's design and implementation will be driven by 
what is considered the most natural and best practices in Perl 6. 
This action both helps the development of Pugs itself through the 
extra integration testing it provides, but it also helps anyone that 
wants to make database using applications in Perl 6 to migrate there 
faster.  The Perl 6 version will not be a port of the Perl 5 version 
after the latter is done.


5. But because Perl 6 is still very much a moving target, and its 
implementations unstable, a native Perl 5 version will be developed 
simultaneously and be kept in full parity with it; this task is made 
a lot easier by the multiple modules on CPAN or bundled with Perl 5 
that give it functionality and syntax like Perl 6 has natively, such 
as 'version' and 'List::MoreUtils' and 'Class::Std'; but the Perl 5 
version should still be very stable and efficient, a good Perl 5 
citizen.


6. If you have ideas you want to contribute towards improving 
Rosetta, it will be a lot easier for me to adopt them now that the 
pot has been emptied.  And don't worry if you think that your ideas 
were already in my unstated plans; it's always nice to have 
confirmation of an idea's validity.


7. The code will be visibly smaller and more nimble.

8. Certain classes of features will be available sooner than otherwise.

9. Lastly, I can finally start to build other products of my own that 
employ

Fwd: Announcing MySQL 5.0 Release Candidate

2005-09-26 Thread Darren Duncan
Users of the MySQL database manager should find this interesting. 
MySQL 5 is in release candidate status.


Besides the obvious big ticket new features, I would find the 
"information schema" and "strict mode" etc. features very useful. 
The former is a very good and standard way to get meta-data on your 
tables and such; meta-data being something that lots of DBI wrapper 
modules need to operate.


-- Darren Duncan



Date: Mon, 26 Sep 2005 19:41:07 +0300
From: Kaj Arnö [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Announcing MySQL 5.0 Release Candidate

Dear user of MySQL,

I'm proud and excited to announce the first Release Candidate of MySQL
5.0. This milestone signals that we are nearing what is certainly the
most important release in MySQL's history.

MySQL 5.0 has new functionality that I hope will be welcomed, adopted,
and put to productive use by the community of MySQL users -- you. On the
commercial side, MySQL AB is getting a lot of good vibes from new
enterprise customers who are beginning to understand the impact MySQL
can have on their IT infrastructure and costs of running
mission-critical applications.

The betas of MySQL 5.0 have already been downloaded two million times,
and have thus been tested extensively in a lot of different scenarios.
From the feedback we get from our community, we know it is already in
production use at numerous sites. Go get your own copy at
http://dev.mysql.com/downloads/mysql/5.0.html

Without question, MySQL 5.0 is the most ambitious release to date for
MySQL AB. Of course, everything we do at MySQL centers around our three
priorities of Performance, Reliability, and Ease of Use.  Version 5.0 of
MySQL is certainly true to these company-wide imperatives.

Key new features of MySQL 5.0 come in three groups:

a) ANSI SQL standard features formerly unknown to MySQL
b) ANSI SQL standard compliance of existing MySQL features
c) New MySQL Storage Engines, Tools and Extensions

The new ANSI SQL features include:

- Views (both read-only and updatable views)
- Stored Procedures and Stored Functions, using the SQL:2003 syntax,
  which is also used by IBM's DB2
- Triggers (row-level)
- Server-side cursors (read-only, non-scrolling)

Implementing ANSI SQL standard ways of using existing MySQL features
means there will be fewer unpleasant surprises (gotchas) for those
migrating to MySQL from other database systems:

- Strict Mode: MySQL 5.0 adds a mode that complies with standard SQL
  in a number of areas in which earlier versions did not; we now do
  strict data type checking and issue errors for all invalid dates,
  numbers and strings as expected
- INFORMATION_SCHEMA: An ANSI SQL-compliant Data Dictionary for
  accessing metadata, in parallel to the MySQL specific SHOW
  commands
- Precision Math: A new library for fixed-point arithmetic, giving
  high accuracy for financial and mathematical operations
- VARCHAR Data Type: The maximum effective length of a VARCHAR column
  has increased to 65,532 bytes; also, stripping of trailing whitespace
  no longer occurs
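For instance, the information formerly available only through SHOW TABLES can
now be obtained with a portable, standard query (the DSN, schema name and
credentials below are placeholders for your own setup):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# connection details are placeholders, not part of the announcement
my $dbh = DBI->connect('dbi:mysql:database=test;host=localhost',
                       'user', 'password', { RaiseError => 1 });

# a plain SELECT against INFORMATION_SCHEMA instead of SHOW TABLES
my $sth = $dbh->prepare(q{
    SELECT TABLE_NAME, ENGINE
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA = ?
});
$sth->execute('test');

while (my ($table, $engine) = $sth->fetchrow_array) {
    print "$table ($engine)\n";
}
$dbh->disconnect;
```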

New MySQL Storage Engines, Tools and Extensions are:

- XA Distributed Transactions
- ARCHIVE Storage Engine for storing large amounts of data without
  indexes in a very small footprint, intended for historical data that
  may be needed for future audit compliance (Sarbanes Oxley or
  otherwise)
- FEDERATED Storage Engine for accessing data in tables of remote
  databases rather than in local tables (only in MAX version)
- Instance Manager: a tool to start and stop MySQL Server, even remotely

To find out more details on what's new in MySQL 5.0, follow the pointers
from http://dev.mysql.com/doc/mysql/en/mysql-5-0-nutshell.html

To find out the changes specific to MySQL 5.0.13 
in relation to 5.0.12, see 
http://dev.mysql.com/doc/mysql/en/news-5-0-13.html


MySQL 5.0 is also reflected in our GUI tools and Connectors:

MySQL Administrator 1.1.3 and MySQL Query Browser 1.1.15 are aware of
the new MySQL 5.0 features, can be used to write and test stored
procedures, create views, include them in scheduled backups and much more.

The latest shipping versions of our Connectors work with MySQL 5.0, and
all connectors (MySQL Connector/ODBC, Connector/J and Connector/NET)
will support all flagship features before 5.0 goes GA.

We're pleased that we've reached a point of stability where it's been a
while since we received a significant inflow of bugs that drastically
impacted a large number of users. Of course, we recognize we haven't
crossed the finish line yet, so we still very much need your involvement
to ensure that MySQL 5.0 is the best that it possibly can be.

With MySQL 5.0 being that stable, I encourage you to do all of your new
database development using MySQL 5.0:

1. Download 5.0 from http://dev.mysql.com/downloads/mysql/5.0.html
2. Use the new features you need
3. Report any issues you find through our bug-reporting system at
   http://bugs.mysql.com/.

To show our appreciation

Re: ANNOUNCE:: DBIx::MyParse 0.20

2005-09-21 Thread Darren Duncan

At 10:52 AM -0700 9/21/05, Jeff Zucker wrote:
The "My" in MySQL is the name of the daughter of one of the 
founders (Monty IIRC) and has nothing to do with the English word 
"my".  But don't let me interrupt your rant :-).


Well, you learn interesting new trivia each day; I can use that one; 
thanks. -- Darren Duncan


ANNOUNCE: Rosetta/SQL-Routine developer release #3

2005-09-10 Thread Darren Duncan

2005-09-10   Darren Duncan [EMAIL PROTECTED]
--

I am now pleased to announce the third developer release of my 
Rosetta rigorous database portability library, currently featuring 
the SQL::Routine module.  (For reference, the second developer 
release was on 2005-04-03, and the first developer release was on 
2004-08-07.)


The module set that can be broadly referred to as the 'Rosetta 
Framework' is composed of the following 6+ distributions; all of them 
are in native Perl 5 versions, and native Perl 6 versions are being 
co-developed beside Pugs (see its /ext directory for the latter).


A terse, 5-minute introduction to the framework as a whole can be 
found at http://darrenduncan.net/OSCON/OSCON2005LightningTalk.txt .


These native Perl 5 distributions are on CPAN now, 
http://search.cpan.org/~duncand/ :


  1. Locale-KeyedText-1.6.0.tar.gz
  2. SQL-Routine-0.70.0.tar.gz (dep on 1)
  3. Rosetta-0.48.0.tar.gz (dep on 1, 2); also holds Rosetta::Validator
  4. SQL-Routine-SQLBuilder-0.21.0.tar.gz (dep on 1, 2)
  5. SQL-Routine-SQLBuilder-0.2.0.tar.gz (dep on 1, 2)
  6. Rosetta-Engine-Generic-0.21.0.tar.gz (dep on 1, 2, 3, 4, 5)

For the bleeding edge, check out my new public Subversion repository 
(donated by Sam Vilain / mugwump), at 
http://svn.utsl.gen.nz/trunk/Rosetta/ .


Now, keep in mind that the over-all Rosetta Framework is in pre-alpha 
development status and DOES NOT WORK yet (but that the 
Locale-KeyedText component is complete and DOES work right now).  The 
purpose of this developer release announcement is to remind 
interested people that they can look at my current development 
snapshot for reasons of study and feedback.


I welcome and encourage any and all feedback on these modules that 
you can give me.  It would be VERY helpful to me.  I am also grateful 
if you can find these modules useful in your own work.  Likewise, if 
you have any questions, I will do my best to answer them.  If you 
wish to help with their development in some way, I am also open to 
such offers.


--

This list indicates some of the more significant changes to the 
framework since the second developer release, as I recall them from 
memory.  For more details of this delta period, see the 'Changes' 
files that come with each of the above distributions.


* Starting with the current release set (2005-09-08), all modules and 
distributions now have three-part version numbers, which are more 
expressive, rather than floating point version numbers.  Related to 
this, all modules also have gained dependencies on the 'version' and 
'only' modules, which are available on CPAN; their functionality is 
built-in to Perl 6.


* There is now a public Subversion repository for all of these modules.

* 'SQL::Routine' and 'Rosetta' have gained a dependency on 
Scalar::Util (bundled with Perl) and use its weaken() function; 
object trees for each module will now auto-destruct normally, despite 
having circular references between them, and users no longer have to 
explicitly destroy() them.
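For those unfamiliar with the technique, here is a generic sketch of what
weaken() buys; this illustrates the idea only, and is not SQL::Routine's
actual internals:

```perl
use strict;
use warnings;
use Scalar::Util qw(weaken);

# A parent/child pair that point at each other; without weaken(), the
# circular reference would keep both structures alive indefinitely.
my $container = { name => 'model' };
my $node      = { name => 'node', container => $container };
$container->{nodes} = [ $node ];

# Weaken the child's back-reference so only the container "owns" the
# node; the back-reference becomes undef if the container goes away.
weaken($node->{container});

# When the last normal reference to $container is released, the whole
# tree is freed by Perl's reference counting, with no destroy() call.
```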


* SQL::Routine::SQLBuilder now has a test suite and is actually known to work.

* 'SQL::Routine' has been simplified greatly.  Nodes can no longer 
exist outside Containers, so there's no extra add/remove steps.  The 
multiplicity of attribute-type specific Node attribute accessors are 
gone, with only more generic accessors remaining.  The build_* 
methods now take a more terse input format, composed of mixed array 
and hash refs, rather than all hash refs.


* SQL::Routine has gained a number of security features, making it 
easier to share a model by reference between various program 
sections, without them stepping on each other.  For example, one 
program section can place a read-lock on sections of the model it is 
generating actions from, so other sections can't change those and 
invalidate it.


* All files are now indented with spaces rather than tabs.

* Various POD sections have been reorganized/renamed/added, now 
better resembling the recommendation given by Damian Conway's Perl 
Best Practices.


* 'Rosetta' has had huge changes to its API, to make it easier to use 
and more secure.


--

This list indicates some of the most significant TODO items, which I 
expect to produce over the next few weeks, to end up with the fourth 
developer release.


* Make another large change set to the Rosetta API, which includes 
adding more Interface object types, making it more secure, and 
splitting 'compile' and 'prepare' apart from each other.


* Add functionality to Rosetta::Engine::Generic so that the framework 
as a whole can actually be used for something.


* Add a small demo app that demonstrates its being useful.

* Convert all objects to the inside-out format described in Best 
Practices, so as to make it impossible for users to circumvent the API 
and mess with an object's properties directly; such users are the 
bane of many maintainers.
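For readers who haven't met inside-out objects, here is a minimal generic
example of the technique (again, not Rosetta's actual code); attributes live
in lexical hashes keyed by object address, so callers can't reach them:

```perl
use strict;
use warnings;

package Counter;
use Scalar::Util qw(refaddr);

# attribute storage is a lexical hash, invisible outside this package
my %count_of;

sub new {
    my $class = shift;
    my $self  = bless \my $scalar, $class;   # the object itself is opaque
    $count_of{ refaddr $self } = 0;
    return $self;
}

sub increment { my $self = shift; return ++$count_of{ refaddr $self } }
sub count     { my $self = shift; return $count_of{ refaddr $self } }

# clean up this object's slot when it is garbage-collected
sub DESTROY   { my $self = shift; delete $count_of{ refaddr $self } }

package main;
my $c = Counter->new;
$c->increment for 1 .. 3;
print $c->count, "\n";   # prints 3
```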


* Change the SQL routine

Re: DBI v2 - The Plan and How You Can Help

2005-08-16 Thread Darren Duncan

At 4:04 PM +0100 8/16/05, Tim Bunce wrote:

I was a little disappointed that there wasn't greater focus on using
Perl6 features - especially as it would have helped kick-start my own
understanding of Perl6 topics that I expect to be significant (such as
Roles and Pairs, to pick two at random). Perhaps the community of
Perl6+DBI users is too small at this point.


One way that the Perl 6 thought process can be started is in 
considering the design principles laid out in Damian's new Best 
Practices book.  I said to Damian at OSCON that I thought the 
practices he was putting forward were intended to get people thinking 
now in Perl 5 about ways of doing things that will be the natural way 
of doing them in Perl 6; he said something along the lines that I had 
good insight.  So these practices are probably some good things to 
keep in mind as we move forward.


Now, speaking specifically in Perl 6 terms ...

I suggest that DBI v2 have a more formal separation between interface 
and implementation.  The parts of DBI can be grouped into these 
categories:


1. Role definitions for the public behaviour/API that DBI-using apps see.

2. Role definitions for the behaviour/API that DBI drivers/engines must have.

3. Class definitions that implement #1 and invoke #2.

4. Class definitions having a generic implementation of #2 or parts 
thereof, which a driver/engine can complete or override.


5. Basic utility classes that exist to the side of the above, which 
parties such as DBI drivers can optionally use to do some common 
things without rolling their own.


6. A basic test suite.
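A rough Perl 5 sketch of how categories 1 through 4 could relate (all names
here are hypothetical; in Perl 6, the first two would be expressed directly
as roles rather than as abstract classes):

```perl
use strict;
use warnings;

# Category 2: the behaviour every driver/engine must provide.
# In Perl 6 this would be a role; here it is an abstract base class.
package DBI2::Engine::Interface;
sub new     { my $class = shift; bless {}, $class }
sub execute { die ref(shift) . " must implement execute()" }

# Category 4: a generic partial implementation a driver can complete.
package DBI2::Engine::Generic;
our @ISA = ('DBI2::Engine::Interface');
sub execute { return 'generic result' }

# Category 3: the class apps see (category 1's API), delegating to an engine.
package DBI2;
sub connect {
    my ($class, $engine) = @_;
    return bless { engine => $engine }, $class;
}
sub do { my $self = shift; return $self->{engine}->execute(@_) }

package main;
my $dbh = DBI2->connect(DBI2::Engine::Generic->new);
print $dbh->do('SELECT 1'), "\n";   # prints "generic result"
```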

I also recommend expelling some parts of the DBI distro into their 
own distros and/or leaving them to third parties.  A prime example is 
the proxy server/client stuff; that should be a separate project.


-- Darren Duncan


Re: [Catalyst] DBIx-Class list, cdbi-talk closure (fwd)

2005-07-29 Thread Darren Duncan

At 2:23 PM -0400 7/28/05, Christopher Hicks wrote:

-- Forwarded message --
Date: Thu, 28 Jul 2005 17:47:46 +0100
From: Matt S Trout [EMAIL PROTECTED]

Ok, due to popular demand I've got gabb to set up [EMAIL PROTECTED]
for DBIx-Class discussion (thanks gabb!). I've stopped hacking on it for
a few days while I think through where I want to go with the code, so now
would be the time to make design/feature suggestions!

Also, with the recent unexpected closure of cdbi-talk I (and I guess most of
the other people involved) would be happy to answer Class::DBI questions on
there, at least as a stop-gap measure until a new dedicated cdbi list turns
up.
 Matt S Trout   Website: http://www.shadowcatsystems.co.uk
  Technical DirectorE-mail:  mst (at) shadowcatsystems.co.uk
Shadowcat Systems Ltd.


I wrote Matt S Trout (also cc'd now) partly on this topic, and he 
asked me to forward this following extra information to dbi-users. -- 
Darren Duncan




Date: Fri, 29 Jul 2005 03:40:42 +0100
From: Matt S Trout [EMAIL PROTECTED]

On Thu, Jul 28, 2005 at 06:45:03PM -0700, Darren Duncan wrote:

 FYI, I noticed your contact info in a message forwarded to dbi-users
 concerning a DBIx-Class module that seems to be related to Class-DBI.


Well, it *was* a research project, but since Tony Bowden decided to
delete the cdbi-talk list and wiki, what happens next I can't say anymore
- the refugees from there are all welcome on the dbic list, and I appear
to already have 50 subscribers.

The aim is to produce a cdbi-style more pluggable (via MI and NEXT.pm)
resultset (rather than table, i.e. native views etc.) oriented ORM with
a cdbi compatibility interface for easy migration (and as a good way to
exercise the codebase). It ain't production-ready yet but it's coming along
nicely.

List is at http://lists.rawmode.org/mailman/listinfo/dbix-class

SVN is at http://dev.catalyst.perl.org/repos/bast/trunk/DBIx-Class/

I should probably get on dbi-users, but my mailbox is full enough atm. Could
I perhaps put upon you to knock out a quick reply to that forward outlining
what I've just said (poss. just C+P-ing from "The aim")?


RE: DBIx::Recordset and Oracle

2005-07-27 Thread Darren Duncan

At 10:17 AM +0100 7/27/05, Tielman de Villiers wrote:

One of the biggest reasons I chose to use DBIx::Recordset is for its
start (aka offset) and max (aka limit) search parameters. Most of
the web applications I work with simply require paginated selects
from a multitude of databases. Using the DBI posed no problems as long
as the databases had OFFSET and LIMIT capabilities, but the notable
exceptions were Informix and Oracle. My choice was then to either select
all rows and do the limiting and offsetting at the application level,
or having separate SQL for each database. DBIx::Recordset seemed to be a
good alternative, and I have been using it for a while now without
problems. I only hit upon this latest problem (of DBIx::Recordset
collecting metadata) when I was testing against a rather large and
better secured (ie, user permissions, schemas, etc) database.


Well, the best solution is still to get Oracle to do all the work. 
Back around 4 years ago, I used a double-subquery to emulate the 
non-existent limit/offset functionality, when I wanted paginated 
selects with Oracle 8i.


Here's an Oracle-native string SQL example, which may contain errors:

  SELECT foo, bar
  FROM (
    SELECT foo, bar, rownum AS my_row_num
    FROM (
      SELECT foo, bar
      FROM baz
      ORDER BY foo
    ) baz
  ) baz
  WHERE my_row_num >= :min_index AND my_row_num <= :max_index

As I recall, 'rownum' is the name of a special Oracle variable that 
gives the ordinal value for a row in the source table, prior to 
sorting; if not, rename that keyword as appropriate.
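For completeness, here is how the double-subquery approach might be driven
from DBI (the DSN and credentials are placeholders; DBD::Oracle accepts the
Oracle-style named placeholders :min_index and :max_index):

```perl
use strict;
use warnings;
use DBI;

# connection details are placeholders for your own environment
my $dbh = DBI->connect('dbi:Oracle:mydb', 'user', 'password',
                       { RaiseError => 1 });

my $sth = $dbh->prepare(q{
    SELECT foo, bar
    FROM (
        SELECT foo, bar, rownum AS my_row_num
        FROM (SELECT foo, bar FROM baz ORDER BY foo) baz
    ) baz
    WHERE my_row_num >= :min_index AND my_row_num <= :max_index
});

# fetch rows 11..20, i.e. the second page of ten
$sth->bind_param(':min_index', 11);
$sth->bind_param(':max_index', 20);
$sth->execute;

while (my ($foo, $bar) = $sth->fetchrow_array) {
    print "$foo / $bar\n";
}
$dbh->disconnect;
```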


Note that getting ordered results from a table is always slow when 
what you are sorting is a large set, even if you only return a few of 
them using the method I described.  It's always good practice to 
filter with WHERE down to, say, 1,000 rows, which are the ones that 
get paginated.


I suspect this won't be necessary with newer Oracles, since the 
SQL:2003 standard includes a window clause, part of which defines 
pagination I think.


-- Darren Duncan


RE: DBIx::Recordset and Oracle

2005-07-26 Thread Darren Duncan

At 11:05 AM -0700 7/26/05, Drozdowski, Catharine wrote:

Hi Folks,
First of all, I want to second the database independence comment. As an
Oracle DBA who has watched application vendors try to create
applications which run on all databases, I have seen anything from an
ISAM format kludged into a relational model, which ran slower than a dead
dog no matter what hardware the mgmt threw at it, to modern
applications, no names here, who literally waste the Oracle database
because they do not use any of its features and dump all the stuff which
would fly on a database server down to the client/application tier,
where it is also like a very bloated dead dog. My 2 cents.


What you describe isn't intrinsic to database independence in 
principle.  Rather, it describes certain implementations that chose 
to work to the least common denominator for native database feature 
use, and re-implemented the rest themselves.  By contrast, 'Rosetta' 
will exploit Oracle's and other powerful database's native 
capabilities to the fullest, so all the work is still done by them 
that should be.  I very much believe in not wasting resources.



While not meaning this next comment in a personal manner, in my very
humble opinion, we need to be aware of dumbing down the DBI and
providing new Perl users with little incentive to become experts at the
ins and outs of their database of choice... knowing the SQL words does
not a database expert make... On the other hand, in our modern
say-learn-know-in-a-day world, perhaps being an expert is an outdated
concept.


I agree.  A user should know about at least the main distinctions 
between different database products.  Regardless of any portability 
of their code, they still have to make a decision for which one to 
use, so they have to know which will serve their needs best to make 
that decision.


-- Darren Duncan

