Re: [sqlite] request to become co-maintainer of DBD::SQLite

2009-01-16 Thread yair lenga
Hi,

I would like to highlight the fact that in large corporations, bumping
DBI to a new version is a major issue, as the module serves as a
foundation for hundreds of applications, all of which must be retested on
every change. As a result, large companies bump the DBI version only
every few years.

Also, large companies usually prefer to use vendor-provided software.
Red Hat 4 is bundled with DBI 1.40, and Red Hat 5 is bundled with 1.52.
While this may not be the latest and greatest, it is the reality for
many development projects.

My 2 cents - if possible, DBD drivers should stay compatible with older
DBI versions for as long as is practical. This will make newer SQLite
versions a viable option for most projects.

Yair




 -Original Message-
 From: Darren Duncan [mailto:dar...@darrenduncan.net]
 Sent: Wednesday, January 14, 2009 9:19 PM
 To: General Discussion of SQLite Database; DBI Dev
 Subject: Re: [sqlite] request to become co-maintainer of DBD::SQLite

 These are replies to posts on the sqlite-users list.  However, if there
 is going to be ongoing discussion I prefer it happen on the dbi-dev
 list.  Not that sqlite-users isn't very on topic itself, dbi-dev just
 seems *more* on topic, I think.

 Clark Christensen wrote:
   One of my first code changes will be to require DBI 1.607+
 
  The current DBD-SQLite works fine under older versions of DBI.  So
  unless there's a compelling reason to do it, I would prefer you not make
  what seems like an arbitrary requirement.

 I have 2 answers to that:

 1.  Sure, I can avoid changing the enforced dependency requirements for
 now, leaving them as Matt left them.  However, I will officially
 deprecate support for the older versions and won't test on them.  If
 something works with the newer dependencies but not the older ones, it
 will be up to those using or supporting the older dependencies to supply
 fixes.

 2.  On one hand I could say, why not update your DBI when you're
 updating DBD::SQLite, since the DBI itself has added lots of fixes one
 should have.  On the other hand, I can understand the reality that you
 may have other legacy modules, like drivers for other old databases, that
 might break with a DBI update.  I say might, since they also might not
 break.  Still, I'll just take the deprecation route for now.

  Otherwise, it sounds like a good start.  Matt must be really busy with
  other work.
 
  I'll be happy to contribute where I can, but no C-fu here, either :-(

 Thank you.

 Ribeiro, Glauber wrote:
   My only suggestion at the moment: please use the amalgamation instead
   of individual files. This makes it much easier to upgrade when SQLite
   releases a new version.

 Okay.

 Jim Dodgen wrote:
   I'm for the amalgamation too.  The rest of your ideas are great also.
   Excellent idea to use Audrey Tang's naming convention.
  
   I have been stuck back at 3.4 for various issues.
  
   I do Perl and C and can offer some help.

 Okay and thank you.

 -- Darren Duncan





Re: Async I/O with DBI?

2009-01-16 Thread Marc Lehmann
On Thu, Jan 15, 2009 at 10:01:32PM +, Tim Bunce <tim.bu...@pobox.com> wrote:
 Marc, what would need to be added to the DBI (or DBD::Gofer) to support
 asynchronous use via the Coro module?

Method 1: add hooks to DBD::Gofer:

Right now, the only sensible way to go is to use DBD::Gofer and add some
hooks. Basically, each time you block, e.g. in select or read (or write,
if pipelining is to be an option), you need to use some event mechanism
to wait.

There are many ways to achieve that. Coro has an optional (slowish) wrapper
around select, so instead of doing:

   print $fh $data;

you could do:

   select <$fh writable>;   # slightly messy pseudocode; see the sketch below
   print $fh $data;
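
To make the waiting step concrete, here is a minimal sketch using plain
4-argument select (Coro's optional Coro::Select wrapper, when loaded, makes
this call yield to other coroutines instead of blocking the whole process;
the wait_writable helper name is just for illustration):

   use strict;
   use warnings;
   # use Coro::Select;   # optional: coroutine-aware replacement for select

   sub wait_writable {
      my ($fh) = @_;
      my $wvec = '';
      vec ($wvec, fileno $fh, 1) = 1;        # watch this fd for writability
      select (undef, $wvec, undef, undef);   # returns once $fh is writable
   }

   # usage:
   #    wait_writable ($fh);
   #    print $fh $data;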

Method 2: use unblock or subclass:

Another option is to let the user optionally modify the filehandle, e.g.
in addition to:

   nonblock($rfh);

one could do:

   $rfh = $user_callback_to_condition_fh->($rfh);

and the user callback could be:

   sub { Coro::Handle::unblock $_[0] }

unblock returns something that acts like a Perl file handle at the Perl
level, but allows other coroutines to be scheduled while blocking on it,
as the short example below illustrates.
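
A quick sketch of what that buys you ($raw_fh here is just any ordinary
handle standing in for whatever the transport actually uses; it is not a
real Gofer variable):

   use Coro;
   use Coro::Handle;                  # exports unblock

   open my $raw_fh, '<', '/etc/hostname' or die "open: $!";   # any plain handle
   my $fh   = unblock $raw_fh;        # wrap it in a Coro::Handle
   my $line = <$fh>;                  # only this coroutine waits; others keep running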

Yet another option would be to write a DBD::Gofer::Transport::corostream
which would hardcode the above. It probably should be part of the Coro
module itself (and has been on my todo list). Now that I have looked at it,
it might be as trivial as subclassing ::stream and overriding
start_pipe_command, as sketched below.
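
A heavily hedged sketch of such a subclass (the return value of
start_pipe_command and which handles would need wrapping are assumptions
here, not the documented DBD::Gofer API):

   package DBD::Gofer::Transport::corostream;

   use strict;
   use warnings;
   use Coro::Handle ();                          # for Coro::Handle::unblock
   use parent 'DBD::Gofer::Transport::stream';

   sub start_pipe_command {
      my $self = shift;
      my $info = $self->SUPER::start_pipe_command (@_);
      # Assumption: $info carries the pipe handles; wrapping them with
      # Coro::Handle::unblock would make blocking reads/writes yield to
      # other coroutines instead of stalling the whole interpreter, e.g.:
      #    $info->{$_} = Coro::Handle::unblock $info->{$_} for qw(rfh wfh);
      return $info;
   }

   1;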

The only added complexity is that DBD::Gofer might suddenly receive two
concurrent requests on the same backend - this would not work without some
synchronisation, but a simple workaround would be to simply disallow it,
i.e. you must not make concurrent calls on the same gofer object without
doing your own locking, which, in my experience, is enough. A minimal
locking sketch follows.
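
For example, a per-handle Coro::Semaphore is enough to serialize the calls
(a sketch; the do_serialized helper and the $dbh statement are just
illustrations, not part of DBD::Gofer):

   use Coro;
   use Coro::Semaphore;

   my $lock = Coro::Semaphore->new (1);   # at most one in-flight request

   sub do_serialized {
      my ($code) = @_;
      my $guard = $lock->guard;           # released when $guard goes out of scope
      return $code->();
   }

   # usage from any coroutine sharing the same gofer-backed $dbh:
   #    my $rows = do_serialized (sub { $dbh->selectall_arrayref ("select * from test") });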

Method 3: AnyEvent::DBI

Now, one can already use DBI asynchronously: AnyEvent::DBI is an
event-based DBI interface (in its infancy), which incidentally also
supports pipelining. Its use under Coro is trivial (or maybe not - the
following is untested :)

   use Coro;
   use AnyEvent::DBI;

   my $dbh = new AnyEvent::DBI "DBI:SQLite:dbname=test.db", "", "";

   $dbh->exec ("select * from test", 10, rouse_cb);
   my ($rows, $rv) = rouse_wait;
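
For comparison, the same call in plain callback style, without Coro (also a
sketch, assuming a test.db that contains a table named test; error handling
omitted):

   use AnyEvent;
   use AnyEvent::DBI;

   my $cv  = AnyEvent->condvar;
   my $dbh = new AnyEvent::DBI "DBI:SQLite:dbname=test.db", "", "";

   $dbh->exec ("select * from test", sub {
      my ($dbh, $rows, $rv) = @_;
      $cv->send ($rows);       # hand the rows back to whoever is waiting
   });

   my $rows = $cv->recv;       # blocks the whole program until the query returns

Under Coro, rouse_cb/rouse_wait play the role of the condvar, so only the
calling coroutine sleeps while the rest of the program keeps running.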

Method 4: patch each and every DBI driver

All the above solutions need a proxy process. It would be vastly faster if
we could do it in-process.

In fact, I have been researching thread support for dynamic languages for
almost a decade now, and my verdict is that it cannot be done and makes no
sense, unless you do it to make existing blocking interfaces (e.g. sysread
or DBI) non-blocking in that way.

For this, you don't need concurrent access to perl variables.

So to get this to work, e.g. in the case of mysql, one needs to add hooks
to the driver. Assume DBD::mysql has some mysql_execute function that gets 
called like this:

   convert_perl_values_to_mysql_values ();
   mysql_execute ();
   convert_mysql_results_to_perl_values ();

This would need to be changed to:

   convert_perl_values_to_mysql_values ();
   release_perl_interpreter_to_do_other_things ();
   mysql_execute ();
   lock_perl_interpreter_again ();
   convert_mysql_results_to_perl_values ();

This is basically how Python, Ruby etc. work; they have this kind of thread
support, which Perl does not.

To do this, Coro would need to be changed: currently it uses an n:m model,
i.e. you can have any number of Perl (cooperative) threads running on any
number of C cooperative threads.

I have vague plans to change this into a three-layer model, where you have
any number of Perl threads running on a number of C cooperative threads,
which in turn run on a number of kernel threads.

In that case, one could temporarily give control over the current kernel
thread to e.g. libmysql, while the perl interpreter continues to run on
another kernel thread.

This actually does work already *iff* Coro is configured to use kernel
threads, which it has to use on the dreaded broken BSD platforms, but
which incurs roughly a 12-times slowdown; so on Linux/Solaris and other
working platforms, Coro uses more efficient userspace threads.

This last model is the one I would prefer, as it combines the strengths
of threads (fast inter-thread communication for the SQL data) while
avoiding their overheads (threads are slow and not well suited for parallel
processing).

=

This might be a bit longer than you expected, but the idea is: DBI already
works asynchronously after a fashion (AnyEvent::DBI), and DBD::Gofer could be
changed in various ways to make this easier. I even have long-term plans to
introduce real kernel threads to perl in the same way other scripting
languages support them, but that's strictly for the future.

-- 
Marc Lehmann
Deliantra, the free code+content MORPG: http://www.deliantra.net
p...@goof.com

Re: [sqlite] request to become co-maintainer of DBD::SQLite

2009-01-16 Thread Hildo Biersma
I am not sure I agree.  Companies that don't upgrade DBI releases are
unlikely to upgrade DBD drivers more frequently, and they're always free
to use older DBD releases.  We don't want to hold developers hostage to
the tendency of a few companies to be slow to upgrade.


At my workplace, a large corporation, we make multiple DBI and DBD::xxx
releases available, and applications can choose their own versions.
It'd be unfortunate if useful new DBI features were not used by
current DBD::xxx releases.


That's not to say that incompatibility should be introduced just for
fun.  But if a DBD driver wants to use a new DBI feature, and that
breaks compatibility with older DBI releases, the DBD driver author
should go ahead.  The Makefile.PL file for the DBD module will specify
the minimum DBI release required.
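
For instance, with ExtUtils::MakeMaker the requirement is declared via
PREREQ_PM (a minimal sketch; a real DBD Makefile.PL typically does more,
and 1.607 is simply the version discussed in this thread):

   use ExtUtils::MakeMaker;

   WriteMakefile (
      NAME         => 'DBD::SQLite',
      VERSION_FROM => 'lib/DBD/SQLite.pm',
      PREREQ_PM    => {
         DBI => '1.607',       # minimum DBI release required by this driver
      },
   );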

