Hi Merijn,

I thought a bit about your data dictionary goals together with my
goal of being able to join between CSV tables and DBD tables (etc.).
I kept in mind that backward compatibility is a major requirement.

We will have a problem migrating existing tables into the new
data dictionary, but maybe I have found a way out ...

1) Our Meta-DBDs will support the methods 'init_valid_attrs ()' and
   'init_default_attrs ()' (most of them already do).
2) Currently, DBD::File provides the storage and the derived DBDs
   provide some kind of parser.
   If we could separate this a bit more cleanly (AnyData has a similar
   internal design and could use some abstraction help from DBD::File),
   we should be able to provide different storage backends.
3) DBD::CSV and DBD::DBM are very simple now - most of the code is
   abstracted into DBI::DBD::SqlEngine and DBD::File. The remaining
   code is mostly "parser" related (from AnyData's point of view).

We should be able to rewrite DBD::File, DBD::CSV and DBD::DBM
so that their parser code and attribute-handling code lives in
role 'classes'.

Using another Meta-DBD based on DBI::DBD::SqlEngine, those roles
could be aggregated and the appropriate table could be initialized.
Because we would be starting fresh there, we could build the data
dictionary from the beginning - no need to migrate anything.
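To make the idea a bit more concrete, here is a rough sketch of the
split I have in mind. All package and method names below are invented
for illustration (this is not the real DBD::File/DBD::CSV internals),
and I am using Moo::Role just as one possible role implementation:

```perl
# Illustrative sketch only -- invented names, not existing DBD code.
# A "parser" role that knows how to turn a raw line into a row of
# fields, while the storage backend stays in the composing class.
package My::Role::Parser::CSV;
use Moo::Role;

# The storage side of the composing class must supply raw lines.
requires 'fetch_raw_line';

sub fetch_row {
    my ($self) = @_;
    my $line = $self->fetch_raw_line or return;
    # naive split for the sketch -- a real parser would use Text::CSV_XS
    return [ split /,/, $line ];
}

# A hypothetical Meta-DBD table class aggregating the parser role
# on top of its own (here: in-memory) storage backend.
package My::MetaDBD::Table;
use Moo;
with 'My::Role::Parser::CSV';

has lines => (is => 'ro', default => sub { [ "1,foo", "2,bar" ] });

sub fetch_raw_line {
    my ($self) = @_;
    return shift @{ $self->lines };
}

1;
```

Swapping in a DBM- or AnyData-flavoured parser role, or a different
storage backend, would then just be a matter of composing different
roles into the same table class.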

How does that sound?

Best,
Jens
