Hi Greg. Well, I'm only about halfway, but I think what I'm doing could work out.

I have an iso_languages table, a languages table for the languages actually used, and a multi_language table for storing the values of my text fields. I choose my languages from iso_languages. Any table that needs a multi-language field gets one as a foreign key to the multi_language table's id, so referential integrity holds since this is a direct relationship. Thanks for the idea of using arrays, BTW; referential integrity could not work with my first model.
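
Sketched roughly, the layout is something like this (the exact column definitions are illustrative, not my final DDL, and "product" is just a hypothetical example of a referencing table):

CREATE TABLE iso_languages (
    code  CHAR(2) PRIMARY KEY,  -- ISO code, e.g. 'en'
    name  TEXT
);

CREATE TABLE languages (
    code          CHAR(2) PRIMARY KEY REFERENCES iso_languages (code),
    key_position  INTEGER  -- current array position for this language
);

-- same multi_language table as quoted further down
CREATE TABLE multi_language (
    id                  SERIAL PRIMARY KEY,
    lang_code_and_text  TEXT[][]
);

-- any table needing a multi-language text field references it directly by id:
CREATE TABLE product (
    id         SERIAL PRIMARY KEY,
    name_text  INTEGER NOT NULL REFERENCES multi_language (id)
);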

I am taking the array text and parsing the result in Python to get the key positions. This is possible with a query that returns the text from any multi_language field and splits it with the string-to-array function. Then I put the result into a Python dictionary, take its length, and add one to get the new key value that is assigned when a new language is added. Using this key, a new inner array is appended to the existing array in each row of the multi_language table (in the lang_code_and_text field), so the length of the main array in the multidimensional array grows by one inner array per language for each record in the multi_language table.
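
The same new-key computation could also be done directly in SQL with array_upper; a minimal sketch:

-- the next key position is the current first-dimension length plus one
SELECT array_upper(lang_code_and_text, 1) + 1 AS new_key
FROM multi_language
LIMIT 1;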

I can also seek out the English ('en') value so that I can use English as the default text for the new language, inserting a new inner array for that language into the lang_code_and_text array. For example, if Spanish ('es') is added, the new key is 3, so an insert is done for each record and I have something like this now:

1, {{'en','the brown cow'},{'fr','la vache brun'},{'es','the brown cow'}}
2, {{'en','the blue turkey'},{'fr','la dandon bleu'},{'es','the blue turkey'}}
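
In SQL, that per-record insert amounts to something like this (a sketch, assuming English sits at key position 1):

-- append a {code, text} pair for Spanish to every record,
-- copying the English text as the default
UPDATE multi_language
SET lang_code_and_text =
    array_cat(lang_code_and_text, ARRAY[ARRAY['es', lang_code_and_text[1][2]]]);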

In my forms, I am using a template to display entry fields for each language used. English will be the default for newly added languages, so there is something in these fields to start with, and it should update properly based on the correct key values. In my languages table, I am storing the current key position for each language used in my app. I have an i18n layer for Zope, and based on the language code I will pass the language id so you see the right language in both the interface and the data.
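
Fetching the text for the interface language is then just direct subscripting with the stored key position (a sketch; key_position is my assumed column name in the languages table):

-- fetch the French text for one multi_language record
SELECT m.lang_code_and_text[l.key_position][2]
FROM multi_language m, languages l
WHERE m.id = 1
  AND l.code = 'fr';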

For updating or deleting records, I will be writing a trigger to remove the inner array that represents a translation after the update. Then it has to update my languages table to provide the updated key values for my languages. I am working on my first functions and triggers in plpgsql; this is where I may need help from the list if I get stuck, but so far so good!
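
A first sketch of the kind of helper I have in mind (the function name and the hard-coded key below are placeholders):

CREATE OR REPLACE FUNCTION remove_language_at(arr TEXT[][], pos INTEGER)
RETURNS TEXT[][] AS $$
DECLARE
    result TEXT[][];
    i      INTEGER;
BEGIN
    -- rebuild the array, skipping the {code, text} pair at the given position
    FOR i IN 1 .. array_upper(arr, 1) LOOP
        IF i <> pos THEN
            IF result IS NULL THEN
                result := ARRAY[ARRAY[arr[i][1], arr[i][2]]];
            ELSE
                result := array_cat(result, ARRAY[ARRAY[arr[i][1], arr[i][2]]]);
            END IF;
        END IF;
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- called from the trigger (or by hand) when a language is dropped, e.g.:
UPDATE multi_language
SET lang_code_and_text = remove_language_at(lang_code_and_text, 3);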

Well, so far so good, but not finished yet. Does anyone have any comments on scalability? I don't really see a problem, since there is no real risk of my needing more than 10-15 languages or so, max, out of maybe 300 languages in the world. I think 15 entries in an array is very small, so I can't see any reason for this not to work well.


I think my table will be pretty simple:
CREATE TABLE multi_language (
    id                  SERIAL PRIMARY KEY,
    lang_code_and_text  TEXT[][]
);

So records would look like:

1, {{'en','the brown cow'},{'fr','la vache brun'}}
2, {{'en','the blue turkey'},{'fr','la dandon bleu'}}

That's a lot more complicated than my model.

Postgres doesn't have any functions for handling arrays like these as associative arrays, the way you might want. And as you've discovered, it's not so easy to ship the whole array to your client, where it might be easier to work with.


Yes. This is a bit of a complication; if those functions were there, arrays would be really nice to work with.


I just have things like (hypothetically):

CREATE TABLE states (
  abbrev        text,
  state_name    text[],
  state_capital text[]
);

And then in my application code's data layer I mark all "internationalized columns", and the object that builds the actual select automatically appends a "[$lang_id]" subscript to every column in that list.
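
So the generated select ends up looking something like this (a sketch, for $lang_id = 1):

SELECT abbrev, state_name[1], state_capital[1]
FROM states;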

The list of languages supported and the mapping of languages to array positions are fixed. I can grow the list later, but I can't reorganize it. This is fine for me, since pretty much everything has exactly two languages.

That is pretty cool. The only advantage of what I am doing is that you can add languages at any time, and there should be no huge load on Postgres as far as I can tell: the multi_language table has only two fields and one record for each multi-language field referenced from my other tables, and calls to it are direct by id. I think this should work, but it is a puzzler for sure!

Regards,
David
