> 
> On Fri, Jul 5, 2013 at 2:35 PM, Pavel Stehule
<pavel.steh...@gmail.com> wrote:
> > Yes, as far as I know almost everyone uses UTF-8 without problems. I
> > haven't seen a request for multi-encoding support in a long time.
> 
> Well, not *everything* can be represented as UTF-8; I think this is
> particularly an issue with Asian languages.
> 
> If we chose to do it, I think that per-column encoding support would
> end up looking a lot like per-column collation support: it would be
> yet another per-column property along with typoid, typmod, and
> typcollation.  I'm not entirely sure it's worth it, although FWIW I
> do believe Oracle has something like this.

Yes, the idea is that users will be able to declare columns of type
NCHAR or NVARCHAR, which will use a pre-determined encoding. If we say
that NCHAR means UTF-8, then an NCHAR column will be UTF-8-encoded
irrespective of the database encoding. It will be up to us to decide
which Unicode encodings we support for NCHAR/NVARCHAR columns. This is
based on my interpretation of the SQL standard. As you allude to above,
Oracle has similar behaviour (it supports UTF-16 as well).
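
To illustrate (a purely hypothetical sketch -- this nvarchar behaviour
does not exist in PostgreSQL today, and it assumes NCHAR is fixed to
UTF-8 as proposed):

    -- Assume a database created with a non-UTF-8 server encoding:
    --   CREATE DATABASE appdb ENCODING 'LATIN1';

    CREATE TABLE customers (
        id     integer PRIMARY KEY,
        name   varchar(100),  -- stored in the database encoding (LATIN1)
        name_n nvarchar(100)  -- would always be stored as UTF-8,
                              -- irrespective of the database encoding
    );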

Supporting UTF-16 would be difficult without linking against an
external library such as ICU.


> At any rate, it seems like quite a lot of work.

Thanks for putting my mind at ease ;-)

Rgds,
Arul Shaji


> 
> Another idea would be to do something like what we do for range types
> - i.e. allow a user to declare a type that is a differently-encoded
> version of some base type.  But even that seems pretty hard.
> 
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
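
FWIW, if we went down that route, the declaration might look something
like what CREATE TYPE ... AS RANGE does today. A purely hypothetical
sketch -- none of this syntax exists in PostgreSQL:

    -- Hypothetical: declare a differently-encoded version of a base type.
    CREATE TYPE utf8_text AS ENCODED (
        base     = text,
        encoding = 'UTF8'
    );

    CREATE TABLE docs (
        body utf8_text  -- text semantics, but stored as UTF-8
    );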



