Hi everyone,
We have a problem with the UnicodeText type when it stores data in an
Oracle NCLOB column.
When we use a unicode string in Python to write data to the table, we
are limited to 4000 characters, although it is possible to store many
more characters (6000 in the following example) using a
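For reference, a minimal sketch of the kind of mapping being described: a table with a UnicodeText column compiled against SQLAlchemy's Oracle dialect. The table and column names here are hypothetical, and whether the column renders as NCLOB or CLOB depends on the SQLAlchemy version and dialect settings; this only illustrates the setup, not a fix.

```python
# Hypothetical sketch: compile a UnicodeText column with the Oracle
# dialect to see which LOB type the DDL uses. No database connection
# is needed; we only render the CREATE TABLE statement.
from sqlalchemy import Column, Integer, MetaData, Table, UnicodeText
from sqlalchemy.dialects.oracle import base as oracle_base
from sqlalchemy.schema import CreateTable

metadata = MetaData()
docs = Table(
    "docs", metadata,                      # hypothetical table name
    Column("id", Integer, primary_key=True),
    Column("body", UnicodeText),           # stored as NCLOB or CLOB on Oracle
)

ddl = str(CreateTable(docs).compile(dialect=oracle_base.OracleDialect()))
print(ddl)  # the body column renders as a LOB type (NCLOB or CLOB)
```

The 4000-character ceiling reported above typically shows up at bind time, not in the DDL, so the rendered statement alone will not reproduce it.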
Thank you, Michael, for your answers, but this does not work even with
UTF-8 encoding. I changed my program to use utf-8:
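One possible reason an explicit utf-8 encoding does not help (an assumption on my part, not something confirmed in the thread): Oracle's limits on non-LOB string binds are measured in bytes, and accented characters occupy more than one byte in UTF-8, so encoding can inflate the value rather than shrink it. A small stdlib-only illustration of character count versus UTF-8 byte count:

```python
# -*- coding: utf-8 -*-
# A string can be under a *character* limit while exceeding a *byte*
# limit once encoded: "é" is one character but two bytes in UTF-8.
text = u"\u00e9" * 6000        # 6000 accented characters
encoded = text.encode("utf-8")

print(len(text))     # 6000 characters
print(len(encoded))  # 12000 bytes -- double the character count
```

So a 6000-character accented value is already 12000 bytes in UTF-8, well past a 4000-byte bind limit.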
# -*- coding: utf-8 -*-
from elixir import *
from sqlalchemy import create_engine
class TestEnum(Entity):
Hello,
Here is my problem:
I would like to define a domain of values for a table field. Some
values contain accented (non-ASCII) characters.
When Elixir generates the DDL, a conversion is applied to the
provided values and the accented characters no longer come through.
Here is my test code: