Vadivel Subramaniam wrote:

> I assume it is not a problem with ODBC. I am able to store 2.5 MB of
> data into Oracle using the same ODBC APIs.

Well, it certainly isn't anything to do with PG itself:

$ cat bigtest.pl
#!/usr/bin/perl
print "CREATE TABLE foo (a int, b varchar);\n";
print "INSERT INTO foo VALUES (1, '".('a' x 5000000)."');\n";

$ perl bigtest.pl | psql -Urichardh richardh
CREATE TABLE
INSERT 3634376 1
$ psql -Urichardh richardh

richardh=> SELECT a,length(b) FROM foo;
 a | length
---+---------
 1 | 5000000
(1 row)


> The only difference is that Oracle supports the CLOB datatype, which is
> mapped to LONG VARCHAR in ODBC.
>
> The input parameter is bound during the insert using the ODBC API below:
>       retStatus = SQLBindParameter(dStmtHandle, 1, SQL_PARAM_INPUT,
>             SQL_C_CHAR, SQL_LONGVARCHAR, dLen, 1, pData, 0, &dLen);
>
> The same API is used for Postgres as well. Here varchar/text in the
> Postgres schema is mapped to LONGVARCHAR in ODBC.
> We cannot use bytea etc., since the data is in character format.
>
> I tried using "text" in the Postgres schema as well, but the result is
> the same (only around 64K is stored).
> Could you please provide an example of how large character data (more
> than 1 MB) can be inserted into text/varchar (in Postgres) using ODBC?
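
If the truncation happens when the whole value is bound up front, the standard ODBC route for long data is data-at-execution: bind the parameter with SQL_LEN_DATA_AT_EXEC and stream the value with SQLPutData. Here is a minimal sketch, assuming an already-open statement handle and the `foo` table from the transcript above; I haven't tested it against your driver, so treat it as an outline rather than working code:

```c
/* Sketch: insert a long character value via ODBC data-at-execution.
 * Assumes hstmt is an allocated, connected statement handle. Untested. */
#include <sql.h>
#include <sqlext.h>

SQLRETURN insert_long_text(SQLHSTMT hstmt, const char *data, SQLLEN dataLen)
{
    /* Tell the driver the data will be supplied at execute time. */
    SQLLEN lenOrInd = SQL_LEN_DATA_AT_EXEC(dataLen);
    SQLRETURN rc;

    /* The "buffer" argument here is just a token that SQLParamData will
     * hand back to us; we pass the data pointer itself for convenience. */
    rc = SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR,
                          SQL_LONGVARCHAR, (SQLULEN) dataLen, 0,
                          (SQLPOINTER) data, 0, &lenOrInd);
    if (!SQL_SUCCEEDED(rc))
        return rc;

    rc = SQLExecDirect(hstmt,
            (SQLCHAR *) "INSERT INTO foo VALUES (1, ?)", SQL_NTS);

    if (rc == SQL_NEED_DATA) {
        SQLPOINTER token;
        rc = SQLParamData(hstmt, &token);
        while (rc == SQL_NEED_DATA) {
            /* Feed the value in chunks, so no single transfer has to
             * hold the entire 1 MB+ value. */
            const char *p = (const char *) token;
            SQLLEN remaining = dataLen;
            while (remaining > 0) {
                SQLLEN chunk = remaining > 65536 ? 65536 : remaining;
                SQLPutData(hstmt, (SQLPOINTER) p, chunk);
                p += chunk;
                remaining -= chunk;
            }
            rc = SQLParamData(hstmt, &token);  /* next param, or done */
        }
    }
    return rc;
}
```

Whether this helps depends on the driver: a driver that caps longvarchar at 64K internally may truncate either way, but data-at-execution at least removes the single-bound-buffer limit from your side of the call.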

Hmm - looking back at your original email, I see you're using the UnixODBC driver. Might be worth trying the PG version:
 http://gborg.postgresql.org/project/psqlodbc/genpage.php?downloads

First step, though, should be to check whether there are any configuration settings that apply to longvarchar, because it sounds like the ODBC driver should be mapping to that.
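
For what it's worth, the psqlodbc driver reads per-DSN options from odbc.ini, and I believe it has settings along these lines. The option names are from memory and the DSN name, driver path, and size value below are made up, so check your driver version's documentation before relying on any of it:

```ini
; Hypothetical odbc.ini DSN entry for the psqlodbc driver.
[pgtest]
Driver             = /usr/local/lib/psqlodbcw.so
Servername         = localhost
Database           = richardh
; Report text columns as SQL_LONGVARCHAR rather than SQL_VARCHAR
TextAsLongVarchar  = 1
; Raise the size the driver allows for longvarchar data (bytes)
MaxLongVarcharSize = 8388608
```

If the UnixODBC driver you're using has an equivalent of MaxLongVarcharSize defaulting to something near 64K, that would explain exactly the truncation you're seeing.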

I can't help you with finding those settings, since I have only ever used the "official" PG driver on Windows.

--
  Richard Huxton
  Archonet Ltd

---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
     joining column's datatypes do not match
