Hello,

In my application I connect to various RDBMSes, primarily
SQL Server and MySQL, using ODBC 3.0 drivers. With SQL
Server, if I call SQLColAttribute with
SQL_DESC_OCTET_LENGTH, it returns the maximum number of
bytes the column can hold, regardless of whether the
column in the result set actually contains that much data.
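
For reference, here is roughly the call I am making (a
sketch only: handle setup, the query, and error handling
are omitted, and the column number is just an example):

#include <sql.h>
#include <sqlext.h>
#include <stdio.h>

/* Sketch only: hstmt is assumed to already hold an executed SELECT. */
static void show_octet_length(SQLHSTMT hstmt)
{
    SQLLEN octet_len = 0;

    /* Column numbers are 1-based; for a numeric field like
       SQL_DESC_OCTET_LENGTH the character-attribute arguments are unused. */
    SQLRETURN rc = SQLColAttribute(hstmt,
                                   1,                    /* first column */
                                   SQL_DESC_OCTET_LENGTH,
                                   NULL, 0, NULL,
                                   &octet_len);
    if (SQL_SUCCEEDED(rc))
        printf("SQL_DESC_OCTET_LENGTH = %ld\n", (long)octet_len);
}

With the SQL Server driver, octet_len comes back as the
declared maximum width; with the MySQL driver it reflects
the longest value in this particular result set.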

The MySQL ODBC driver, on the other hand, returns the
length of the largest value the column actually has in
that particular result set.

Why the discrepancy? Does this information differ from
driver to driver?

Is there any way to know the size of a particular row in
raw bytes before calling SQLFetch()? I would like to
allocate the memory before I do the SQLFetch(). Searching
Google Groups, it seems many people have the same problem,
but not much of a solution has been offered.
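
The closest workaround I have come across is to give up on
pre-allocating and instead pull long columns in pieces with
SQLGetData() after the SQLFetch(). A rough sketch of that
pattern, assuming an unbound character column (the column
number, chunk size, and SQL_C_CHAR target type are just
assumptions):

#include <sql.h>
#include <sqlext.h>
#include <stdlib.h>
#include <string.h>

/* Fetch one character column of the current row in fixed-size chunks,
   growing the buffer as the pieces arrive.  Caller frees the result. */
static char *get_column_in_chunks(SQLHSTMT hstmt, SQLUSMALLINT col)
{
    char    chunk[4096];
    char   *result = NULL;
    size_t  total  = 0;
    SQLLEN  ind;

    if (!SQL_SUCCEEDED(SQLFetch(hstmt)))
        return NULL;

    /* SQLGetData keeps returning the next piece of a long value
       until the driver answers SQL_NO_DATA. */
    for (;;) {
        SQLRETURN rc = SQLGetData(hstmt, col, SQL_C_CHAR,
                                  chunk, sizeof chunk, &ind);
        if (rc == SQL_NO_DATA)
            break;
        if (!SQL_SUCCEEDED(rc) || ind == SQL_NULL_DATA) {
            free(result);
            return NULL;
        }

        size_t got = strlen(chunk);   /* driver null-terminates char data */
        char  *tmp = realloc(result, total + got + 1);
        if (tmp == NULL) {
            free(result);
            return NULL;
        }
        result = tmp;
        memcpy(result + total, chunk, got + 1);
        total += got;
    }
    return result;
}

That sidesteps the allocation problem, but it costs extra
driver calls per column, so a reliable way to get the real
row size up front would still be much nicer.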

Also, what is the best way to find out how many bytes the
RDBMS uses on disk to store that data?


        
                
