Hi Team,

We are facing a performance issue with the Ignite ODBC API; please find the details
as follows.

Our table contains, for example, 10 thousand rows with 7 columns (string,
double, and int types).

The while loop runs once per row, so in our example it executes 10 thousand times.
Please let us know the most efficient way of fetching the values from the
database using the ODBC API.

We have used the same code as in the Ignite ODBC example.

Please find the code below and share your suggestions.
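
For completeness, OdbcStringBuffer and ODBC_BUFFER_SIZE used in the snippet are
roughly as in the Ignite ODBC example (the exact buffer size is illustrative):

    #define ODBC_BUFFER_SIZE 1024

    struct OdbcStringBuffer
    {
        SQLCHAR buffer[ODBC_BUFFER_SIZE];
        SQLLEN reallen;
    };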

SQLHSTMT stmt;

// Allocate a statement handle on the existing connection.
SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

// Execute the query (strQuery holds the SQL text).
std::vector<SQLCHAR> buf(strQuery.begin(), strQuery.end());
ret = SQLExecDirect(stmt, &buf[0], static_cast<SQLINTEGER>(buf.size()));

if (SQL_SUCCEEDED(ret))
{
    // Bind every result column to a string buffer.
    SQLSMALLINT columnsCnt = 0;
    SQLNumResultCols(stmt, &columnsCnt);

    std::vector<OdbcStringBuffer> columns(columnsCnt);

    for (SQLSMALLINT i = 0; i < columnsCnt; ++i)
        SQLBindCol(stmt, i + 1, SQL_CHAR, columns[i].buffer,
            ODBC_BUFFER_SIZE, &columns[i].reallen);

    // Fetch rows one at a time until no more data is available.
    while (true)
    {
        SQLRETURN ret = SQLFetch(stmt);

        if (!SQL_SUCCEEDED(ret))
            break;
    }
}
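
For reference, below is a rough sketch of the batched-fetch variant we were
considering instead of one SQLFetch() call per row. It assumes the driver
honors SQL_ATTR_ROW_ARRAY_SIZE and SQL_ATTR_ROWS_FETCHED_PTR, and the batch
size is only illustrative; please let us know if this is the right direction.

    // Reuses stmt and columnsCnt from the snippet above; replaces its
    // bind-and-fetch part.
    const SQLULEN BATCH_SIZE = 1000;   // illustrative batch size
    SQLULEN rowsFetched = 0;

    // One flat buffer per column (BATCH_SIZE elements of ODBC_BUFFER_SIZE
    // bytes each) plus a length/indicator array per column. Column-wise
    // binding is the ODBC default.
    std::vector<std::vector<SQLCHAR>> colBufs(columnsCnt,
        std::vector<SQLCHAR>(BATCH_SIZE * ODBC_BUFFER_SIZE));
    std::vector<std::vector<SQLLEN>> colLens(columnsCnt,
        std::vector<SQLLEN>(BATCH_SIZE));

    // Ask the driver to return up to BATCH_SIZE rows per SQLFetch() call.
    SQLSetStmtAttr(stmt, SQL_ATTR_ROW_ARRAY_SIZE,
        reinterpret_cast<SQLPOINTER>(BATCH_SIZE), 0);
    SQLSetStmtAttr(stmt, SQL_ATTR_ROWS_FETCHED_PTR, &rowsFetched, 0);

    for (SQLSMALLINT i = 0; i < columnsCnt; ++i)
        SQLBindCol(stmt, i + 1, SQL_C_CHAR, colBufs[i].data(),
            ODBC_BUFFER_SIZE, colLens[i].data());

    while (SQL_SUCCEEDED(SQLFetch(stmt)))
    {
        // rowsFetched rows are now available: row r of column c starts at
        // colBufs[c].data() + r * ODBC_BUFFER_SIZE, with length colLens[c][r].
    }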


Thanks,
Agneeswaran


