Hi, 
The only thing that could slow you down is that the genAttrib array will take 
more and more memory as the result set grows. I would recommend writing a 
function that works on each MySQL row directly instead of building this huge 
array.

Something like:

while ((row = mysql_fetch_row(result))) {
	usedata(row);
}

Of course, it depends on what you need the MySQL data for - but if you can 
make it use one row at a time it should run a lot faster.
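For a million-row result set you could also switch from mysql_store_result() 
to mysql_use_result(), which streams rows from the server one at a time 
instead of buffering the entire set in client memory first. A minimal sketch 
of that approach - the connection credentials, query, and the usedata() 
handler are placeholders, not working values:

```c
#include <stdio.h>
#include <mysql.h>              /* MySQL C API client header */

/* Placeholder per-row handler -- substitute your own processing. */
static void usedata(MYSQL_ROW row)
{
    printf("%s\n", row[0] ? row[0] : "NULL");
}

int main(void)
{
    MYSQL dbase;
    MYSQL_RES *result;
    MYSQL_ROW row;

    if (mysql_init(&dbase) == NULL)
        return 1;

    /* Hypothetical credentials -- replace with your own. */
    if (mysql_real_connect(&dbase, "localhost", "login", "pass",
                           "test", 0, NULL, 0) == NULL) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(&dbase));
        return 1;
    }

    if (mysql_query(&dbase, "SELECT guid, attrib, value FROM attribs")) {
        fprintf(stderr, "query failed: %s\n", mysql_error(&dbase));
        mysql_close(&dbase);
        return 1;
    }

    /* mysql_use_result() does not copy the whole result set to the
       client; each mysql_fetch_row() pulls the next row from the
       server, so client memory use stays roughly constant no matter
       how many rows the query returns. */
    result = mysql_use_result(&dbase);
    while ((row = mysql_fetch_row(result)) != NULL)
        usedata(row);

    mysql_free_result(result);
    mysql_close(&dbase);
    return 0;
}
```

The trade-off is that with mysql_use_result() you must fetch every row 
before issuing another query on the same connection, and the server side 
of the query stays open (and can hold locks) for as long as the fetch 
loop runs.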

-- 
Dobromir Velev
[EMAIL PROTECTED]
http://www.websitepulse.com/


On Tuesday 29 June 2004 08:50, Matt Eaton wrote:
> Hi all,
>
> I was hoping this was the right place for a question about the C API.
> I've been grabbing result sets from tables in the C API for a few years
> now, but I'm starting to work with result sets that are big enough to
> bog me down.  Of course, the result sets aren't insanely big, so I was
> wondering why it was taking so long for me to suck them in to C,
> especially when I can run the same query from the command line using the
> binaries and they can cache it to a file on the hard disk pretty much
> instantly.  So, basically, I was just hoping that I've been doing
> something wrong, or at least that there was something I could do better,
> to make my database communication as fast as the mysql command line
> tools.  I've checked out their source and nothing obvious jumps out at
> me.  Here's a non-functional sample of my code:
>
> int main(int argc, char *argv[] ) {
>       int uid;
>       int sid;
>       char sqlBuff[4000];
>       int err = 0;
>       int i, numRows, tempq, tempP;
>       // Setup the database communications space:
>       MYSQL dbase;
>       MYSQL_RES *result;
>       MYSQL_ROW row;
>
>       float **genAttrib;
>
>       //... snip ...
>
>
>       // Connect to the database:
>       if (mysql_init(&dbase) == NULL) err = 1;
>       else {
>
>
>               if (mysql_real_connect(&dbase, "localhost", "login", "pass",
>                               "test", 0, NULL, CLIENT_FOUND_ROWS) == NULL) {
>                       err = 1;
>                       fprintf(stderr, "Failed to connect to database: Error: %s\n",
>                               mysql_error(&dbase));
>               }
>       }
>
>       // If the connection couldn't be established:
>       if(err) {
>               printf("db connection failed!\n");
>               exit(1);
>       }
>
>
>       //... snip ...
>
>       // This query could have as many as a million rows returned, but
>       // the query itself runs quite fast.  It seems to just be sucking it
>       // into C that can take up to four seconds on our dual Xeon server.
>       sprintf(sqlBuff, "SELECT A.* FROM `attribs` AS A, login AS L "
>               "WHERE A.guid=L.guid AND L.isActive=1 AND L.sid=%d AND "
>               "A.guid!=%d", sid, uid);
>       if (mysql_real_query(&dbase,sqlBuff,strlen(sqlBuff))) {
>               printf("Pool Attributes Select Failed... dumbass\n");
>               fprintf(stderr, "Error: %s\n",
>                               mysql_error(&dbase));
>               exit(1);
>       }
>
>       result = mysql_store_result(&dbase);
>       numRows=mysql_num_rows(result);
>       for (i=0;i<numRows;i++) {
>               row = mysql_fetch_row(result);
>               tempq=atoi(row[1]);
>               tempP=atoi(row[0]);
>               genAttrib[tempP][tempq] = atof(row[2]);
>       }
>
> return 0;
> }
>
> So, if someone sees something that I could change to speed things up, or
> I should direct this question elsewhere... thanks for your help and
> thanks for reading this far!
>
> Thanks again,
> Matt



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
