Re: mySQL 3.23.32 memory optimization ....

2001-02-02 Thread Marcello Giovagnoli

 
 Do you mean that mysql_store_result() should store everything to a
 local file and mysql_fetch... should read from this file?
 This should not be that hard to do.

 
Yes, you could use SQL_CURSOR_FORWARD_ONLY, which is unused at the
moment, to call mysql_store_result_on_file().

This new function would open a temporary file and perform the same
operations as mysql_store_result(), but using the temporary file
instead of memory.

It would set a flag telling the native MySQL function (I don't
remember its name right now) called in SQLFetchRow() to read from the
temporary file instead of from memory.

This should be easy to do (for you, of course), and it could be a
stopgap while waiting for the cursors that, if I'm not mistaken, are
planned for MySQL 4.

One could activate or deactivate this capability through ODBC or with
SET SQL_PARAMETER=0|1.
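
Just to make the idea concrete, the core of such a function could look
roughly like this; it is only a sketch of the proposed behaviour (the
function does not exist, its name is invented, and the on-disk format
is only indicative):

#include <stdio.h>
#include <mysql.h>

/* Sketch only: stream the rows with mysql_use_result() and spool them
   to a temporary file, so the client memory stays flat.  Each field is
   written as a length prefix followed by its data (NULL and empty
   string are not distinguished here).  The fetch side would then read
   back from this file instead of from memory. */
FILE *store_result_on_file(MYSQL *mysql)
{
  MYSQL_RES *res = mysql_use_result(mysql);
  MYSQL_ROW row;
  FILE *tmp = tmpfile();
  unsigned int i, nfields = mysql_num_fields(res);

  while ((row = mysql_fetch_row(res)) != NULL)
  {
    unsigned long *len = mysql_fetch_lengths(res);
    for (i = 0; i < nfields; i++)
    {
      fwrite(&len[i], sizeof(len[i]), 1, tmp);      /* length prefix */
      if (len[i])
        fwrite(row[i], len[i], 1, tmp);             /* field data */
    }
  }
  mysql_free_result(res);   /* the connection is free for the next query */
  rewind(tmp);              /* SQLFetchRow() would then read from here */
  return tmp;
}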

Do you think you could add this feature to a 3.23.33 release without
disrupting your planned work? If so, please let me know when you could
do it.

Regards
marcello

P.S.

Do you remember my friend Danilo Montesi (University of Milan)? He
sent you an abstract on the triggers implementation one or two months
ago.

I would like to know your opinion on his work, and whether you
consider his approach suitable.




-- 
Marcello Giovagnoli
R & D
Logical System S.r.l.
23, Via R. Scotellaro
I-60035 Jesi (AN) -ITALY
Tel/Fax: +39 731 200374/200378
mailto:[EMAIL PROTECTED]
http://www.logicalsystem.it



mySQL 3.23.32 memory optimization ....

2001-02-01 Thread Marcello Giovagnoli


Hi,

first of all, thanks for your answer; [EMAIL PROTECTED] works very, very well!

We are doing a lot of heavy work with MySQL, which we use as the main
database inside our ERP Oasi (instead of DB2 or Oracle).

We have a performance problem with memory usage.

The process accesses the database via ODBC (iodbc2-12 + myodbc-2.50.22)
and everything seems to be OK, except for the memory usage.

We have a table with 1.5 million records that a client program needs to
scan, and the client process size keeps growing because it reserves
memory to store the rows.

do_query(), inside execute.c, calls mysql_store_result(), so all the
data is transferred to the client.

I tried to uncomment this portion of the code in execute.c, so that I have:

.
 if (stmt_options.cursor_type == SQL_CURSOR_FORWARD_ONLY)
   stmt->result = mysql_use_result(stmt->dbc->mysql);
 else
   stmt->result = mysql_store_result(stmt->dbc->mysql);
.


and I set the corresponding option before the SQLPrepare() call:

SQLSetStmtOption(vst->startHstmt, SQL_CURSOR_TYPE, SQL_CURSOR_FORWARD_ONLY);

I don't need a scrollable cursor, so this way the driver calls
mysql_use_result(), saving a lot of memory.
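
For reference, this is more or less the call sequence I end up with
(just a sketch: hstmt and the query are placeholders, and the columns
would be bound with SQLBindCol() before fetching):

SQLSetStmtOption(hstmt, SQL_CURSOR_TYPE, SQL_CURSOR_FORWARD_ONLY);
SQLPrepare(hstmt, (UCHAR *) "select * from bigtable", SQL_NTS);
SQLExecute(hstmt);
while (SQLFetch(hstmt) == SQL_SUCCESS)
{
  /* read the bound columns here */
}
SQLFreeStmt(hstmt, SQL_CLOSE);   /* close the cursor before reusing hstmt */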

It works, but when I execute another query, building a different
statement, with or without setting the cursor-type statement option
(note that the code performs another SQLAllocStmt()), I get this error:

---
SQL error  : -1
line number: 107
SQL state  : S1000
native error code  : 2014
[TCX][MyODBC]Commands out of sync;  You can't run this command now

the query is:

   [select * from archcont where codcon = ?]

archcont is another table, a little one.

I know that the comment in execute.c says:
/* We can't use USE_RESULT because SQLRowCount will fail in this case! */

but of course I don't need SQLRowCount when I set the cursor type to
SQL_CURSOR_FORWARD_ONLY.

Anyway, if there is another way to save memory, please let me know how
I can do it, without changing my code (if that is possible).

Regards

marcello

marcello giovagnoli
Logical System s.r.l.
Via Rocco Scotellaro 23
60035 JESI AN
ITALY
e-mail: [EMAIL PROTECTED]

---



Re: mySQL 3.23.32 memory optimization ....

2001-02-01 Thread Marcello Giovagnoli

Hi Monty,

 Why do you need to retrieve so many rows to the client?  Can't you use
 LIMIT to restrict the number of result rows?

The point is that the program scans the entire table and processes each
row in a different way, executing other queries inside the main
SQLFetch() loop. Something like this:

SQLExecute(hstmt1, query1)
while (SQLFetchRow(hstmt1) == SQL_SUCCESS)
{
    ...
    SQLExecute(hstmt2, query2)
    while (SQLFetchRow(hstmt2) == SQL_SUCCESS)
    {
        ...
    }
}

Perhaps the code should be rewritten to reflect the right way to write
database code, but my source code does not use ODBC directly; the
database access is embedded, and I have something like this:

dbSetKey(Db, "table1", "column1,column2");
...
dbSetValue("table1.column1", value);
...
err = dbStart(Db, "table1");
while (!err)
{
    ...
    err = dbStart(Db, "table2");
    while (!err)
    {
        ...
        dbNext(Db, "table2");
    }
    dbNext(Db, "table1");
}
dbStart() builds the right query, with the WHERE condition and ORDER BY,
using ODBC.
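
Just to give the idea, dbStart(Db, "table1") ends up doing, through
ODBC, something roughly like the following (purely illustrative: the
real wrapper is internal to our ERP, and hstmt1 and value are assumed
to be already set up):

/* Purely illustrative: roughly what dbStart(Db, "table1") builds and
   runs.  The key columns come from dbSetKey(), the value from
   dbSetValue(); hstmt1 is an already-allocated statement handle. */
char query[512];

sprintf(query,
        "select * from table1 where column1 = '%s' order by column1, column2",
        value);
SQLPrepare(hstmt1, (UCHAR *) query, SQL_NTS);
SQLExecute(hstmt1);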

For each running program there is one mysqld instance it is connected to.

There are up to 300 programs written in this way, and they work; I
cannot modify them in a short time.


 Are you sure you close the statement before doing another ?
 The reason this happens is that mysql_free_result() is not called
 for the previous query and there is still data on the connection
 for the previous query.

 Executing mysql_free_result() will quickly discard all rows from the
 server so that you can issue a new command against it.


Reading the manual :( it seems that I can do this ONLY by using
mysql_store_result(), because each connection can handle only one
active query at a time; is that correct?
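
In other words, with mysql_use_result() every row must be read (or the
result freed) before the same connection can accept another query. A
small sketch with the plain C API of what I mean (the table names are
placeholders, and 'mysql' is an already-connected handle):

#include <mysql.h>

/* Sketch: the connection is busy until the streamed result is fully
   read or freed; only then can it run the next query without
   "Commands out of sync". */
void scan_and_lookup(MYSQL *mysql)
{
  MYSQL_RES *res;
  MYSQL_ROW row;

  mysql_query(mysql, "select * from bigtable");
  res = mysql_use_result(mysql);
  while ((row = mysql_fetch_row(res)) != NULL)
  {
    /* processing the row here is fine, but issuing another
       mysql_query() on this same connection is not */
  }
  mysql_free_result(res);   /* now the connection is free again */

  mysql_query(mysql, "select * from archcont where codcon = 'X'");
  mysql_free_result(mysql_store_result(mysql));
}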

If so, I need some way to reduce the client memory usage when
processing big tables.

I could implement something that builds a SELECT with LIMIT n1,n2 and
re-runs the query when SQLFetchRow() finds no more rows.
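
Something along these lines, just as a sketch (the table name, the
ordering column 'id' and the chunk size are arbitrary):

#include <stdio.h>
#include <mysql.h>

/* Sketch: scan a big table in chunks with LIMIT, so that
   mysql_store_result() only ever holds 'chunk' rows on the client and
   other queries can run between chunks. */
void scan_in_chunks(MYSQL *mysql)
{
  const unsigned long chunk = 10000;
  unsigned long offset = 0;
  char query[256];

  for (;;)
  {
    MYSQL_RES *res;
    MYSQL_ROW row;
    unsigned long nrows = 0;

    sprintf(query, "select * from bigtable order by id limit %lu,%lu",
            offset, chunk);
    if (mysql_query(mysql, query))
      break;
    res = mysql_store_result(mysql);
    while ((row = mysql_fetch_row(res)) != NULL)
    {
      /* process the row; other statements can be executed here */
      nrows++;
    }
    mysql_free_result(res);

    if (nrows < chunk)   /* last chunk reached */
      break;
    offset += chunk;
  }
}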

I don't know how other databases behave in these cases.

In my opinion, the right way to work around this problem would be to
use a temporary file to store the results for big tables, forcing this
option with a parameter such as SQL_BIG_TABLES=1. In that case
mysql_store_result() could store the result in a file, and I could free
the result and build another query.

What do you think about it?


 Which version of MyODBC do you use?

myodbc-2.50.22

 If you have a very old MyODBC version which includes an old
 mysqlclient lib, this may explain your problems; older
 mysql_free_result() versions didn't read the old result from the
 connection, and because of this one would get the error message you
 got if one didn't read all rows from the result before calling
 mysql_query() again.


Thanks, Monty

marcello











To thread or not to thread this is the question ..........

2001-01-31 Thread Marcello Giovagnoli

Hi,

I have a problem with mysql 3.23.30 (binary distribution) and Slackware 7.

Running different programs that access the database, the number of
mysqld processes grows.

But when a single program exits, the number of mysqld processes should
decrease by one.

This works on RedHat 7.0 with the same MySQL distribution, but on
Slackware 7 the number of mysqld processes never goes down, and they
keep the maximum memory used by the last big query.

So I tried to compile the latest source distribution, 3.23.32, on
Slackware 7, but I got the same behaviour.

I know that the mysqld entries should be threads, not processes, and
that the memory should be shared among all running mysqld threads, but
something does not work properly.

Do you think that the libpthread-0.8.so version on Slackware 7 could be
the cause? (RedHat 7.0 has libpthread-0.9.so.)

I cannot upgrade the OS now, and I don't think it is a good idea to
copy libpthread-0.9.so from RedHat to Slackware on the fly (the system
is up and there are 102 users connected!).

Could someone help me ?

Thanks in advance.

marcello
-
marcello giovagnoli
Logical System s.r.l.
Via Rocco Scotellaro 23
60035 JESI AN
ITALY
e-mail: [EMAIL PROTECTED]
-



Slackware AND mySQL 3.23.32 memory problems ....

2001-01-31 Thread Marcello Giovagnoli

Hi,

I'm sorry for my previous question, "To thread or not to thread ...",
but now everything is OK. :)

Looking at /etc/my.cnf, I could see two variables:

set-variable= thread_cache=8
set-variable= thread_concurrency=8  # Try number of CPU's*2

if I understand correctly, this caches up to 8 threads for reuse, which
would explain why there are always at least 8 mysqld entries running
(isn't it?).

And with 3.23.32 the process (thread) size decreases when the program
goes down.
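
To double-check this, I can read the thread counters from a client; a
small sketch with the C API ('mysql' is an already-connected handle):

#include <stdio.h>
#include <string.h>
#include <mysql.h>

/* Sketch: print the Threads_* counters from SHOW STATUS to see how
   many threads are cached, connected and running. */
void show_thread_status(MYSQL *mysql)
{
  MYSQL_RES *res;
  MYSQL_ROW row;

  if (mysql_query(mysql, "show status"))
    return;
  res = mysql_store_result(mysql);
  while ((row = mysql_fetch_row(res)) != NULL)
  {
    if (strncmp(row[0], "Threads", 7) == 0)
      printf("%s = %s\n", row[0], row[1]);
  }
  mysql_free_result(res);
}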

Regards

marcello

