At 23:06 -0300 11/4/05, Fabricio Mota wrote:
OK, Shawn, nice tip.
I really didn't know whether it was actually performed by the server or by the client.
But I'll study the MySQL client protocols.
But it still seems strange and needs investigation, because, as I remember, I
once ran a test against the Oracle server. The test was:
I sent a really, really heavy query, containing Cartesian joins across several
tables and producing an enormous result set (billions of billions of
records), and we imagined it would take at least many hours. To my
surprise, the server's response was immediate, with only the first page.
Do you think the server continues the actual processing until the end,
even though no more pages are being requested?
With respect to the original question, you might want to consider using
a prepared statement and a server-side cursor. You can set an attribute
of the cursor that indicates how many rows at a time a fetch operation
should grab from the server.
http://dev.mysql.com/doc/refman/5.0/en/mysql-stmt-attr-set.html
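To illustrate the fetch-size idea described above (in MySQL this would be mysql_stmt_attr_set() on a server-side cursor; see the URL), here is a minimal, self-contained sketch using Python's stdlib sqlite3 driver as a stand-in, since it needs no running server. The cursor's arraysize attribute plays the role of "how many rows one fetch grabs at a time"; the table and row counts are made up for the demo:

```python
import sqlite3

# In-memory table standing in for a large server-side table
# (assumption: the real case would use a MySQL connector and
# a server-side cursor instead of sqlite3).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

cur = conn.cursor()
cur.arraysize = 100          # how many rows one fetch operation grabs
cur.execute("SELECT id FROM t ORDER BY id")

batches = 0
while True:
    rows = cur.fetchmany()   # pulls cur.arraysize rows per call
    if not rows:
        break
    batches += 1
print(batches)               # 1000 rows / 100 per fetch = 10 batches
```

The application never holds more than one batch in memory at a time, which is the behavior a prefetch-sized server cursor gives you.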
My Regards
FM
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Friday, November 4, 2005 02:06
To: Fabricio Mota
Cc: mysql@lists.mysql.com
Subject: Re: RES: Delivery by Demand
Yes, it is a client-side behavior, to the extent that the MySQL server does
not page through data. It builds the complete result set unless you ask for a
LIMIT; in that case it stops building results once it meets the criteria of
your LIMIT.
I really do not know much about Oracle administration and communication
protocols, so I am just guessing. I believe that even your Oracle clients
had to ask for data in pages instead of the full set. Are you sure your
Oracle server was really holding those results for you and only delivering
batches of 100 records? That seems very much like a client-side behavior
that was just hard for you to notice; it could have been designed as part of
the client library. As I said, I just don't know, but I know others on the
list have had some extensive Oracle experience. Maybe one of them can weigh
in on this.
With MySQL, the behavior you want to emulate is definitely something you
control from the client end, either by using the LIMIT clause or by pulling
down rows in batches. You have to remember, though, that while the
client is processing its latest batch of rows, the server still holds
onto a complete result set and has to maintain an open connection to your
client. It is really in the best interest of performance for your client to
spend as little time as possible getting the data from the server. That means
you should only write queries that ask for the data you actually need, and you
should get the data out of the server as soon as possible. That way the
server has more resources available to deal with other queries.
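The batch-pulling pattern described above can be sketched as a paging loop. This is a self-contained demo using Python's stdlib sqlite3 (an assumption for runnability; the SQL shape is the same you would send to MySQL), using keyset paging on an indexed column rather than OFFSET, which tends to scale better on large tables. The table name and sizes are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(250)])

PAGE = 100
fetched = 0
last_id = -1                     # keyset paging: remember the last row seen
while True:
    # Each round trip asks the server only for the next page of rows,
    # so no full result set sits open on the server between requests.
    rows = conn.execute(
        "SELECT id FROM big WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, PAGE),
    ).fetchall()
    if not rows:
        break
    fetched += len(rows)
    last_id = rows[-1][0]
print(fetched)   # all 250 rows, pulled in pages of at most 100
```

The trade-off Shawn notes still applies: each page is a separate query, so the server does a little work per request but never has to hold the whole result set open for a slow client.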
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
Fabricio Mota [EMAIL PROTECTED] wrote on 11/03/2005 10:52:34 PM:
Shawn,
So are you telling me that it's a configuration in the client, but not in the
server?
FM
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 2, 2005 17:37
To: Fabricio Mota
Cc: mysql@lists.mysql.com
Subject: Re: Delivery by Demand
Fabricio Mota [EMAIL PROTECTED] wrote on 11/02/2005 10:23:46
AM:
Hi all,
In the past I worked as an Oracle user. I noticed that in Oracle (or maybe
just in that configuration), when we request a great amount of data, such as:
select * from million_records_table
it does not deliver the entire table at the first moment. It delivers a
small amount of data - such as a single page containing about 100
records - and waits for the cursor to request record No. 101 before fetching
the next set of data.
In MySQL - at least with the default configuration I have used - this does
not happen. It delivers all the millions of records in the table,
unconditionally. I know there is the LIMIT N clause, to limit the query to
its first N records, but is there a way to guarantee delivery by demand, as
Oracle does, without having to alter the application's SQL code?
Thank you
Thank you
I know that in the C API (and others) there are two functions to retrieve
records from the server. One is mysql_store_result(), which brings all of
your results into your machine in a single pull.
The second is mysql_use_result(). That function sets up a transfer process
that pulls the rows from the server one at a time. If you need 100 rows of
data, you issue 100 mysql_fetch_row() calls. You are responsible for
storing the records for later re-use.
Is that the flexibility you are looking for?
http://dev.mysql.com/doc/refman/5.0/en/mysql-use
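The two C-API calls differ mainly in where the result set is buffered: mysql_store_result() buffers everything client-side in one pull, while mysql_use_result() streams rows one at a time via repeated mysql_fetch_row() calls. A rough analogue, sketched with Python's stdlib sqlite3 so it runs without a server (an assumption for the demo; the real MySQL calls are the C functions just named):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])

# Like mysql_store_result(): one call buffers every row on the client.
stored = conn.execute("SELECT id FROM t ORDER BY id").fetchall()

# Like mysql_use_result() + repeated mysql_fetch_row(): rows arrive
# one at a time, and the caller keeps only what it stores itself.
streamed = []
cur = conn.execute("SELECT id FROM t ORDER BY id")
while True:
    row = cur.fetchone()      # one row per call, like mysql_fetch_row()
    if row is None:
        break
    streamed.append(row)

print(len(stored), len(streamed))
```

Both loops see the same rows; the streaming form keeps client memory flat, at the cost of holding the connection (and, in MySQL's case, the server-side result) open until the last row is fetched.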