On Nov 24, 2007 2:32 AM, Jon Westcot [EMAIL PROTECTED] wrote:
Hi all:
For those who've been following the saga, I'm working on an application that needs to load a data file consisting of approximately 29,000 to 35,000 records (and not short ones, either) into several tables.
Eventually, I wind up with a query similar to:
UPDATE table_01 SET field_a = 'New value here', updated=CURDATE() WHERE primary_key=12345
Even though you've solved it, one way to work out the problem here would be to change it to a SELECT query (unfortunately MySQL can't EXPLAIN an UPDATE).
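A quick sketch of that suggestion, assuming the mysql_* API that was current at the time (the connection details are placeholders; the table and column names are the ones from the query above):

```php
<?php
// MySQL of that era could not EXPLAIN an UPDATE, so rewrite the WHERE
// clause as an equivalent SELECT and EXPLAIN that instead.
$update  = "UPDATE table_01 SET field_a = 'New value here', updated = CURDATE()
            WHERE primary_key = 12345";

// The 'key' column of the EXPLAIN result shows whether primary_key is
// actually being used as an index for the lookup.
$explain = "EXPLAIN SELECT field_a FROM table_01 WHERE primary_key = 12345";

// $link   = mysql_connect('localhost', 'user', 'pass');  // connection assumed
// $result = mysql_query($explain, $link);
// print_r(mysql_fetch_assoc($result));
```

If the `key` column comes back empty, the UPDATE is doing a full table scan on every one of those 29,000+ rows.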
-Original Message-
From: Jon Westcot [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 24, 2007 4:32 AM
To: PHP General
Subject: [PHP] Performance question for table updating
Hi Rob, et al.:
- Original Message -
From: Andrés Robinet [EMAIL PROTECTED]
-Original Message-
From: Jon Westcot [mailto:[EMAIL PROTECTED]
:: gigantic snip here::
So, long story short (oops -- too late!), what's the consensus among the learned assembly here? Is
Could there be some performance gain by uploading the data to another table and then updating / inserting via SQL?
bastien
From: [EMAIL PROTECTED]
To: php-general@lists.php.net
Date: Sat, 24 Nov 2007 04:03:53 -0700
Subject: Re: [PHP] Performance question
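Bastien's staging-table idea above could be sketched like this — note that the scratch table name (`staging`), the file path, and the column list are illustrative assumptions, not details from the original application:

```php
<?php
// Load the raw import file into a scratch table first, then merge it
// into the real table in a single statement instead of 29,000+
// individual UPDATEs.
$load  = "LOAD DATA LOCAL INFILE '/tmp/import.txt'
          INTO TABLE staging
          FIELDS TERMINATED BY '\\t'";

// INSERT ... ON DUPLICATE KEY UPDATE (MySQL 4.1+) inserts new rows and
// updates existing ones keyed on the primary key, all server-side.
$merge = "INSERT INTO table_01 (primary_key, field_a, updated)
          SELECT primary_key, field_a, CURDATE() FROM staging
          ON DUPLICATE KEY UPDATE
              field_a = VALUES(field_a),
              updated = VALUES(updated)";

// foreach (array($load, $merge) as $sql) {
//     mysql_query($sql) or die(mysql_error());  // connection assumed
// }
```

The win is that the per-row work happens inside the server, with no round trip per record.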
On Sat, 2007-11-24 at 04:03 -0700, Jon Westcot wrote:
Moral of the story? Two, really. First, ensure you always reference values in the way most appropriate for their type. Second, don't make your idiocy public by asking stupid questions on a public forum. <g> What's the quote
I think this might interest you:
http://www.bluerwhite.org/btree/
then again it may make your ears bleed (because of the Maths :-).
Mathieu Dumoulin wrote:
This is more a "how would you do it" than a "how can I do it" question. Didn't have time to try it, but I want to know how mysql_seek_row acts with large result sets.
B-trees or binary trees or hash tables or whatever sort algorithm or memory organisation could be just great if I'd put all my data in the page and tried or needed to sort it, but I can't do that and don't really need to. I'm actually searching for a way to load a ton of data from mysql but
This is more a "how would you do it" than a "how can I do it" question. Didn't have time to try it, but I want to know how mysql_seek_row acts with large result sets.
For example I'm thinking of building a node tree application that can have dual-direction links to nodes attached to different
At 12:02 PM 2/1/2006, Mathieu Dumoulin wrote:
:: snip ::
Miles Thompson wrote:
At 12:02 PM 2/1/2006, Mathieu Dumoulin wrote:
:: snip ::
Hi,
Any time you fetch results from a database it takes up memory; you can't do much about that (you can limit the effect by using LIMIT in conjunction with paging and only getting the columns you need, etc., but that's about it).
If you're using a standard id/parentid type approach you're
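The LIMIT-plus-paging advice above might look something like this — the table (`nodes`) and column names are hypothetical, for illustration only:

```php
<?php
// Build a paged query: fetch only the columns you need, one page at a
// time, so the result set held in memory stays small.
function page_query($page, $perPage) {
    $offset = ($page - 1) * $perPage;  // pages are 1-based
    return sprintf(
        "SELECT id, parent_id, name FROM nodes ORDER BY id LIMIT %d OFFSET %d",
        $perPage,
        $offset
    );
}

$sql = page_query(3, 50);  // third page of 50 rows
```

Each page fetch then frees its result before the next one, instead of holding all rows at once.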
on 25/06/03 12:19 PM, Ow Mun Heng ([EMAIL PROTECTED]) wrote:
Can someone help explain how I can perform a benchmark on the queries or
whatever?
Write some code, run it many, many times, and time it with something like Example 1 on http://au.php.net/microtime; then write alternate code, run it many
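A minimal harness in the spirit of that microtime() example — the two operations being compared (md5 vs sha1) are just placeholders for whatever queries or code you actually want to benchmark:

```php
<?php
// Time a callable over many iterations and return the elapsed seconds.
function benchmark($fn, $iterations = 1000) {
    $start = microtime(true);          // microtime(true) returns a float
    for ($i = 0; $i < $iterations; $i++) {
        $fn();
    }
    return microtime(true) - $start;
}

// Compare two alternatives by timing each the same number of times.
$a = benchmark(function () { return md5('some string'); }, 10000);
$b = benchmark(function () { return sha1('some string'); }, 10000);
printf("md5: %.4fs  sha1: %.4fs\n", $a, $b);
```

Run each variant several times and look at the spread, not a single number — a loaded machine can skew any one run.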
On Jun 25, 2003, Ow Mun Heng claimed that:
|Can someone help explain how I can perform a benchmark on the queries or
|whatever?
|
|
|Cheers,
|Mun Heng, Ow
|H/M Engineering
|Western Digital M'sia
|DID : 03-7870 5168
|
|
Do it many times and time it.
--
Registered Linux user #304026.
lynx -source
I have a question regarding retrieving information. I have functionality in which, on every user click, the system needs to retrieve information for a particular user and display the page according to the retrieved information. Now the question is: which is the scalable solution? (1) Retrieve
It depends on your HW / application / number of visitors.
If you are planning to have many visitors and expect this number to grow constantly, then you should avoid going to the DB as much as possible.
Sincerely
berber
Visit http://www.weberdev.com/ Today!!!
To see where PHP might take you
I think that file_get_contents() is quicker, since include() runs what it gets as normal PHP code. And that gives you the answer to the other question as well. :)
Niklas
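The difference Niklas describes can be seen directly — this sketch writes a throwaway file to the system temp directory just for the demonstration:

```php
<?php
// file_get_contents() returns the file's bytes verbatim;
// include executes them as PHP code.
$file = tempnam(sys_get_temp_dir(), 'snip') . '.php';
file_put_contents($file, "<?php echo 'hello';");

$raw = file_get_contents($file);   // the literal source text, tags and all

ob_start();
include $file;                     // the code actually runs
$executed = ob_get_clean();        // whatever the code printed

unlink($file);
```

So for serving static content, file_get_contents() skips the cost of a parse/execute pass entirely.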
-Original Message-
From: Patrick Teague [mailto:[EMAIL PROTECTED]
Sent: March 4, 2003 10:57
To: [EMAIL
I think you've got the best setup already. I have a PDF library that I do a similar thing with. To update the site I just dump the new PDFs into their directory and the users reload the page to see the new content.
James
-Original Message-
From: Fifield, Mike [mailto:[EMAIL
Hi!
I have a page that uses server-side includes to display different
features. Something like this:
<html>
<!--#include virtual="feature1.php" --> (a few db calls)
<!--#include virtual="feature2.cgi" --> (perl-script with db calls)
<!--#include virtual="feature3.php" --> (even more db calls)
</html>
"Matthew Mundy" [EMAIL PROTECTED] wrote:
I was wondering: what kind of performance reduction is there in including files, or using the auto-prepended one, for a file less than, say, 10 lines? Specifically, I would think that the file IO would be a detriment to such a small file. Without the