- Original Message -
From: Tino Wildenhain [EMAIL PROTECTED]
To: Ben-Nes Yonatan [EMAIL PROTECTED]
Cc: Martijn van Oosterhout kleptog@svana.org;
pgsql-general@postgresql.org
Sent: Sunday, October 02, 2005 4:26 PM
Subject: Re: [GENERAL] Broken pipe
On Sunday, 02.10.2005 at 16:39,
Have you checked the max_execution_time
value in the php.ini file?
--Nirmalya
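For reference, the php.ini directive being asked about; the value shown is only an example (0 disables the limit entirely), and note that on non-Windows platforms time spent waiting on the database often does not count toward this limit:

```ini
; Maximum execution time of each script, in seconds (0 = no limit)
max_execution_time = 0
```

The same thing can be done per-script by calling set_time_limit(0) at the top of the PHP file.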
--- Ben-Nes Yonatan [EMAIL PROTECTED] wrote:
Hi all,
I sent the following email to the PHP mailing list also, but maybe this
is a more appropriate mailing list.
I wrote a PHP script which is running very long queries (hours) on a
database.
I seem to have a problem running the code when there are single queries
which take a long time: eventually I get no connection to the file
(broken pipe).
Am I correct in my assumption? If so, how can I set PHP to wait as long
as I tell it?
Of course, if I'm wrong I would like to know the reason also :)
Thanks in advance,
Ben-Nes Yonatan
---(end of broadcast)---
TIP 4: Have you searched our list archives?
http://archives.postgresql.org
Everyone! (Happy new year in Hebrew :))
Ben-Nes Yonatan
Martijn van Oosterhout wrote:
On Sun, Oct 02, 2005 at 12:07:18PM +0200, Ben-Nes Yonatan wrote:
I wrote a PHP script which is running very long queries (hours) on a
database.
I seem to have a problem running the code when there are single queries
which take a long time (like 5 hours).
Alvaro Herrera wrote:
On Tue, Aug 30, 2005 at 10:39:57PM -0500, Bruno Wolff III wrote:
On Wed, Aug 31, 2005 at 01:27:30 +0200,
Ben-Nes Yonatan [EMAIL PROTECTED] wrote:
Now again I'm probably just paranoid, but when I'm starting a transaction
and in it I'm making more than 4 billion different
Martijn van Oosterhout wrote:
On Wed, Aug 31, 2005 at 09:19:05AM +0200, Ben-Nes Yonatan wrote:
If the subtransaction writes at least a tuple, it counts as another
transaction. Else it doesn't count.
Oh crap, I fear that now I'm in serious trouble.
Where can I read about this limitation?
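The limitation in question is the 32-bit transaction ID space (about 4 billion IDs, with wraparound avoided by VACUUM's freezing). A standard way to watch consumption, using the `age()` function and the `pg_database` catalog:

```sql
-- How many transaction IDs each database has consumed since its
-- last full freeze; VACUUM must run before this approaches 2^31
SELECT datname, age(datfrozenxid) FROM pg_database;
```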
Bohdan Linda wrote:
On Tue, Aug 30, 2005 at 06:07:24PM +0200, Michael Fuhr wrote:
tables, and a VACUUM might start or complete immediately after you
issue the query but before you read the results). This method is
therefore unreliable.
I intend to do the VACUUM FULL during quiet hours. Thanks!
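A quiet-hours schedule like the one described could be a simple crontab entry (the database and table names here are assumptions):

```
# Run VACUUM FULL on the big table every night at 03:00
0 3 * * * psql -d mydb -c 'VACUUM FULL VERBOSE items;'
```

VACUUM FULL takes an exclusive lock on the table for its whole run, which is exactly why quiet hours are the right time for it.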
Ben-Nes Yonatan
Tom Lane wrote:
Ben-Nes Yonatan [EMAIL PROTECTED] writes:
Indexes:
items_items_id_key UNIQUE, btree (items_id)
items_left btree (left)
items_left_right btree (left, right)
You could get rid of the items_left index --- it's redundant with the
first column of the combined index
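Acting on that advice is a one-liner (index names as shown in the thread):

```sql
-- items_left_right (left, right) already serves lookups on "left" alone,
-- so the single-column index is redundant
DROP INDEX items_left;
```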
as only one transaction, right? (should I duck to avoid the manual? ;))
As always, thanks a lot!
Ben-Nes Yonatan
in a hurry to finish this project, so
any help or piece of a clue will be gladly welcomed!
Thanks a lot in advance (even only for reading what I wrote :P),
Ben-Nes Yonatan
Canaan Surfing ltd.
http://www.canaan.net.il
Bruno Wolff III wrote:
On Sat, Aug 27, 2005 at 18:19:54 +0530,
sunil arora [EMAIL PROTECTED] wrote:
Bruno,
thanks for the reply,
we did run VACUUM on it, and we do it regularly to maintain its
performance, but it's not giving the expected results.
Did you do VACUUM FULL or just plain VACUUM?
Jim C. Nasby wrote:
On Tue, Aug 23, 2005 at 12:27:39AM +0200, Ben-Nes Yonatan wrote:
Jim C. Nasby wrote:
Emptying the cache will not show real-life results. You are always going
to have some stuff cached, even if you get a query for something new. In
this case (since you'll obviously
Dann Corbit wrote:
-Original Message-
From: Ben-Nes Yonatan [mailto:[EMAIL PROTECTED]
Sent: Monday, August 22, 2005 3:28 PM
To: Jim C. Nasby; Sean Davis; Dann Corbit
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Query results caching?
On Mon, Aug 22, 2005 at 10:13:49PM
to delete that caching after every
query test that I run, because I want to see the real-time results for my
queries (it's for a search option for users, so it will vary a lot).
Is it possible to do it manually each time, or maybe only from the
configuration?
Thanks in advance,
Ben-Nes Yonatan
Canaan
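PostgreSQL itself has no command to discard its buffer cache on demand; the usual cold-cache test setup is to restart the server and, on a reasonably recent Linux as root, drop the OS page cache as well. The data directory path is an assumption:

```
# Restart PostgreSQL to empty shared_buffers
pg_ctl restart -D /usr/local/pgsql/data
# Flush dirty pages, then drop the OS page cache (Linux, as root)
sync
echo 3 > /proc/sys/vm/drop_caches
```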
Sean Davis wrote:
On 8/22/05 1:59 PM, Dann Corbit [EMAIL PROTECTED] wrote:
-Original Message-
From: [EMAIL PROTECTED] [mailto:pgsql-general-
[EMAIL PROTECTED] On Behalf Of Ben-Nes Yonatan
Sent: Monday, August 22, 2005 9:03 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Query
On 22.08.2005 at 22:13:49 +0200, Ben-Nes Yonatan wrote the following:
I think that I was misunderstood; I'll make an example:
Okay:
Let's say that I'm making the following query for the first time on the
motorcycles table, which has an index on the manufacturer field:
EXPLAIN ANALYZE SELECT
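Filling out the truncated example, a run of the sort being described might look like this (the WHERE clause value is invented):

```sql
-- First execution: pages come from disk, so timings reflect a cold cache
EXPLAIN ANALYZE SELECT * FROM motorcycles WHERE manufacturer = 'Honda';
-- Running it again immediately reuses cached buffers and reports a much
-- lower runtime, which is the caching effect being discussed
EXPLAIN ANALYZE SELECT * FROM motorcycles WHERE manufacturer = 'Honda';
```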
that I tried) it
becomes extremely slow. What can I do to solve this problem?
Thanks in advance,
Ben-Nes Yonatan
Canaan Surfing ltd.
http://www.canaan.net.il
) that it's making the process of deleting
the content way too slow, and I need to do it each day. Am I correct
with what I'm doing?
Thanks again,
Yonatan
Richard Huxton wrote:
Ben-Nes Yonatan wrote:
If I query: DELETE FROM table1; it will just get stuck...
If I try: DELETE FROM table1
was able to delete the
current row which stuck the process, but then I got stuck at some other
row in the table.
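If the goal each day is to empty table1 completely, one alternative to the stuck DELETE is TRUNCATE, which reclaims the whole table in one cheap operation rather than row by row (it takes an exclusive lock, and it cannot be used when other tables hold foreign keys pointing at this one):

```sql
-- Empties table1 in one operation; far faster than DELETE FROM table1
-- when every row is being removed
TRUNCATE TABLE table1;
```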
Thanks in advance,
Ben-Nes Yonatan
Richard Huxton wrote:
Ben-Nes Yonatan wrote:
Richard Huxton wrote:
Can anyone tell me if Pl/PgSQL can support a multi dimensional array
(of up to 5 levels top I guess) with about 100,000 values?
and does it stress the system too much?
I can't imagine it being wonderful - you probably
Hi all,
Can anyone tell me if Pl/PgSQL can support a multi dimensional array (of
up to 5 levels top I guess) with about 100,000 values?
and does it stress the system too much?
Thanks!
Ben-Nes Yonatan
Canaan Surfing ltd.
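To the array question above: PL/pgSQL can use PostgreSQL's multi-dimensional arrays. A minimal sketch follows (the function and variable names are made up; whether 100,000 elements across 5 dimensions performs acceptably is best measured rather than assumed):

```sql
CREATE OR REPLACE FUNCTION array_demo() RETURNS integer AS $$
DECLARE
    -- a two-dimensional integer array; PostgreSQL allows several dimensions
    grid integer[][];
BEGIN
    grid := ARRAY[[1, 2, 3], [4, 5, 6]];
    RETURN grid[2][3];  -- subscripts are 1-based; this yields 6
END;
$$ LANGUAGE plpgsql;
```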
Richard Huxton wrote:
Ben-Nes Yonatan wrote:
Hi all,
Can anyone tell me if Pl/PgSQL can support a multi dimensional array
(of up to 5 levels top I guess) with about 100,000 values?
and does it stress the system too much?
I can't imagine it being wonderful - you probably want a different
be any period of time without data at the Main table).
I guess that what I really want to know is how much all of this process
will stress the server, and what can I do to let the server work on it
in a way that it won't disturb the rest of the processes.
Thanks a lot again,
Ben-Nes Yonatan
process I suspect.
F. INSERT the data from Temp table2 to the Main table.
G. End transaction + quit from PL/pgSQL. }
3. Delete all the files.
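Steps F and G above might be sketched as follows (the actual table names and column lists are assumptions; the thread only calls them the temp table and the Main table):

```sql
BEGIN;
-- Step F: copy the staged rows into the main table
-- (assumes both tables have the same column layout)
INSERT INTO main_table SELECT * FROM temp_table2;
-- Step G: end the transaction
COMMIT;
```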
Thanks a lot in advance, and again I'm sorry for the length of the mail :)
Ben-Nes Yonatan
Canaan Surfing ltd.
http://www.canaan.net.il
On Fri, Feb 04, 2005 at 09:27:08AM +0200, Ben-Nes Yonatan wrote:
Hi all,
Does anyone know if PostgreSQL has a function which works like
load_file() of MySQL?
I am not quite sure what load_file() does, but check the COPY command
and the analogous \copy in psql. As with many other PostgreSQL
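A sketch of the COPY suggestion (the file path and table name are invented):

```sql
-- Server-side: the file must be readable by the database server process
COPY items FROM '/tmp/items.txt';
```

The psql \copy variant reads the file on the client side instead, which avoids needing filesystem access on the server: \copy items from items.txt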
Thanks in advance,
Ben-Nes Yonatan