On Sat, Jun 7, 2008 at 5:16 PM, Viktor Rosenfeld
<[EMAIL PROTECTED]> wrote:
> Hi Scott,
>
> Am 07.06.2008 um 16:53 schrieb Scott Marlowe:
>>>
>>> I'm experimenting with different indexes to speed up my queries and I was
>>> wondering if it is possible to temporarily deactivate an index, so it
>>> w
Hi Scott,
Am 07.06.2008 um 16:53 schrieb Scott Marlowe:
I'm experimenting with different indexes to speed up my queries and I was
wondering if it is possible to temporarily deactivate an index, so it won't
be considered in the evaluation of query plans. The reason is that dropping
and then
Here is a cast function I wrote some years ago to convert a couple of
money columns to numeric
http://zigo.org/postgresql/#cast_money_to_numeric
You already have a solution, but maybe it is of value to someone else.
/Dennis
Ken Winter skrev:
Right you are, Tom!
In case anyone else is fa
Right you are, Tom!
In case anyone else is facing the same migration, pasted in below is a
pl/pgsql function that does the conversion.
~ Thanks to all
~ Ken
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of Tom Lane
> Sent: Saturday,
Thanks, but the join clause is there, it's just buried in the subqueries.
If there is a problem, it is probably that the loop never ends.
Or it could be that the answer is exponential, and I just have too many
rows in the source table and too deep a graph.
I figured out how to do it in the ap
On Fri, Jun 6, 2008 at 3:46 PM, Ethan Collins
<[EMAIL PROTECTED]> wrote:
> I have got a table that holds 2 columns: date_id and info. date_id is
> bigint, created as, e.g., 17 May 2008, 19:05 hrs => 200805171905. This
> table is populated with {date, info} pairs that I receive frequently, and
> th
Celso Pinto wrote:
> So my questions are: is this at all possible? If so, is it possible to
> increase that maximum size?
Indexing the arrays themselves is probably pretty useless. Try indexing
the elements, which you can do with the intarray contrib module.
--
Alvaro Herrera
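A minimal sketch of the indexing Alvaro suggests, assuming a hypothetical table with an int[] column; the operator class name is the intarray module's default for GiST:

```sql
-- Requires the intarray contrib module (on 9.1+: CREATE EXTENSION intarray;
-- on older releases, install the contrib SQL script).
CREATE TABLE docs (
    id   serial PRIMARY KEY,
    tags int[]                  -- hypothetical column holding the array
);

-- Index the array elements via the intarray GiST operator class.
CREATE INDEX docs_tags_idx ON docs USING gist (tags gist__int_ops);

-- The module's operators can then use the index, e.g. "contains"
-- and "overlaps":
SELECT id FROM docs WHERE tags @> ARRAY[42];
SELECT id FROM docs WHERE tags && ARRAY[1, 2, 3];
```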
Tino Wildenhain <[EMAIL PROTECTED]> writes:
>> to_char() and back to numeric shouldn't be a problem within the database
>> and we have regex too if anything fails. I don't think you need to
> dump and edit the dump to achieve that.
> Ah sorry forget that... neither to_char nor cast to text works.
askel <[EMAIL PROTECTED]> writes:
> Performance is at least a few times better when EXISTS is used.
It really shouldn't be. PG knows more possible plans for IN than
EXISTS, so IN should pretty much always be equal or better ... unless
the planner is making the wrong choice. I speculate that you ha
On Sat, Jun 7, 2008 at 8:05 AM, Viktor Rosenfeld
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm experimenting with different indexes to speed up my queries and I was
> wondering if it is possible to temporarily deactivate an index, so it won't
> be considered in the evaluation of query plans. The reason
Gregory Stark wrote:
REINDEX scans the table
precisely once and sorts it.
For the bloat, as opposed to corruption, case -
what information is needed from the table that
is not in the old index? Why would a sequential
read of the old index alone (then some processing)
not suffice?
Thanks,
J
Hi,
I'm experimenting with different indexes to speed up my queries and I was
wondering if it is possible to temporarily deactivate an index, so it won't
be considered in the evaluation of query plans. The reason is that dropping
and then rebuilding an index takes up time which I would r
> -Original Message-
>
> Hi all,
>
> I have been using IN clause almost exclusively until recently I tried
> to use EXISTS and gained significant performance increase without
> changing/creating any indexes:
>
> SELECT ... FROM a WHERE a.ref IN (SELECT b.id FROM b WHERE ...)
>
> vs
>
> SELECT
> -Original Message-
> The small table is a listing of county fips codes, their name, and the
> geometry for the county. Each fips is only listed once. The big table
> is multiple emissions for each county, the parameter for the emission,
> and the source code for the emission (scc). Each
Hi,
Tino Wildenhain wrote:
Hi,
Shane Ambler wrote:
...
I think the steps Joshua is referring to are -
1. pg_dump -t mytable_with_money mydb > mytable_backup.sql
2. edit table definition in backup file to use numeric
3. remove $ and , from money column data
4. DROP TABLE mytable_with_money
5.
Bill, thanks, that made me think a while.
But doesn't that just push the problem up the chain a notch?
Adding a table id makes the row unique in the database but not beyond.
Then you add a database id to the table/row id and you are unique across
databases?
But can you be universally unique wit
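For ids that must be unique beyond any one database, the usual answer is a UUID: PostgreSQL 8.3 has a native uuid type, and the uuid-ossp contrib module provides generators. A sketch with an illustrative table name:

```sql
-- Requires the uuid-ossp contrib module to be installed.
SELECT uuid_generate_v4();        -- a random, universally unique id

CREATE TABLE items (
    id      uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    payload text
);
```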
Hi,
Shane Ambler wrote:
...
I think the steps Joshua is referring to are -
1. pg_dump -t mytable_with_money mydb > mytable_backup.sql
2. edit table definition in backup file to use numeric
3. remove $ and , from money column data
4. DROP TABLE mytable_with_money
5. psql < mytable_backup.sql
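On releases where money can be cast to text (the thread notes this fails on the poster's version, which is what Dennis's cast function covers), the same conversion can be done in place with ALTER TABLE, avoiding the dump/edit/restore cycle. A sketch with a hypothetical column name:

```sql
-- "amount" is a hypothetical money column; the USING expression strips
-- the currency symbol and thousands separators before casting.
ALTER TABLE mytable_with_money
  ALTER COLUMN amount TYPE numeric(12,2)
  USING replace(replace(amount::text, '$', ''), ',', '')::numeric;
```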
Wh
On Jun 7, 2008, at 2:58 AM, Ralph Smith wrote:
CODE:
===
CREATE OR REPLACE FUNCTION find_next_delim(invar varchar, delimlist varchar) RETURNS integer AS
$$
/* OVERLOADED Function. The other version takes a 3rd parameter as the
starting positi
Hi all,
I have been using IN clause almost exclusively until recently I tried
to use EXISTS and gained significant performance increase without
changing/creating any indexes:
SELECT ... FROM a WHERE a.ref IN (SELECT b.id FROM b WHERE ...)
vs
SELECT ... FROM a WHERE EXISTS (SELECT 1 FROM b WHERE a.ref=
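For comparison, the two complete forms look roughly like this against an assumed schema (the original post elides the subquery details):

```sql
-- Assumed schema, not from the post: a(ref int), b(id int, flag boolean)

-- IN form:
SELECT a.* FROM a WHERE a.ref IN (SELECT b.id FROM b WHERE b.flag);

-- Equivalent EXISTS form:
SELECT a.* FROM a WHERE EXISTS (SELECT 1 FROM b WHERE b.id = a.ref AND b.flag);
```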
I have got a table that holds 2 columns: date_id and info. date_id is
bigint, created as, e.g., 17 May 2008, 19:05 hrs => 200805171905. This
table is populated with {date, info} pairs that I receive frequently, and
there can be multiple such entries with a single date_id but varying info,
for eg. 2
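A bigint key of that YYYYMMDDHHMI shape can be derived from a timestamp with to_char; a minimal sketch:

```sql
-- Build the bigint date_id from a timestamp:
SELECT to_char(timestamp '2008-05-17 19:05', 'YYYYMMDDHH24MI')::bigint;
-- → 200805171905
```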
Hi guys, maybe this is just a teaser for me, but any help would be
awesome. My best crack at the solution has not returned yet after a
good ten minutes. I'll post the explain analyze if it ever comes
back. I have no indexing, which is probably embarrassing; I'm just
not quite sure what to index