Put "" around the column name, eg:
insert into "limit" values (1, 2,3 );
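A fuller sketch of the fix (the table and column layout here is made up for illustration):

```sql
-- "limit" is a reserved word, so it must be double-quoted as an identifier
CREATE TABLE demo ("limit" integer, val integer);
INSERT INTO demo ("limit", val) VALUES (1, 2);
SELECT "limit", val FROM demo;
-- without the quotes, the parser reads LIMIT as the keyword and errors out
```

Note that the quoted form is case-sensitive: "Limit" and "limit" are different identifiers.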
Chris
Gourish Singbal wrote:
> Guys,
> I am having a problem firing queries on one of the tables which is
> having "limit" as the column name.
> If I run an insert/select/update command on that table I get the below error.
> ERROR: syntax error at or near "limit" at character 71
On Mon, 14 Mar 2005 06:14 pm, Gourish Singbal wrote:
> Guys,
>
> I am having a problem firing queries on one of the tables which is
> having "limit" as the column name.
>
> If I run an insert/select/update command on that table I get the below error.
>
> ERROR: syntax error at or near "limit" at character 71
Guys,
I am having a problem firing queries on one of the tables which is
having "limit" as the column name.
If I run an insert/select/update command on that table I get the below error.
ERROR: syntax error at or near "limit" at character 71
Any help to solve the problem would be really great.
Daniel Schuchardt <[EMAIL PROTECTED]> writes:
> I have a query plan that is bad with standard cpu_tuple_costs and good if
> I raise cpu_tuple_costs. Is it good practice to raise them if I
> want to force postgres to use indexes more often?
Reducing random_page_cost is usually the best way to get the planner to favor indexscans.
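For example, tried at session level so nothing is committed to postgresql.conf until the effect is verified (the query is a made-up stand-in):

```sql
SET random_page_cost = 2;   -- default is 4; lower values favor index scans
EXPLAIN ANALYZE SELECT * FROM mitpln WHERE mitpln_id = 42;  -- column name hypothetical
```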
I wrote:
> Since ExecProject operations within a nest of joins are going to be
> dealing entirely with Vars, I wonder if we couldn't speed matters up
> by having a short-circuit case for a projection that is only Vars.
> Essentially it would be a lot like execJunk.c, except able to cope
> with two
Arshavir Grigorian <[EMAIL PROTECTED]> writes:
> Hi,
>
> I have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has an
> Ext3 filesystem which is used by Postgres.
People are going to suggest moving to RAID1+0. I'm unconvinced that RAID5
across 14 drives shouldn't be able to keep up.
Why does this query need so much time? It's very slow.
Thanks
//QUERY
select
coalesce(personaldetails.masterid::numeric,personaldetails.id) +
(coalesce(personaldetails.id::numeric,0)/100) as sorting,
floor(coalesce(personaldetails.masterid::numeric,personaldet
""Tambet Matiisen"" <[EMAIL PROTECTED]> writes
> Hi!
>
> In one of our applications we have a database function, which
> recalculates COGS (cost of goods sold) for a certain period. This involves
> deleting a bunch of rows from one table, inserting them again in the correct
> order, and updating them one-by-one
I continue to be stumped by this. You are right that I should have
listed the estimates provided by explain... basically for the select where
bar = 41, it's estimating there will be 40,000 rows instead of 7, out of
what's actually 5 million records in the table.
So far I've tried inc
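One standard remedy for row estimates this far off (assuming that is the direction being tried here) is to raise the column's statistics target and re-gather stats; the table name "foo" is a stand-in:

```sql
ALTER TABLE foo ALTER COLUMN bar SET STATISTICS 1000;  -- default target is 10
ANALYZE foo;
```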
Hi List,
I have a query plan that is bad with standard cpu_tuple_costs and good if
I raise cpu_tuple_costs. Is it good practice to raise them if I
want to force postgres to use indexes more often? Or is it better to
disable sequential scans?
CIMSOFT=# ANALYSE mitpln;
ANALYZE
CIMSOFT=# EXPL
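As an aside on the "disable sequential scans" idea: the planner switch can be toggled per session for experimentation, which is safer than changing cost parameters globally (the WHERE clause is a made-up example):

```sql
SET enable_seqscan = off;   -- only discourages seqscans; not for production use
EXPLAIN ANALYZE SELECT * FROM mitpln WHERE part_no = 'X100';
SET enable_seqscan = on;
```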
Hi,
I have a RAID5 array (mdadm) with 14 disks + 1 spare. This partition has
an Ext3 filesystem which is used by Postgres. Currently we are loading a
50G database on this server from a Postgres dump (copy, not insert) and
are experiencing very slow write performance (35 records per second).
Top
Hi Arshavir Grigorian,
0. If possible move to 8.0.1 - the bgwriter will help you
1. Create RAID1 for redo and place drives on separate
SCSI channel
2. Update postgresql.conf:
shared_buffers = 1-5
work_mem = 10-30
maintenance_work_mem = 10-30
max_fsm_pages = 150
max
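The figures above were mangled in the archive, so here is an illustrative postgresql.conf fragment in the same spirit (values are assumptions for a machine of this class, not the poster's originals):

```
shared_buffers = 10000            # 8KB pages, ~80MB
work_mem = 10240                  # KB, per sort/hash operation
maintenance_work_mem = 102400     # KB, speeds bulk loads and index builds
max_fsm_pages = 1500000
checkpoint_segments = 16          # fewer checkpoint stalls during the restore
```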
I have forgotten this :
CREATE OR REPLACE FUNCTION date_to_yearmonth_dec(TIMESTAMP) RETURNS VARCHAR AS '
BEGIN
    RETURN extract(year FROM $1) || extract(month FROM $1) - 1;
END' LANGUAGE plpgsql IMMUTABLE;
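A quick sanity check of what the function returns (note that `-` binds tighter than `||`, so the subtraction happens before the concatenation, and on 8.x the numeric results are implicitly cast to text):

```sql
SELECT date_to_yearmonth_dec('2005-03-14'::timestamp);
-- year 2005 concatenated with (3 - 1), i.e. '20052'
```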
Miroslav Šulc <[EMAIL PROTECTED]> writes:
>> Is the data confidential? If you'd be willing to send me a pg_dump
>> off-list, I'd like to replicate this test and try to see where the time
>> is going.
>>
> Thank you very much for your offer. The data are partially confidential
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> There are only JOINs number against number.
Hmph. There's no reason I can see that hash joins should be as slow as
they seem to be in your test.
Is the data confidential? If you'd be willing to send me a pg_dump
off-list, I'd like
Tom Lane wrote:
Miroslav Šulc <[EMAIL PROTECTED]> writes:
shared_buffers = 48000          # min 16, at least max_connections*2, 8KB each
work_mem = 1024                 # min 64, size in KB
maintenance_work_mem = 16384    # min 1024, size in KB
max_stack_depth = 2048          # min
Miroslav Šulc <[EMAIL PROTECTED]> writes:
> I've just tried to uncomment the settings for these parameters with
> no impact on the query speed.
> shared_buffers = 48000          # min 16, at least max_connections*2, 8KB each
> work_mem = 1024                 # min 64, size in
John Arbash Meinel wrote:
Is there a reason to use varchar(1) instead of char(1)? There is probably
zero performance difference; I'm just curious.
No, not at all. I'm just not used to char().
Well, with cursors you can also do "FETCH ABSOLUTE 1 FROM
", which sets the cursor position, and then you can
Tom Lane wrote:
John Arbash Meinel <[EMAIL PROTECTED]> writes:
How about a quick side track.
Have you played around with your shared_buffers, maintenance_work_mem,
and work_mem settings?
Indeed. The hash joins seem unreasonably slow considering how little
data they are processing (unless t
Miroslav Šulc wrote:
John Arbash Meinel wrote:
...
Many of the columns are just varchar(1) (because of the migration from
MySQL's enum field type) so the record is not as long as it might seem.
These fields are just switches (Y(es) or N(o)). The problem is users
can define their own templates and in
John Arbash Meinel <[EMAIL PROTECTED]> writes:
> How about a quick side track.
> Have you played around with your shared_buffers, maintenance_work_mem,
> and work_mem settings?
Indeed. The hash joins seem unreasonably slow considering how little
data they are processing (unless this is being run
John Arbash Meinel wrote:
It's actually more of a question as to why you are doing left outer
joins, rather than simple joins.
Are the tables not fully populated? If so, why not?
Some records do not contain full information (they are collected from
different sources which use different approach
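The distinction being asked about, sketched with a hypothetical lookup table ("Regions" and its columns are assumptions):

```sql
-- LEFT OUTER JOIN keeps every AdDevicesSites row even when the lookup
-- table has no match; the missing attributes come back as NULL
SELECT s.IDPK, r.Name
FROM AdDevicesSites s
LEFT JOIN Regions r ON r.IDPK = s.AdDevicesSiteRegionIDFK;

-- a plain (inner) JOIN silently drops rows with no match, which is
-- only safe if the FK column is always populated
SELECT s.IDPK, r.Name
FROM AdDevicesSites s
JOIN Regions r ON r.IDPK = s.AdDevicesSiteRegionIDFK;
```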
John Arbash Meinel wrote:
How about a quick side track.
Have you played around with your shared_buffers, maintenance_work_mem,
and work_mem settings?
I have tried to set shared_buffers to 48000 now but no speedup
(11,098.813 ms third try). The others are still default. I'll see
documentation and
Miroslav Šulc wrote:
Hi John,
thank you for your response.
I will comment on things separately.
John Arbash Meinel wrote:
...
These external tables contain information that is a unique parameter
of the AdDevice (like Position, Region, County, City etc.), some of them
containing localized descriptions of
Miroslav Šulc wrote:
Hi John,
thank you for your response.
How about a quick side track.
Have you played around with your shared_buffers, maintenance_work_mem,
and work_mem settings?
What version of postgres are you using? The above names changed in 8.0,
and 8.0 also has some performance improvements
Hi Ragnar,
Ragnar Hafstað wrote:
[snip output of EXPLAIN ANALYZE]
for those of us who have not yet reached the level where one can
infer it from the query plan, how about showing us the actual
query too?
I thought it would be sufficient to show where the main bottleneck is.
And in fact, the
> -----Original Message-----
> From: Josh Berkus [mailto:[EMAIL PROTECTED]
> Sent: Sunday, March 13, 2005 12:05 AM
> To: Tambet Matiisen
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] One tuple per transaction
>
>
> Tambet,
>
> > In one of our applications we have a database
Hi John,
thank you for your response.
John Arbash Meinel wrote:
You really need to post the original query, so we can see *why* postgres
thinks it needs to run the plan this way.
Here it is:
SELECT AdDevicesSites.IDPK, AdDevicesSites.AdDevicesSiteSizeIDFK,
AdDevicesSites.AdDevicesSiteRegionIDFK,
On Sun, 2005-03-13 at 16:32 +0100, Miroslav Šulc wrote:
> Hi all,
>
> I am new to PostgreSQL and query optimizations. We have recently moved
> our project from MySQL to PostgreSQL and we are having performance
> problem with one of our most often used queries. On MySQL the speed was
> sufficient
Miroslav Šulc wrote:
Hi all,
I am new to PostgreSQL and query optimizations. We have recently moved
our project from MySQL to PostgreSQL and we are having performance
problem with one of our most often used queries. On MySQL the speed
was sufficient but PostgreSQL chooses a time-expensive query plan.
Hi all,
I am new to PostgreSQL and query optimizations. We have recently moved
our project from MySQL to PostgreSQL and we are having performance
problem with one of our most often used queries. On MySQL the speed was
sufficient but PostgreSQL chooses a time-expensive query plan. I would
like to