On Jul 4, 2009, at 8:06 AM, Craig Ringer wrote:
On Fri, 2009-07-03 at 15:00 +0100, Vanessa Lopez wrote:
I don't know much about Postgres, and I have no clue what else I can do.
Please, please, any help is very much appreciated. I have lots of
databases and months of work in Postgres (also lots of b
On Jul 1, 2009, at 4:20 AM, Gerd König wrote:
b) pg_statio_user_tables=>
heap_blks_read is the number of disk blocks read for that table
(excluding index access); does heap_blks_hit mean the number of
accesses to the cache for that data?
...and is the number of heap_blks_read in heap_blks_h
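For reference, heap_blks_hit counts requests satisfied from PostgreSQL's shared buffers (not the OS cache), while heap_blks_read counts blocks that had to be fetched from below. A minimal sketch of a per-table hit ratio built only from those two counters:
-- Hit ratio per table: hits / (hits + reads); NULLIF avoids division by zero.
SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric
             / NULLIF(heap_blks_hit + heap_blks_read, 0), 4) AS heap_hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_hit + heap_blks_read DESC;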
On Jul 1, 2009, at 4:15 AM, Scara Maccai wrote:
My query is using an Index Scan instead of a Bitmap Index Scan because it
"thinks" that the number of rows returned will be low (12). In
fact, the number of rows returned is 120.
This happens because there are some WHERE conditions that check
cross
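The quickest way to see such a misestimate is EXPLAIN ANALYZE, which prints the planner's row estimate next to the actual count; a sketch with a hypothetical table and columns:
-- Compare "rows=..." (the estimate) with "actual ... rows=..." in the output.
-- Selectivities of the two conditions are multiplied as if they were independent,
-- which understates the row count when the columns are correlated.
EXPLAIN ANALYZE
SELECT *
FROM orders
WHERE region_id = 42
  AND status = 'open';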
Scott Marlowe wrote:
> On Thu, Jul 9, 2009 at 7:58 PM, Bruce Momjian wrote:
> > Scott Bailey wrote:
> >> Alvaro Herrera wrote:
> >> > Tim Keitt wrote:
> >> >> I am combining query results that I know are disjoint. I'm wondering
> >> >> how much overhead there is in calling union versus union all. (
On Thu, Jul 9, 2009 at 7:58 PM, Bruce Momjian wrote:
> Scott Bailey wrote:
>> Alvaro Herrera wrote:
>> > Tim Keitt wrote:
>> >> I am combining query results that I know are disjoint. I'm wondering
>> >> how much overhead there is in calling union versus union all. (Just
>> >> curious really; I can'
Tim Uckun wrote:
I don't see any Ubuntu packages for 8.4 in the default repositories.
Does anybody know if they will be upgrading the postgresql package to
8.4 or creating a new package for it?
I'd rather use the packages than compile it myself. If anybody has
an argument as to why I should
Hello,
I just installed pg-8.4 on Windows XP but ran into
some unexpected problems.
I am working on some tools to aid English-speaking
learners of Japanese. This of course requires me
to regularly display and enter Japanese text on my
machine, so I have the Regional setting, "Language
for no
Scott Bailey wrote:
> Alvaro Herrera wrote:
> > Tim Keitt wrote:
> >> I am combining query results that I know are disjoint. I'm wondering
> >> how much overhead there is in calling union versus union all. (Just
> >> curious really; I can't see a reason not to use union all.)
> >
> > UNION needs t
Alvaro Herrera wrote:
Tim Keitt wrote:
I am combining query results that I know are disjoint. I'm wondering
how much overhead there is in calling union versus union all. (Just
curious really; I can't see a reason not to use union all.)
UNION needs to uniquify the output, for which it plasters
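The difference shows up directly in the plans; a sketch against a hypothetical table t with an integer id column:
-- UNION must deduplicate the combined rows (an extra sort or hash step);
-- UNION ALL simply concatenates the two inputs.
EXPLAIN
SELECT id FROM t WHERE id < 100
UNION
SELECT id FROM t WHERE id >= 100;

EXPLAIN
SELECT id FROM t WHERE id < 100
UNION ALL
SELECT id FROM t WHERE id >= 100;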
2009/7/9 Tim Uckun
> I don't see any ubuntu packages for 8.4 in the default repositories.
>
> Does anybody know if they will be upgrading the postgresql package to
> 8.4 or creating a new package for it.
>
> I'd rather use the packages than to compile it myself. If anybody has
> an argument as to
On Thu, Jul 9, 2009 at 7:29 PM, Greg Stark wrote:
> On Fri, Jul 10, 2009 at 1:28 AM, Scott Marlowe wrote:
>>
>> $750 is about what a decent RAID controller would cost you, but again
>> it's likely that given your bulk import scenario, you're probably ok
>> without one. In this instance, you're pr
On Fri, Jul 10, 2009 at 1:28 AM, Scott Marlowe wrote:
>
> $750 is about what a decent RAID controller would cost you, but again
> it's likely that given your bulk import scenario, you're probably ok
> without one. In this instance, you're probably best off with software
> RAID than a cheap RAID c
I don't see any Ubuntu packages for 8.4 in the default repositories.
Does anybody know if they will be upgrading the postgresql package to
8.4 or creating a new package for it?
I'd rather use the packages than compile it myself. If anybody has
an argument as to why I should compile it I am all
Thank you all for your assistance. I tried not to be adventurous tweaking
the configuration of a production system, except for a system upgrade, which
had not failed on me before. Burned my fingers once. I need to visit the
documentation and study the admin side of the system more.
thanks al
Greg Stark writes:
> Not really. The OP doesn't say how wide the record rows are but unless
> they're very wide it wouldn't pay to use an index for this even if you
> didn't have to access the heap also. It's going to be faster to scan
> the whole heap and either sort or use a hash. Currently ther
On Thu, Jul 9, 2009 at 4:47 PM, Andres Freund wrote:
> AFAIK the primary cause is that indexes in pg do not store visibility
> information.
Not really. The OP doesn't say how wide the record rows are but unless
they're very wide it wouldn't pay to use an index for this even if you
didn't have to a
On Thu, Jul 9, 2009 at 5:40 PM, wrote:
> Hi -
>
> I'm running Postgres v8.3 on Fedora 10 (home machine - x86_64
> architecture). After realizing that the storage requirements of one of
> my databases will exceed 4Tb, I wanted to see if anyone had any
> suggestions as to hardware setup that works w
Hi -
I'm running Postgres v8.3 on Fedora 10 (home machine - x86_64
architecture). After realizing that the storage requirements of one of
my databases will exceed 4Tb, I wanted to see if anyone had any
suggestions as to hardware setup that works well with Postgres running
on Linux. I have partitio
Niederland writes:
> SQL 2: select ('A' || '\r\n' || 'B') as tt from customer limit 1
> Functions without escaping literal
Really? I get
regression=# select ('A' || '\r\n' || 'B') as tt from customer limit 1;
WARNING: nonstandard use of escape in a string literal
LINE 1: select ('A' || '\r\n'
ja...@aers.ca wrote:
If you used the default locations, I believe it should be (this is from
memory, mind) under c:\program files\postgres\\data\
data is the folder you want.
First, verify the location of pgdata...
sc qc pgsql-8.3
(I'm assuming this is 8.3, modify for other versions)
If you used the default locations, I believe it should be (this is from
memory, mind) under c:\program files\postgres\\data\
data is the folder you want.
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of ja...@aers.ca
Sent: Thursday, July 09, 2009 1
Do the DB folders still exist? If so, back them up, reinstall Postgres
(reinstalling XP probably wiped out either DLLs or registry entries),
and relaunch it. Don't have it initialize a DB on install
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of
Alvaro Herrera wrote:
> Tim Keitt wrote:
> > I am combining query results that I know are disjoint. I'm wondering
> > how much overhead there is in calling union versus union all. (Just
> > curious really; I can't see a reason not to use union all.)
>
> UNION needs to uniquify the output, for whic
Tim Keitt wrote:
I am combining query results that I know are disjoint. I'm wondering
how much overhead there is in calling union versus union all. (Just
curious really; I can't see a reason not to use union all.) (cc me
please; not subscribed...)
THK
I think you can test this one yourself p
Tim Keitt wrote:
> I am combining query results that I know are disjoint. I'm wondering
> how much overhead there is in calling union versus union all. (Just
> curious really; I can't see a reason not to use union all.)
UNION needs to uniquify the output, for which it plasters an additional
sort s
I am combining query results that I know are disjoint. I'm wondering
how much overhead there is in calling union versus union all. (Just
curious really; I can't see a reason not to use union all.) (cc me
please; not subscribed...)
THK
--
Timothy H. Keitt
http://www.keittlab.org/
Please help me with this. I really need your advice as to how to retrieve the
data in my PostgreSQL database.
I installed PostgreSQL on a Windows XP platform five months ago. Just
yesterday my OS broke down, saying NTLDR is missing. What I did was I
re-installed my OS. When I finished m
with Postgres 8.4:
SQL 1: update customer set note = ('A' || '\r\n' || 'B') where 1=0;
generates:
WARNING: nonstandard use of escape in a string literal
LINE 1: update customer set note = ('A' || '\r\n' || 'B') where 1=0;
^
HINT: Use the escape string
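That warning is escape_string_warning firing because the backslashes are being interpreted inside an ordinary string literal. A short sketch of the two usual remedies, reusing the customer table from the example above:
-- The E'' form makes the escape explicit (note really embeds a CR/LF):
UPDATE customer SET note = ('A' || E'\r\n' || 'B') WHERE 1=0;

-- Or treat backslashes literally, per the SQL standard; '\r\n' is then four characters:
SET standard_conforming_strings = on;
SELECT 'A' || '\r\n' || 'B' AS tt;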
How about using an open source ETL tool? You could go with Talend Open
Studio; it is an open source program for data migration, ETL, and data
synchronization.
Check the site: http://www.talend.com/
Scott Mead wrote:
>
> On Tue, Jul 7, 2009 at 11:48 AM, Rstat wrote:
>
>>
>>
>> Hi, Im b
>
> > If I ran a select * from A where date1 >= '2009-07-02' and date1 <
> > '2009-07-15' then I would think it wouldn't check O.
>
[Spotts, Christopher]
I oversimplified this too much - but I figured out what was happening.
If you added the June table as well and added a separate NOT constrain
"Chris Spotts" writes:
> I mistyped, that should be
> alter table D add constraint onlyjuly check (date1 >= '2009-07-01' and date1
> < '2009-08-01')
> Then this is also run
> alter table O add constraint notjuly check (NOT(date1 >= '2009-07-01' and
> date1 < '2009-08-01'))
> If I ran a select *
> -----Original Message-----
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Thursday, July 09, 2009 1:52 PM
> To: Chris Spotts
> Cc: 'postgres list'
> Subject: Re: [GENERAL] constraint checking on partitions
>
> "Chris Spotts" writes:
> > Then this is also run
>
> > alter table O add cons
"Chris Spotts" writes:
> Then this is also run
> alter table O add constraint notjuly check (NOT(date1 >= '2009-07-01' and
> date1 < '2009-07-01'))
> The planner excludes correctly except that it always checks O.
What are the actual queries you're hoping it will exclude for?
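For context, a minimal sketch of the layout being discussed (table names follow the thread: a parent, d for July, o for overflow; the columns are illustrative). Whether the overflow child is skipped depends on the planner being able to prove its CHECK constraint false for every row the WHERE clause admits.
CREATE TABLE a (date1 date, payload text);

CREATE TABLE d (
    CHECK (date1 >= '2009-07-01' AND date1 < '2009-08-01')
) INHERITS (a);

CREATE TABLE o (
    CHECK (NOT (date1 >= '2009-07-01' AND date1 < '2009-08-01'))
) INHERITS (a);

SET constraint_exclusion = on;

-- With the range entirely inside July, check which children appear in the plan:
EXPLAIN SELECT * FROM a
WHERE date1 >= '2009-07-02' AND date1 < '2009-07-15';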
On Thu, Jul 9, 2009 at 11:15 AM, Chris Barnes wrote:
> We are curious about using SAN with fiber channel hba and if anyone else
> uses this technology.
>
> We would also like to know if people have preference to the level of raid
> with/out striping.
I used SurfRAID Triton external RAID units conn
Alban Hertroys wrote:
I don't know how you partitioned your zpools, but to me it seems like
it'd be preferable to have the PostgreSQL tablespaces (and possibly
other data that's likely to be accessed randomly) in a separate zpool
from the rest of the system so you can restrict disabling prefetc
I have several partitions on a history table that are partitioned by a date
range (monthly). However, it's possible for an unexpected but valid date
(either very far in the future or very far in the past) to come in the data
set and so there is an "overflow" table.
Say table A is parent, B is Ap
In response to Willy-Bas Loos :
>
> Whenever i start a big action, like inserting millions of recs or doing a
> large update, the autovacuum fires on top of that.
> It has some adverse effects on performance when i need it most. More than
> once a postgres service crashed on me because of it.
> Su
m_li...@yahoo.it wrote:
> testinsert contains t values between '2009-08-01' and '2009-08-09', and ne_id
> from 1 to 2. But only 800 out of 2 ne_id have to be read; there's no
> need for a table scan!
> I guess this is a reflection of the poor "correlation" on ne_id; but, as I
> said, I
Hi,
On Thursday 09 July 2009 19:25:15 Willy-Bas Loos wrote:
> Whenever i start a big action, like inserting millions of recs or doing a
> large update, the autovacuum fires on top of that.
You can configure autovacuum to use less resources.
http://www.postgresql.org/docs/current/static/runtime-con
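The settings that matter most here are the cost-based vacuum delay knobs (autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit in postgresql.conf), which throttle how much I/O an autovacuum worker generates. On 8.4 they can also be set per table as storage parameters; a sketch with a hypothetical table name and example values only:
-- 8.4: per-table autovacuum throttling via storage parameters.
ALTER TABLE big_history SET (
    autovacuum_vacuum_cost_delay = 50,   -- milliseconds to sleep after each chunk of work
    autovacuum_vacuum_cost_limit = 200   -- cost accumulated before each sleep
);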
Willy-Bas Loos wrote:
> Hi,
>
> Whenever i start a big action, like inserting millions of recs or doing a
> large update, the autovacuum fires on top of that.
> It has some adverse effects on performance when i need it most. More than
> once a postgres service crashed on me because of it.
> Sur
Hi,
Whenever I start a big action, like inserting millions of records or doing a
large update, autovacuum fires on top of that.
It has some adverse effects on performance when I need it most. More than
once a Postgres service crashed on me because of it.
Sure, it had too little memory, but it wou
On Jul 9, 7:53 pm, d...@archonet.com (Richard Huxton) wrote:
> S Arvind wrote:
> > Hi Members,
> > Is there any way to sync a single table across the DBs. We need a single
> > table alone to be equal in all DBs in the single postgres. Presenly we are
> > doing this with the help of Update, insert
> As Greg commented upthread, we seem to be getting forced to the
> conclusion that the initial buffer scan in BufferSync() is somehow
> causing this. There are a couple of things it'd be useful to try
> here:
>
> * see how the size of the hiccup varies with shared_buffers;
I tried decreasing sha
Hello
when you use a pg older than 8.3, please use GROUP BY:
SELECT field FROM table GROUP BY field.
Regards
Pavel Stehule
2009/7/9 Ben Harper :
> Hi,
> Can anybody explain this:
>
> Records: 600,000
> Field Width: Approximately 20 UTF8 characters - type is VARCHAR(25)
> Field is Indexed.
>
> SE
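Some background on that suggestion: before 8.4, SELECT DISTINCT always sorts its input, while GROUP BY has been able to use a hash aggregate for much longer; 8.4 lets DISTINCT hash as well. The two forms return the same rows, so the plans are easy to compare (table and column names below are placeholders):
-- Sort + Unique vs. HashAggregate, depending on version and statistics.
EXPLAIN ANALYZE SELECT DISTINCT field FROM the_table;
EXPLAIN ANALYZE SELECT field FROM the_table GROUP BY field;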
On Thursday 09 July 2009 17:09:13 Ben Harper wrote:
> Hi,
> Can anybody explain this:
>
> Records: 600,000
> Field Width: Approximately 20 UTF8 characters - type is VARCHAR(25)
> Field is Indexed.
>
> SELECT DISTINCT field FROM table;
>
> Takes about 6 seconds. There are 111 distinct items.
>
> On
In response to Ben Harper :
> Hi,
> Can anybody explain this:
>
> Records: 600,000
> Field Width: Approximately 20 UTF8 characters - type is VARCHAR(25)
> Field is Indexed.
>
> SELECT DISTINCT field FROM table;
>
> Takes about 6 seconds. There are 111 distinct items.
What's the output of EXPLAI
Hi,
Thank you all for your kind responses.
Things, however, aren't falling into place.
Will take a short break, rework, and get back.
Probably, with a new problem!
Thanks again,
Regards
Swati
Hi all,
I am trying to calculate an 8-hour moving average using the new
window functions, without success. Here is what I am trying to do:
-- create test table
CREATE TABLE temperatures
(
fulldate timestamp NOT NULL PRIMARY KEY,
value numeric
);
-- inserts
INSERT INTO temperatures
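One caveat worth noting: 8.4's window frames can only run from UNBOUNDED PRECEDING to the current row, with no "n PRECEDING" or interval-based frames, so a trailing 8-hour average is awkward to express as a pure window frame. A sketch that works on 8.4 using a correlated subquery over the time window instead (it assumes the fulldate/value columns defined above):
-- Average of all readings in the 8 hours up to and including each row's timestamp.
SELECT t.fulldate,
       t.value,
       (SELECT avg(p.value)
        FROM temperatures p
        WHERE p.fulldate >  t.fulldate - interval '8 hours'
          AND p.fulldate <= t.fulldate) AS avg_8h
FROM temperatures t
ORDER BY t.fulldate;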
Your assistance is appreciated.
I have a question regarding disk storage for Postgres servers.
We are thinking long term about scalable storage and performance,
and would like some advice or feedback about what other people are using.
We would like to get as much performance from our file s
Hi,
Can anybody explain this:
Records: 600,000
Field Width: Approximately 20 UTF8 characters - type is VARCHAR(25)
Field is Indexed.
SELECT DISTINCT field FROM table;
Takes about 6 seconds. There are 111 distinct items.
On SQLite, and in another place where I have a B+Tree, this query is
faster th
Hello Laurenz,
Thank you for your very thorough walk through the 'ecpg use' of threads with
respect to the sqlca. It was very clear and specific. I reproduced what you did
almost exactly as you have done and I could then also play around with things
to see what happens 'if'... I have lear
On Jul 9, 2009, at 3:53 AM, Yaroslav Tykhiy wrote:
On 08/07/2009, at 8:39 PM, Alban Hertroys wrote:
On Jul 8, 2009, at 2:50 AM, Yaroslav Tykhiy wrote:
IIRC prefetch tries to keep data (disk blocks?) in memory that it
fetched recently.
What you described is just a disk cache. And a trivial
I'm a big libpqxx user, and can confirm that it's a great wrapper, but
good luck getting it to build in Dev-c++, which hasn't had a new
release in 4 years and isn't supported by pqxx. I'm not sure of the
specifics, but Dev-c++ uses a MinGW port of GCC. libpqxx does support
MinGW, but probably not t
John R Pierce wrote:
>ramon rhey serrano wrote:
>> Hi Sir John,
>>
>> Thanks for the links and reply.
>> I downloaded this "libpqxx-3.0" but i really don't know what to do
>> with the file and where to put them. I'm still having hard time how to
>> start the C++ program using Dev C++ as my IDE
S Arvind wrote:
Hi Members,
Is there any way to sync a single table across DBs? We need a single
table alone to be equal in all DBs in a single Postgres instance. Presently we are
doing this with the help of update, insert, and delete triggers.
Is there a better way to do that?
Most trigger-based
On Thu, Jul 9, 2009 at 11:06 AM, serafin segador wrote:
> i found where the error is. thanks for the tip.
>
> i have versions 8.3.7 and 8.4 of PostgreSQL running on the same server, as
> well as versions 1.8 and 1.10 of pgAdmin. although i run the backup routine
> for pg8.4 from pgAdmin 1.10, the
I found where the error is. Thanks for the tip.
I have versions 8.3.7 and 8.4 of PostgreSQL running on the same server, as
well as versions 1.8 and 1.10 of pgAdmin. Although I run the backup
routine for pg 8.4 from pgAdmin 1.10, pgAdmin uses
PostgreSQL\8.3\bin\pg_dump as the default. I no
Hi Albe,
Thanks for replying.
I did all the necessary mods to postgresql.conf and put as much
info as I could in the logs.
The 'culprit' was localhost, with no discernible reason whatsoever
... :O
Anyway, after exhausting all possibilities and l
Hi Rey,
If you give QT4 a try I might be able to help you there ;-)
Regards
Pedro Doria Meunier
GSM: +351 96 17 20 188
Skype: pdoriam
ramon rhey serrano wrote:
>
> Hello,
>
> I'm thinking of making a program in C++ (API) that will connect,
> r
Hi Members,
Is there any way to sync a single table across DBs? We need a single
table alone to be equal in all DBs in a single Postgres instance. Presently we are
doing this with the help of update, insert, and delete triggers.
Is there a better way to do that?
Arvind S
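For the record, the trigger approach usually looks something like the sketch below. It assumes the dblink contrib module, a hypothetical shared table shared_table(id, val), and a hypothetical mirror_dbs table holding connection strings for the other databases; loop prevention and error handling are left out, and the remote statements are not part of the local transaction, so a remote failure can leave the copies out of sync.
-- Fires after each row change and replays it into every other database via dblink.
CREATE OR REPLACE FUNCTION sync_shared_table() RETURNS trigger AS $$
DECLARE
    conn text;
BEGIN
    FOR conn IN SELECT connstr FROM mirror_dbs LOOP
        IF TG_OP = 'INSERT' THEN
            PERFORM dblink_exec(conn,
                'INSERT INTO shared_table (id, val) VALUES ('
                || quote_literal(NEW.id::text) || ', ' || quote_literal(NEW.val) || ')');
        ELSIF TG_OP = 'UPDATE' THEN
            PERFORM dblink_exec(conn,
                'UPDATE shared_table SET val = ' || quote_literal(NEW.val)
                || ' WHERE id = ' || quote_literal(NEW.id::text));
        ELSE  -- DELETE
            PERFORM dblink_exec(conn,
                'DELETE FROM shared_table WHERE id = ' || quote_literal(OLD.id::text));
        END IF;
    END LOOP;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER shared_table_sync
AFTER INSERT OR UPDATE OR DELETE ON shared_table
FOR EACH ROW EXECUTE PROCEDURE sync_shared_table();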
What version of pg_dump and pgAdmin are you using?
In PostgreSQL version 8.3 it's reltriggers, in version 8.4 it's
relhastriggers. Just run the next query to see the difference:
SELECT * FROM pg_class;
Using pgAdmin 1.10, I don't have any problemes with 8.3 nor 8.4
Regards,
Frank
On 9 Jul
Hi!
I am trying to migrate my database from 8.3.7 to 8.4 (both versions
running on the same server: Windows 2003, HP ML350 quad-core Xeon) using
pg_dump and restore with pgAdmin III 1.10.0; this I was able to do. But
I cannot seem to make a backup on 8.4. I keep getting the following error
me
ramon rhey serrano wrote:
Hi Sir John,
Thanks for the links and reply.
I downloaded this "libpqxx-3.0" but i really don't know what to do
with the file and where to put them. I'm still having hard time how to
start the C++ program using Dev C++ as my IDE, what files do i need to
install, what
Since no one replied to
http://www.mail-archive.com/pgsql-general@postgresql.org/msg133360.html, I
tried another approach:
I can't cluster the whole table every day; it would take too long (as I said,
the table has 60M rows, and I have hundreds of them).
Plus, it wouldn't really make much sense: the