Re: [GENERAL] OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery?

2009-10-20 Thread Greg Smith

On Tue, 20 Oct 2009, Craig Ringer wrote:

You made an exact image of each drive onto new, spare drives with `dd' 
or a similar disk imaging tool before trying ANYTHING, right? Otherwise, 
you may well have made things worse, particularly since you've tried to 
resync the array. Even if the data was recoverable before, it might not 
be now.


This is actually pretty hard to screw up with Linux software RAID.  It's 
not easy to corrupt a working volume by trying to add a bogus one or 
typing simple commands wrong.  You'd have to botch the drive addition 
process altogether and screw with something else to take out a good drive.



If the problem is just a few bad sectors, you can usually just
force-re-add the drives into the array and then copy the array contents
to another drive either at a low level (with dd_rescue) or at a file
system level.


This approach has saved me more than once.  On the flip side, I have also 
more than once accidentally wiped out my only good copy of the data when 
making a mistake during an attempt at stressed out heroics like this. 
You certainly don't want to wander down this more complicated path if 
there's a simple fix available within the context of the standard tools 
for array repairs.



On a side note: I'm personally increasingly annoyed with the tendency of
RAID controllers (and s/w raid implementations) to treat disks with
unrepairable bad sectors as dead and fail them out of the array.


Given how fast drives tend to go completely dead once the first error 
shows up, this is a reasonable policy in general.


Rather than failing a drive and as a result rendering the whole array 
unreadable in such situations, it should mark the drive defective, set 
the array to read-only, and start screaming for help.


The idea is great, but you have to ask exactly how the hardware and 
software involved are supposed to enforce making the array read-only.  I 
don't think the ATA and similar command sets implement that concept in a 
way that would let it happen at the level it needs to for hardware RAID. 
Linux software RAID could keep you from mounting the array read/write in 
this situation, but the way errors percolate up from the disk devices to 
the array ones in Linux has too many layers in it (especially if LVM is 
stuck in the middle there too) for that to be simple to implement either.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery?

2009-10-20 Thread Scott Marlowe
On Wed, Oct 21, 2009 at 12:10 AM, Greg Smith  wrote:
> On Tue, 20 Oct 2009, Ow Mun Heng wrote:
>
>> Raid10 is supposed to be able to withstand up to 2 drive failures if the
>> failures are from different sides of the mirror.  Right now, I'm not sure
>> which drive belongs to which. How do I determine that? Does it depend on the
>> output of /proc/mdstat and in that order?
>
> You build a 4-disk RAID10 array on Linux by first building two RAID1 pairs,
> then striping both of the resulting /dev/mdX devices together via RAID0.

Actually, later versions of Linux have a direct RAID-10 level built in.
I haven't used it.  Not sure how it would look in /proc/mdstat either.

>  You'll actually have 3 /dev/mdX devices around as a result.  I suspect
> you're trying to execute mdadm operations on the outer RAID0, when what you
> actually should be doing is fixing the bottom-level RAID1 volumes.
>  Unfortunately I'm not too optimistic about your case though, because if you
> had a repairable situation you technically shouldn't have lost the array in
> the first place--it should still be running, just in degraded mode on both
> underlying RAID1 halves.

Exactly.  Sounds like both drives in a pair failed.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery?

2009-10-20 Thread Greg Smith

On Tue, 20 Oct 2009, Ow Mun Heng wrote:

Raid10 is supposed to be able to withstand up to 2 drive failures if the 
failures are from different sides of the mirror.  Right now, I'm not 
sure which drive belongs to which. How do I determine that? Does it 
depend on the output of /proc/mdstat and in that order?


You build a 4-disk RAID10 array on Linux by first building two RAID1 
pairs, then striping both of the resulting /dev/mdX devices together via 
RAID0.  You'll actually have 3 /dev/mdX devices around as a result.  I 
suspect you're trying to execute mdadm operations on the outer RAID0, when 
what you actually should be doing is fixing the bottom-level RAID1 
volumes.  Unfortunately I'm not too optimistic about your case though, 
because if you had a repairable situation you technically shouldn't have 
lost the array in the first place--it should still be running, just in 
degraded mode on both underlying RAID1 halves.
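The failure semantics described here can be expressed as a toy model: a nested RAID10 survives a two-drive failure only if the failed drives sit in different RAID1 pairs. An illustrative Python sketch (the device names are invented, this is not mdadm itself):

```python
def array_survives(pairs, failed):
    # The striped array is readable only if every RAID1 pair still
    # has at least one working member.
    return all(any(disk not in failed for disk in pair) for pair in pairs)

# Hypothetical 4-disk layout: two RAID1 pairs (md0, md1) striped as md2.
pairs = [('sda', 'sdb'), ('sdc', 'sdd')]

print(array_survives(pairs, {'sda', 'sdc'}))  # True: one drive lost per mirror
print(array_survives(pairs, {'sda', 'sdb'}))  # False: both halves of one mirror
```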


There's a good example of how to set one of these up at 
http://www.sanitarium.net/golug/Linux_Software_RAID.html ; note how the 
RAID10 involves /dev/md{0,1,2,3} for the 6-disk volume.


Here's what will probably show you the parts you're trying to figure out:

mdadm --detail /dev/md0
mdadm --detail /dev/md1
mdadm --detail /dev/md2

That should give you an idea what md devices are hanging around and what's 
inside of them.


One thing you don't see there is what devices were originally around if 
they've already failed.  I highly recommend saving a copy of the mdadm 
detail (and "smartctl -i" for each underlying drive) on any production 
server, to make it easier to answer questions like "what's the serial 
number of the drive that failed in /dev/md0?".


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How many lines per day?

2009-10-20 Thread Tatsuo Ishii
Hi,

While attending a Linux conference, someone said that on average 10,923
lines of code are added and 5,547 lines of code are deleted per day in
Linux development. This is an interesting number, and I wonder whether
anybody has ever tried to calculate these numbers for PostgreSQL.

Does anyone know such a number?
--
Tatsuo Ishii
SRA OSS, Inc. Japan

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] multi-company design/structure ?

2009-10-20 Thread Roderick A. Anderson

John wrote:

Hi,

Is it better to create multi databases or create multi schemas?


John,  I just gave a talk on multi-tenant Pg clusters at PgConf West 
2009 that may help you, but I ran into vehicle problems and just got home 
this evening.  As quickly as I can I'll get the bits-and-pieces pulled 
together and pass them along.


Too frazzled to give it a try this evening.  :-(


\\||/
Rod
--


I am writing a program that can have multiple instances, sort of like a financial 
accounting system that can have multiple companies, where each company has a 
different name but the tables are an exact match to each other.  IOW the only 
difference between the company tables is the data that each instance 
contains. 

I believe that if I use multiple schemas within one database it will be easier to 
retrieve data from other schemas.  But what if the data has to be on 
different computers (or multiple locations) - is it true I have to ensure all 
the schemas are in the same data location?  Or can the schemas be on 
different computers?  

Anyway, I'd like to hear the list's opinion as to the best way to design 
this structure.  


Also I believe the database/s will be run on Linux.

Thanks in advance,

Johnf

 






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function returning 2 columns evaluated twice when both columns are needed

2009-10-20 Thread Pavel Stehule
2009/10/21 Merlin Moncure :
> On Tue, Oct 20, 2009 at 5:12 PM, Pavel Stehule  
> wrote:
>>> [ shrug... ]  There are other possible reasons why the planner would
>>> fail to flatten a subquery, but none of them apply to the example you
>>> showed.  And your example function *was* VOLATILE, by default.
>>
>> I checked this on 8.5 and the function is evaluated more than once even
>> though it is immutable.
>>
>> postgres=# create or replace function foo(out a int, out b int)
>> returns record as $$
>> begin
>> raise notice 'start foo';
>> a := 10; b := 20;
>> return;
>> end;
>> $$ language plpgsql immutable;
>> CREATE FUNCTION
>>
>> postgres=# select (foo()).*;
>
> This is because select (func()).* is expanded to mean:
> select func(f1), func(f2) ... func(fn);
>
> This is a general issue with '*' because in postgres it means:
> 'evaluate me for each field of me', not 'return all fields of me'. I
> don't think our behavior in this regard is correct (afaict i'm in the
> minority though).
>

I understand this mechanism; this was only a correction to some previous
messages. This behaviour doesn't depend on the function's immutability or
volatility. But I agree with you that this is a real problem - it is
very silent. Maybe we could raise a warning, or we could move the
function call into a subselect,

like
SELECT (foo()).* to SELECT ( (SELECT foo()) ).*

Regards
Pavel Stehule

> merlin
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] multi-company design/structure ?

2009-10-20 Thread John
On Tuesday 20 October 2009 11:59:54 am Rich Shepard wrote:
> On Tue, 20 Oct 2009, John wrote:
> > I never even considered using the one database with an added company
> > field/column.  On the surface it sounds OK but I'm not too sure.  Each SQL
> > statement would require company_id.  H.
>
> Johnf,
>
> Why not take a look at the sql-ledger code? It's a series of perl
> scripts and open source. Perhaps that will give you some ideas because SL
> can be multi-company, multi-user, multi-currency, etc.
>
> Rich

thanks

Johnf

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function returning 2 columns evaluated twice when both columns are needed

2009-10-20 Thread Merlin Moncure
On Tue, Oct 20, 2009 at 5:12 PM, Pavel Stehule  wrote:
>> [ shrug... ]  There are other possible reasons why the planner would
>> fail to flatten a subquery, but none of them apply to the example you
>> showed.  And your example function *was* VOLATILE, by default.
>
> I checked this on 8.5 and the function is evaluated more than once even
> though it is immutable.
>
> postgres=# create or replace function foo(out a int, out b int)
> returns record as $$
> begin
> raise notice 'start foo';
> a := 10; b := 20;
> return;
> end;
> $$ language plpgsql immutable;
> CREATE FUNCTION
>
> postgres=# select (foo()).*;

This is because select (func()).* is expanded to mean:
select func(f1), func(f2) ... func(fn);

This is a general issue with '*' because in postgres it means:
'evaluate me for each field of me', not 'return all fields of me'. I
don't think our behavior in this regard is correct (afaict i'm in the
minority though).

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Function returning 2 columns evaluated twice when both columns are needed

2009-10-20 Thread Pavel Stehule
Hello

2009/10/19 Tom Lane :
> Gerhard Wiesinger  writes:
>> On Mon, 19 Oct 2009, Tom Lane wrote:
>>> Probably because you have the function declared VOLATILE.
>
>> None of the function is declared VOLATILE. Any other idea?
>
> [ shrug... ]  There are other possible reasons why the planner would
> fail to flatten a subquery, but none of them apply to the example you
> showed.  And your example function *was* VOLATILE, by default.

I checked this on 8.5 and the function is evaluated more than once even though it is immutable.

postgres=# create or replace function foo(out a int, out b int)
returns record as $$
begin
raise notice 'start foo';
a := 10; b := 20;
return;
end;
$$ language plpgsql immutable;
CREATE FUNCTION

postgres=# select (foo()).*;
NOTICE:  start foo
NOTICE:  start foo
 a  │ b
────┼────
 10 │ 20
(1 row)

I was surprised that the subselect is necessary, but the "offset" is optional:

postgres=# select (foo).* from (select foo()) f;
NOTICE:  start foo
 a  │ b
────┼────
 10 │ 20
(1 row)

postgres=# select (foo).* from (select foo() offset 0) f;
NOTICE:  start foo
 a  │ b
────┼────
 10 │ 20
(1 row)

regards
Pavel Stehule
>
>                        regards, tom lane
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cast numeric with scale and precision to numeric plain

2009-10-20 Thread Adrian Klaver

- "Sim Zacks"  wrote:

> >> According to the documentation, numeric is stored without any leading
> >> or trailing zeros.
> >
> > That says *stored*; it doesn't say *displayed*.
> >
> > regards, tom lane
> >
> If it displays them, it has to know they exist. That means it stores it
> somewhere.
>
> > The part of the above that you need to look at is where it says it does not
> > store 'any extra leading or trailing zeroes'. In your case you entered the
> > value with three trailing zeroes which are taken to be significant (see Toms
> > reply also). If you had inserted just 15 you would have gotten back 15.
>
> I guess that's a matter of interpretation. To me zeros after a decimal
> point without anything else after them are extra.

See below for an explanation of why the above statement is incorrect.
http://en.wikipedia.org/wiki/Significant_figures



Adrian Klaver
akla...@comcast.net


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cast numeric with scale and precision to numeric plain

2009-10-20 Thread Scott Marlowe
2009/10/20 Sim Zacks :
> I guess that's a matter of interpretation. To me zeros after a decimal
> point without anything else after them are extra.

From a mathematical perspective, they most certainly are not extra.

15.000
15.001
15.002
15.003

Each has three decimal places of precision, and each means something here.
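Python's `decimal` module illustrates the same point: trailing zeros are significant, so the scale is carried along with the value rather than stripped (an illustrative sketch, not tied to PostgreSQL's internals):

```python
from decimal import Decimal

a = Decimal('15.000')
b = Decimal('15')

print(a == b)                  # True: numerically equal
print(str(a))                  # 15.000: the scale is part of the value
print(a.as_tuple().exponent)   # -3: three decimal places are stored
```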

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL driver for Joomla review

2009-10-20 Thread Thom Brown
2009/10/20 Alban Hertroys :
> You're missing a keyword in your query in renameTable; it should be " RENAME TO ",
> not " TO ".

Thanks for spotting that.  I've made my amendments for next submit.

> Wouldn't it be convenient to have an EXPLAIN ANALYSE version of explain()?
> Maybe with a boolean parameter?

The problem is, I don't see where they're using this function, so if
they're using it to EXPLAIN an INSERT, UPDATE or DELETE statement,
ANALYZE would execute it... unless I roll it back straight after.

> In insertObject() you have a query "SELECT $keyName AS 'id' FROM $table'" -
> That line contains two syntax errors: 'id' (You probably meant "id") and
> $table' (spurious trailing quote).

Again, you're right and well spotted. :)  Fixed both of those.

Cheers for helping out Alban! :D

Thom

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] multi-company design/structure ?

2009-10-20 Thread Rich Shepard

On Tue, 20 Oct 2009, John wrote:


I never even considered using the one database with an added company
field/column.  On the surface it sounds OK but I'm not too sure.  Each SQL
statement would require company_id.  H.


Johnf,

   Why not take a look at the sql-ledger code? It's a series of perl scripts
and open source. Perhaps that will give you some ideas because SL can be
multi-company, multi-user, multi-currency, etc.

Rich

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] multi-company design/structure ?

2009-10-20 Thread John
On Tuesday 20 October 2009 10:11:53 am Wolfgang Keller wrote:
> > Is it better to create multi databases or create multi schemas?
>
> You're missing one option imho: One database, one schema.
>
> > I am writing a program that can have multiple instances, sort of like a
> > financial accounting system that can have multiple companies, where each
> > company has a different name but the tables are an exact match to each
> > other.
>
> Then you could simply add a "company-id" column to every table (and make
> this column part of the primary key of each table). And if you run multiple
> instances of the program within _one_ company, then you can also add an
> "instance-id" column (and make this column also part of the primary key of
> each table).
>
> Sincerely,
>
> Wolfgang

I never even considered using the one database with an added company 
field/column.  On the surface it sounds OK but I'm not too sure.  Each SQL 
statement would require company_id.  H.

Johnf



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL driver for Joomla review

2009-10-20 Thread Alban Hertroys

On 20 Oct 2009, at 14:02, Thom Brown wrote:


If anyone has a moment, could they review the PostgreSQL driver I
wrote for Joomla's next major release?  The developers at Joomla have
listened to the persistent noise created about only having MySQL as an
option and are now accepting submissions for alternative database
systems. (see 
http://groups.google.com/group/joomla-dev-cms/browse_thread/thread/1382dc6f4af56278#msg_9b95648941ef6fa7
for this development)

My submission can be found at:
http://joomlacode.org/gf/project/joomla/tracker/?action=TrackerItemEdit&tracker_item_id=18384

Things to watch out for are version-dependent features and clumsy 
implementations.



You're missing a keyword in your query in renameTable; it should be 
" RENAME TO ", not " TO ".


Wouldn't it be convenient to have an EXPLAIN ANALYSE version of explain 
()? Maybe with a boolean parameter?


In insertObject() you have a query "SELECT $keyName AS 'id' FROM  
$table'" - That line contains two syntax errors: 'id' (You probably  
meant "id") and $table' (spurious trailing quote).


Regards,
Alban Hertroys

--
Screwing up is the best way to attach something to the ceiling.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Count occurrences of pattern in string

2009-10-20 Thread Andreas Kretschmer
dario@libero.it  wrote:

> Hello,
> 
> I'm looking for a function to count the occurrences of a pattern in a 
> string. E.g. something like:
> 
> fun_count_pattern('fooXblaX', 'X')
> 
> which would 
> return 2 (I.e. pattern 'X' found 2 times in string 'fooXblaX').

How about:

test=*# select (length('fooXblaX') -
length(regexp_replace('fooXblaX','X','','g'))) / length('X');
 ?column?
----------
        2
(1 Zeile)
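The same length-difference trick translates directly to other languages; a minimal Python sketch (counting literal substrings rather than regex matches; the function name reuses the one from the question):

```python
def fun_count_pattern(s: str, pattern: str) -> int:
    # Same idea as the SQL above: strip every occurrence, then divide
    # the shrinkage by the pattern's length.
    return (len(s) - len(s.replace(pattern, ''))) // len(pattern)

print(fun_count_pattern('fooXblaX', 'X'))   # 2
print(fun_count_pattern('abXYcdXY', 'XY'))  # 2
```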



Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] multi-company design/structure ?

2009-10-20 Thread Wolfgang Keller
> Is it better to create multi databases or create multi schemas?

You're missing one option imho: One database, one schema.

> I am writing a program that can have multiple instances, sort of like a financial 
> accounting system that can have multiple companies, where each company has a 
> different name but the tables are an exact match to each other.

Then you could simply add a "company-id" column to every table (and make this 
column part of the primary key of each table). And if you run multiple 
instances of the program within _one_ company, then you can also add an 
"instance-id" column (and make this column also part of the primary key of each 
table).

Sincerely,

Wolfgang

-- 
NO "Courtesy Copies" PLEASE!

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Count occurrences of pattern in string

2009-10-20 Thread dario....@libero.it
Hello,

I'm looking for a function to count the occurrences of a pattern in a 
string. E.g. something like:

fun_count_pattern('fooXblaX', 'X')

which would 
return 2 (I.e. pattern 'X' found 2 times in string 'fooXblaX').

I could write 
my own function for this (probably using plpython?) but I was wondering whether 
there is some function or combination of functions that I could use 'off-the-
shelf'.

Thanks very much

All the best

Dario

PS: I'm using PostgreSQL 8.3 on 
Windows XP.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] OT: Re: [sfpug] 10/13 SFPUG meeting, "The Mighty GUCS," video now available

2009-10-20 Thread Gerhard Wiesinger

On Tue, 20 Oct 2009, Christophe Pettus wrote:



On Oct 20, 2009, at 8:32 AM, Viktor Rosenfeld wrote:

@Christophe, I enjoyed your talk very much, particularly because I
learned about pgfouine, which from the looks of it, will make my current
project vastly simpler.  So, thank you.


You should really thank Josh Berkus; he's the one who gave the talk.  I just 
run the camera. :)


Josh has the slides and sample files, and I'm sure he'll post them shortly. 
I know he's been busy with a client emergency and the PostgreSQL conference 
this weekend.




They are already available from:
http://www.pgexperts.com/presentations.html
http://www.pgexperts.com/document.html?id=34
http://www.pgexperts.com/document.html?id=36

Ciao,
Gerhard

--
http://www.wiesinger.com/


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] OT: Re: [sfpug] 10/13 SFPUG meeting, "The Mighty GUCS," video now available

2009-10-20 Thread Christophe Pettus


On Oct 20, 2009, at 8:32 AM, Viktor Rosenfeld wrote:

@Christophe, I enjoyed your talk very much, particularly because I
learned about pgfouine, which from the looks of it, will make my  
current

project vastly simpler.  So, thank you.


You should really thank Josh Berkus; he's the one who gave the talk.   
I just run the camera. :)


Josh has the slides and sample files, and I'm sure he'll post them  
shortly.  I know he's been busy with a client emergency and the  
PostgreSQL conference this weekend.


--
-- Christophe Pettus
   x...@thebuild.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sharedbuffers and WorkMem

2009-10-20 Thread Greg Smith

On Tue, 20 Oct 2009, Waldomiro wrote:

So, I put in sharedbuffers the same size my two biggest tables, which are the 
most read too. 11 GB + 4 GB. total shared buffers = 15 GB


That seems quite reasonable.  The rest of the unused RAM in the server is 
going to be used by the operating system cache, which works as a layer on 
top of shared_buffers.  There are a couple of problems with giving most of 
your RAM to the database directly.  The three most obvious ones are that 
it doesn't leave anything for other applications, the PostgreSQL 
shared_buffers design isn't optimized for really large amounts of RAM, and 
memory given to shared_buffers has to be involved in the database 
checkpoint computations--whereas OS cached buffers are not.



After that I put 100 MB to each expected user, 300 * 100 MB = 30 GB
So, I used only 30 GB + 15GB = 45 GB total RAM


Leaving 19GB for the operating system to cache things with.  It's not 
going to be unused.  That's quite reasonable, and more flexible as far as 
what the server can accomplish than had you given more memory to 
shared_buffers.  It might even perform better--there's been some evidence 
that shared_buffers starts to degrade going much beyond the 10GB range 
anyway.
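The sizing arithmetic in this exchange can be sketched in a few lines (illustrative only; note that work_mem is allocated per sort or hash operation, so a complex query can use several multiples of it per session):

```python
# Back-of-the-envelope check of the sizing above, using the thread's
# round decimal figures (1 GB = 1000 MB here, as the original post did).
total_ram_gb = 64
shared_buffers_gb = 11 + 4        # the two biggest, most-read tables
per_user_mb = 100                 # rough per-session work_mem budget
users = 300

work_mem_total_gb = users * per_user_mb / 1000
committed_gb = shared_buffers_gb + work_mem_total_gb
os_cache_gb = total_ram_gb - committed_gb

print(committed_gb, os_cache_gb)  # 45.0 19.0
```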


Your starting configuration seems fine to me, I would suggest getting your 
application running and measure actual memory use before tweaking anything 
further.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] OT: Re: [sfpug] 10/13 SFPUG meeting, "The Mighty GUCS," video now available

2009-10-20 Thread Viktor Rosenfeld
And there I thought I was considerate when I didn't post during the
week-end.

It's also curious that "some gratitude is in order" while the developers
owe us nothing.  (Not that I think they do, I just think those two
attitudes contradict each other.)

@Christophe, I enjoyed your talk very much, particularly because I
learned about pgfouine, which from the looks of it, will make my current
project vastly simpler.  So, thank you.

Cheers,
Viktor

David Fetter wrote:

> Folks,
> 
> Christophe provides his very specialized professional services /pro
> bono/.  Please stop treating him and those services as though they
> were something you were owed.  You are not.  The media will get posted
> when it gets posted, and some gratitude is in order.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] When much of a DB is large objects - PG 8.4

2009-10-20 Thread David Wall



the bytea limit is 1gb (as are all datums in postgres).
pg_largeobject can go up to 2gb, but in either case you'd likely run
into severe performance/scalability issues long before objects began
approaching those sizes because of memory usage and other issues.  With
100kb objects though, you should be all right.
  


Thanks, Merlin.  Yes, I am not worried so much about the size limits of 
a given field or row (not yet anyway). 

But I am concerned that all my data across all my tables really ends up 
in a single pg_largeobject table, which seems like it could be a 
bottleneck.  Since it's blobs and generally big, I figure repeated 
access to the table doesn't really even benefit from caching that much 
like if I had a small table that was being hit a lot.  I am worried 
about the overall table size of pg_largeobject as blobs are inserted in 
my various tables, but they only get an OID stored, whereas 
pg_largeobject gets all of the data across all of those tables.


I am concerned with backups/restores, crash recovery, partitioning 
options, etc. if most of my data is stored in the single pg_largeobject 
table.  Can it be partitioned?  How many blobs can it hold before it 
won't perform well?


And is there any performance gain if I had several pg_largeobject-like 
tables that I built myself using bytea types as it does?  I mean, does 
PG perform any better if my queries are across a series of tables all 
with their own byteas rather than using LOs?



libpq supports a binary protocol mode which allows you to execute
queries sending bytea without escaping.  (I'm not familiar with the
jdbc driver, but I'd imagine it should support it in some fashion).  l
would start researching there: find out if the jdbc driver supports
binary queries and use them if possible.  If you can't or won't be use
jdbc in this way, your options are to stick with large objects or try
and figure out another way to get data into the database.
  
Thanks.  I'll see what I can learn about bytea escaping in JDBC as I 
don't see anything obvious in its JDBC-specific PG docs.  Perhaps it no 
longer suffers from each byte being converted into escaped characters, 
which really balloons already big enough data for the transfer 
protocols, and that would be great.  Of course, it's all moot if there 
would not be any real performance gain to be had by having these objects 
stored across multiple tables rather than all being in pg_largeobject 
(most such data is not deleted, though it is often updated, so vacuumlo 
running daily generally isn't a problem for us).


David

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] different execution times of the same query

2009-10-20 Thread Viktor Rosenfeld
Hi,

I suppose the same is true for the time logged with log_duration, i.e.
it does not include the time needed to transfer the data to the client?

Or is it more like \timing in that respect?

Cheers,
Viktor

Craig Ringer wrote:

> EXPLAIN ANALYZE measures the time taken to execute the query. It doesn't
> include time required to transfer results to the client, possibly write
> them to disk, etc. It's really only for examining query plans as
> compared to actual execution of that plan.
> 
> If you're more interested in *total* query execution time, including
> planning, execution, and transfer of results, you should usually use
> psql's \timing command, possibly along with output redirection. eg:

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cast numeric with scale and precision to numeric plain

2009-10-20 Thread Sim Zacks

>> According to the documentation, numeric is stored without any leading
>> or trailing zeros.
>> 
>
> That says *stored*; it doesn't say *displayed*.
>
>   regards, tom lane
>   
If it displays them, it has to know they exist. That means it stores it
somewhere.
> The part of the above that you need to look at is where it says it does not 
> store 'any extra leading or trailing zeroes'. In your case you entered the 
> value with three trailing zeroes which are taken to be significant (see Toms 
> reply also). If you had inserted just 15 you would have gotten back 15.
>   
I guess that's a matter of interpretation. To me zeros after a decimal
point without anything else after them are extra.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sharedbuffers and WorkMem

2009-10-20 Thread Grzegorz Jaśkiewicz
On Tue, Oct 20, 2009 at 4:22 PM, Waldomiro  wrote:

> Hi everybody,
>
> I have some doubts about sharedmemory and work_mem.
>
> I have a server:
>
> 64 GB RAM
> 2 processors 4 cores each one intel xeon 3 Ghz
> expecting 300 users in the same time
>
> So, I put in sharedbuffers the same size my two biggest tables, which are
> the most read too. 11 GB + 4 GB. total shared buffers = 15 GB
>
> After that I put 100 MB to each expected user, 300 * 100 MB = 30 GB
>
> So, I used only 30 GB + 15GB = 45 GB total RAM
>
> My doubt is, can I use more RAM for shared buffers? about 30 GB (50% of
> total RAM)?
>
> What is the max shared buffers I can use? How can I calculate that?
>
what's the db version? what OS?
have you tried pgtune? (if db 8.3 or 8.4)



-- 
GJ


[GENERAL] Sharedbuffers and WorkMem

2009-10-20 Thread Waldomiro

Hi everybody,

I have some doubts about sharedmemory and work_mem.

I have a server:

64 GB RAM
2 processors 4 cores each one intel xeon 3 Ghz
expecting 300 users at the same time

So, I put in sharedbuffers the same size my two biggest tables, which 
are the most read too. 11 GB + 4 GB. total shared buffers = 15 GB


After that I put 100 MB to each expected user, 300 * 100 MB = 30 GB

So, I used only 30 GB + 15GB = 45 GB total RAM

My doubt is, can I use more RAM for shared buffers? about 30 GB (50% of 
total RAM)?


What is the max shared buffers I can use? How can I calculate that?

Thanks,

Waldomiro.
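The budgeting arithmetic in the question can be sketched as below. One caveat worth adding (this is an assumption about workload, not something stated in the thread): work_mem is allocated per sort/hash operation, so a single query can use it several times, which makes the per-user figure a lower bound rather than a cap.

```python
# Rough memory-budget sketch for the configuration described above.
# All figures in MB; the "3 sort/hash nodes per query" case is a
# hypothetical illustration, not a measured workload.
GB = 1024

shared_buffers = 15 * GB   # 11 GB + 4 GB for the two hot tables
connections = 300
work_mem = 100             # MB per sort/hash operation

# Optimistic case: one work_mem allocation per connection.
optimistic = shared_buffers + connections * work_mem
print(round(optimistic / GB, 1))   # 44.3 (GB) of 64 GB

# Less optimistic: a query with, say, 3 concurrent sort/hash nodes.
pessimistic = shared_buffers + connections * 3 * work_mem
print(round(pessimistic / GB, 1))  # 102.9 (GB): over-committed
```

The point of the second figure is that "300 users x 100 MB" is not a hard ceiling; a few complex queries can push the box into swap even though the simple sum fits in RAM.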



Re: [GENERAL] PANIC : right sibling's left-link doesn't match

2009-10-20 Thread Tom Lane
"Ow Mun Heng"  writes:
> [resend w/ plain text only - Sorry]
> right sibling's left-link doesn't match: block 121425 links to 124561
> instead of expected 121828 in index
> Oct 20 22:21:29 hmweb5 postgres[8795]: [3-2]  "d_trh_trr_water_eval_pkey"

> What does the above mean?

It means you've got a corrupted index.  REINDEX will probably fix it.
You should try to figure out what caused the problem though ...

regards, tom lane



Re: [GENERAL] Free Tool to design Postgres Databases

2009-10-20 Thread David Fetter
On Tue, Oct 20, 2009 at 07:56:48AM -0400, Martin Gainty wrote:
> 
> Wolfgang
> 
> I am compiling 8.4.1 and am seeking any/all architecture
> documents..right now the architecture (relationship of classes) i
> see is extracted from makefile dependencies An architecture document
> will detail the layout of the System Tables and their classes(sorry
> as neither C++/java is not the codebase this would be C functions)
> something like

Martin,

Mapping these low-level details to classes is exactly the wrong move,
for the same reason that mapping disk blocks to classes is.  It's
digging into implementation details (private methods) rather than
looking at the much higher level of abstraction that should really
concern you.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [GENERAL] Re: Re: [GENERAL] About could not connect to server: Connection timed out

2009-10-20 Thread Tom Lane
Craig Ringer  writes:
> Anyway, let me make sure I understand what you are saying. After you
> reboot the server, just after the PostgreSQL service has started up,
> there are several minutes where some (but not all) client connections
> tend to time out. After that initial problem period, things start to
> work properly again and the time-outs stop happening. You only have
> problems shortly after PostgreSQL (and usually the whole server) has
> been re-started.

Maybe there's some firewall filtering that takes a while to start up
fully?

What I'd try to determine is whether this behavior occurs when PG itself
is restarted without rebooting the whole machine.

regards, tom lane



Re: [GENERAL] cast numeric with scale and precision to numeric plain

2009-10-20 Thread Tom Lane
Sim Zacks  writes:
>>> I'm not sure offhand what is the easiest way to suppress trailing
>>> zeroes, but casting to plain numeric is not the route to a solution.

> According to the documentation, numeric is stored without any leading
> or trailing zeros.

That says *stored*; it doesn't say *displayed*.

regards, tom lane



[GENERAL] PANIC : right sibling's left-link doesn't match

2009-10-20 Thread Ow Mun Heng
[resend w/ plain text only - Sorry]

right sibling's left-link doesn't match: block 121425 links to 124561
instead of expected 121828 in index
Oct 20 22:21:29 hmweb5 postgres[8795]: [3-2]  "d_trh_trr_water_eval_pkey"


WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because
WARNING:  terminating connection because of crash of another server process

What does the above mean?

This is causing DB to have bad reactions.

Thanks.

Is there an index error? Should I drop and re-create the index?






[GENERAL] PANIC : right sibling's left-link doesn't match

2009-10-20 Thread Ow Mun Heng
right sibling's left-link doesn't match: block 121425 links to 124561
instead of expected 121828 in index

Oct 20 22:21:29 hmweb5 postgres[8795]: [3-2]  "d_trh_trr_water_eval_pkey"

WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the
current transaction and exit, because
WARNING:  terminating connection because of crash of another server process

What does the above mean?

This is causing DB to have bad reactions.


[GENERAL] Re: [sfpug] 10/13 SFPUG meeting, "The Mighty GUCS," video now available

2009-10-20 Thread David Fetter
On Tue, Oct 20, 2009 at 08:02:06AM +0200, Viktor Rosenfeld wrote:
> Hi,
> 
> I second the request for the files refered to in the video --
> particularly postgresql.conf.simple and dependencies.

Folks,

Christophe provides his very specialized professional services /pro
bono/.  Please stop treating him and those services as though they
were something you were owed.  You are not.  The media will get posted
when it gets posted, and some gratitude is in order.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [GENERAL] CSV files & empty strings

2009-10-20 Thread Raymond O'Donnell
On 20/10/2009 05:55, Nate Randall wrote:
> I am new to Postgres, having formerly used FileMaker Pro.  I need to
> import CSV files into Postgres which contain some empty string "" values
> in several date-type fields.  I am able to import the CSV files into
> Postgres with the COPY FROM command if the date fields are converted to
> text fields in Postgres.  However, I need some method of "converting" the
> empty string "" values into NULL values after import, so that I can
> change the date fields back to date-type.  Any suggestions on how this
> could be accomplished?

How about:

  update <table> set <datefield> = null where <datefield> = '';

?


Ray.
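The same idea in a quick, testable sketch, using an in-memory SQLite database as a stand-in for Postgres (the table and column names are made up for illustration):

```python
import sqlite3

# Stand-in demo: empty strings imported into a text column, then
# converted to NULL so the column could later be re-typed as a date.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, birth_date TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("alice", "1980-01-02"), ("bob", "")])

# The fix suggested above: turn empty strings into NULLs.
conn.execute("UPDATE people SET birth_date = NULL WHERE birth_date = ''")

rows = conn.execute(
    "SELECT name, birth_date FROM people ORDER BY name").fetchall()
print(rows)  # [('alice', '1980-01-02'), ('bob', None)]
```

After the UPDATE, an `ALTER TABLE ... ALTER COLUMN ... TYPE date` (in Postgres) no longer trips over the empty strings.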



[GENERAL] Re: [GENERAL] About could not connect to server: Connection timed out

2009-10-20 Thread 黄永卫
Thanks for your reply!
The server and the client connect to the same Cisco switch.
We have checked the switch, but no CRC errors occur.
This issue always occurs after we reboot the server, when the postgres service
has only just become ready for several minutes.
Is it possible that the server's performance causes the issue (the server is
too busy at that moment)?

Thank you!

Ray Huang

-----Original Message-----
From: Craig Ringer [mailto:cr...@postnewspapers.com.au]
Sent: Tuesday, 20 October 2009 12:34
To: 黄永卫
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] About could not connect to server: Connection timed out

On Tue, 2009-10-20 at 12:12 +0800, 黄永卫 wrote:
> 
> Oct 18 10:46:11 SUC02 postgres[10997]: [2-1] LOG:  unexpected EOF on
> client connection
> Oct 18 10:46:11 SUC02 postgres[15899]: [2-1] LOG:  could not receive
> data from client: Connection reset by peer

That looks a lot like lower-level networking issues. Is there any NAT
between client and server? What sort of network connects the two? Is
there any packet loss on the network? Is there any sort of firewall on
or between the client and server?

--
Craig Ringer




[GENERAL] CSV files & empty strings

2009-10-20 Thread Nate Randall
I am new to Postgres, having formerly used FileMaker Pro.  I need to import
CSV files into Postgres which contain some empty string "" values in several
date-type fields.  I am able to import the CSV files into Postgres with the
COPY FROM command if the date fields are converted to text fields in
Postgres.  However, I need some method of "converting" the empty string ""
values into NULL values after import, so that I can change the date fields
back to date-type.  Any suggestions on how this could be accomplished?


Thanks,

-Nate


Re: [GENERAL] could not open process token: error code 5

2009-10-20 Thread Andale

When I try to initialise a new database from the command prompt I get the
following message.

"C:\Program Files\PostgreSQL\8.2\bin>initdb -D d:\data\postgres\dbnew
The program "postgres" is needed by initdb but was not found in the
same directory as "C:\Program Files\PostgreSQL\8.2\bin/initdb".
Check your installation."

Please note that the slash is in the wrong direction in the error message
(before initdb) and the "postgres.exe" is there in the bin directory.

Even when I have renewed the installation the same happens.

Still no progress.

/Anders


Adrian Klaver wrote:
> 
> On Wednesday 14 October 2009 6:42:39 am Andale wrote:
>> Hi
>>
>> We have an Postgresql 8.2.5 installation on a Windows server 2003 that
>> have
>> worked perfectly for our Mediawiki until we tried to update to 8.4.
>> Before
>> the update we took a backup, stopped the service and took a copy of the
>> entire database catalog. We could not make the 8.4 (installed in a
>> different directory) work so we decided to go back to the initial
>> installation which remained intact.
>>
>> Then when we try to start the service it fails and we get the message
>> "could not open process token: error code 5" in the event viewer, nothing
>> else. Now after googling for some hours and days I am stuck. the Postgres
>> user are to start the service and so nothing is changed there either.
>> Even
>> though the database files were not changed, we have also copied the
>> entire
>> original database back. The installation has been done with the
>> "postgresql-8.2-int.msi" package and it has been reapplied with the
>> following command, "msiexec /i postgresql-8.2-int.msi REINSTALLMODE=vamus
>> REINSTALL=ALL /"
>>
>> Still no progress.
>>
>> What to do?
>>
>> /Anders
> 
> Have you tried getting rid of the data directory you copied back, doing an 
> initdb to create a new fresh data directory and the restoring from the
> backup?  
> Just to cover the case where you did not copy everything you needed to
> when you 
> made the original copy.
> 
> 
> 
> -- 
> Adrian Klaver
> akla...@comcast.net
> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/could-not-open-process-token%3A-error-code-5-tp25891332p25973438.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] could not open process token: error code 5

2009-10-20 Thread Andale

I have checked the items below but they are all correct including the
shared-memory setting.

/Anders




Richard Huxton wrote:
> 
> el dorado wrote:
>> Hello.
>> I had such an error but rather long ago. Unfortunately I don't remember
>> all the details but you could try to do the following:
>> - check if the directory 'data' has read/write rights for your OS account
>> (account under which you try to start postgres).
>> - check if your OS account has the right to log on as service
>> (Administrative Tools/Local Security Settings/User Rights Assignment)
>> - check in Computer Management/Local Users and Groups/Users if your OS
>> account is NOT the member of any group of users.
> 
> All good advice - if you're trying to copy a file-level backup into
> place you should check the permissions/ownership of the files after
> restoring them.
> 
> Also - I seem to remember you could get this if the shared-memory
> settings were too high in postgresql.conf
> 
> -- 
>   Richard Huxton
>   Archonet Ltd
> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/could-not-open-process-token%3A-error-code-5-tp25891332p25973423.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] PostgreSQL driver for Joomla review

2009-10-20 Thread Thom Brown
2009/10/20 Merlin Moncure :
> On Tue, Oct 20, 2009 at 8:02 AM, Thom Brown  wrote:
>> If anyone has a moment, could they review the PostgreSQL driver I
>> wrote for Joomla's next major release?  The developers at Joomla have
>> listened to the persistent noise created about only having MySQL as an
>> option and are now accepting submissions for alternative database
>> systems. (see 
>> http://groups.google.com/group/joomla-dev-cms/browse_thread/thread/1382dc6f4af56278#msg_9b95648941ef6fa7
>> for this development)
>>
>> My submission can be found at:
>> http://joomlacode.org/gf/project/joomla/tracker/?action=TrackerItemEdit&tracker_item_id=18384
>>
>> Things to watch out for are version-dependent features and clumsy
>> implementations.
>
> *) why bother with the version check in queryBatch?  why not just do
> begin->commit always?

To be honest, I hadn't touched that function.  That's entirely
inherited from the MySQL one, but you're right, for Postgres there's
no need to check the version.

> *) no ability to delete?

The driver has to match that of the MySQL one in Joomla, so I couldn't
implement functionality which wouldn't be used by every driver.  The
only thing I added which differed was a concatenation operator which
I put in for future implementation as I believe it will be necessary
if they're going to adapt the existing codebase to support multiple
database systems.

> *) looks like typo on line 713

Yes, that's just left over from me quickly trying to tidy up my
comments as I couldn't get the proper version from the site.  Please
ignore it as it isn't in my actual submission.

Thanks Merlin!

Thom



[GENERAL] multi-company design/structure ?

2009-10-20 Thread John
Hi,

Is it better to create multiple databases or multiple schemas?

I am writing a program that can have multiple instances, sort of like a
financial accounting system that can have multiple companies. Each company has
a different name, but the tables are an exact match to each other.  IOW the only
difference between the company tables is the data that each instance
contains.

I believe that if I use multiple schemas within one database it will be easier
to retrieve data from other schemas.  But what if the data has to be on
different computers (or multiple locations)?  Is it true that I have to ensure
all the schemas are in the same data location?  Or can the schemas be on
different computers?

Anyway I'd like to hear from the list opinion as to the best way to design 
this structure.  

Also I believe the database/s will be run on Linux.

Thanks in advance,

Johnf
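As a rough illustration of the one-database/many-schemas layout being asked about: the sketch below uses SQLite's ATTACH as a stand-in for Postgres schemas (in Postgres you would use CREATE SCHEMA plus search_path instead; the company and table names are hypothetical):

```python
import sqlite3

# Two identical "company" schemas inside one connection. In Postgres
# this would be CREATE SCHEMA company_a / company_b plus SET
# search_path; SQLite's ATTACH plays the same role for illustration.
conn = sqlite3.connect(":memory:")
for schema in ("company_a", "company_b"):
    conn.execute(f"ATTACH DATABASE ':memory:' AS {schema}")
    conn.execute(f"CREATE TABLE {schema}.invoices (id INTEGER, total REAL)")

conn.execute("INSERT INTO company_a.invoices VALUES (1, 100.0)")
conn.execute("INSERT INTO company_b.invoices VALUES (1, 250.0)")

# Cross-company queries are easy because everything lives in one
# database; with separate databases you would need dblink or similar.
totals = {s: conn.execute(f"SELECT SUM(total) FROM {s}.invoices").fetchone()[0]
          for s in ("company_a", "company_b")}
print(totals)  # {'company_a': 100.0, 'company_b': 250.0}
```

Note that schemas within one Postgres database do all live in the same cluster; spreading companies across machines would push you back toward separate databases.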

 




Re: [GENERAL] PostgreSQL driver for Joomla review

2009-10-20 Thread Merlin Moncure
On Tue, Oct 20, 2009 at 8:02 AM, Thom Brown  wrote:
> If anyone has a moment, could they review the PostgreSQL driver I
> wrote for Joomla's next major release?  The developers at Joomla have
> listened to the persistent noise created about only having MySQL as an
> option and are now accepting submissions for alternative database
> systems. (see 
> http://groups.google.com/group/joomla-dev-cms/browse_thread/thread/1382dc6f4af56278#msg_9b95648941ef6fa7
> for this development)
>
> My submission can be found at:
> http://joomlacode.org/gf/project/joomla/tracker/?action=TrackerItemEdit&tracker_item_id=18384
>
> Things to watch out for are version-dependent features and clumsy
> implementations.

*) why bother with the version check in queryBatch?  why not just do
begin->commit always?
*) no ability to delete?
*) looks like typo on line 713


keep up the good work...

merlin



Re: [GENERAL] PostgreSQL driver for Joomla review

2009-10-20 Thread Thom Brown
2009/10/20 Reid Thompson :
> your attachment contains this...
>
> 
>          "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd";>
> http://www.w3.org/1999/xhtml"; xml:lang="en" lang="en">
>  
>  403 - Forbidden
>  
>  
>  403 - Forbidden
>  
> 
>

Erk.. that's weird.  I got that too even after being logged in.  I'm
not sure how anyone can review it if no-one has access to it.

I've attached my working version which differs only slightly to
conform with coding-styles required by Joomla.

Apologies

Thom



Re: [GENERAL] cast numeric with scale and precision to numeric plain

2009-10-20 Thread Adrian Klaver
On Tuesday 20 October 2009 3:39:22 am Sim Zacks wrote:
> Numeric with scale precision always shows the trailing zeros.
> Numeric plain only shows numbers after the decimal point that are being
> used.
>
>
> That statement is false:
>
> regression=# select 1234.000::numeric;
>  numeric
> --
>  1234.000
> (1 row)
>
> I'm not sure offhand what is the easiest way to suppress trailing
> zeroes, but casting to plain numeric is not the route to a solution.
>
> Really this is a textual formatting problem.  You might find that the
> best bet is something with trim() or a regexp.  The trick would be
> not removing zeroes that are significant ...
>
> regards, tom lane
>
>  According to the documentation, numeric is stored without any leading or
> trailing zeros.
> http://www.postgresql.org/docs/current/static/datatype-numeric.html
>
> Numeric values are physically stored without any extra leading or trailing
> zeroes. Thus, the declared precision and scale of a column are maximums,
> not fixed allocations. (In this sense the numeric type is more akin to
> varchar(n) than to char(n).) The actual storage requirement is two bytes
> for each group of four decimal digits, plus eight bytes overhead. However,
> in practice:
>  create table test(f1 numeric);
>  insert into test(f1)values(15.000);
>  select * from test;
>  f1
>  ---
>  15.000

The part of the above that you need to look at is where it says it does not 
store 'any extra leading or trailing zeroes'. In your case you entered the 
value with three trailing zeroes which are taken to be significant (see Toms 
reply also). If you had inserted just 15 you would have gotten back 15.
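Python's decimal module behaves analogously and makes the stored-vs-displayed distinction easy to see (this is an analogy for the significance rule, not Postgres itself):

```python
from decimal import Decimal

# The digits entered, including trailing zeros, are kept...
d = Decimal("15.000")
print(d)                    # 15.000

# ...but they don't change the value:
print(d == Decimal("15"))   # True

# normalize() strips insignificant trailing zeros, much like the
# display-side cleanup discussed in this thread.
print(d.normalize())        # 15
```

In Postgres the display-side equivalent would be textual formatting (e.g. trim() or to_char with an FM mask), as Tom suggests above.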

-- 
Adrian Klaver
akla...@comcast.net



Re: [GENERAL] PostgreSQL driver for Joomla review

2009-10-20 Thread Thom Brown
2009/10/20 Csaba Nagy :
> Hi Thom,
>
> I would like to review it, but I get "403 - Forbidden" when clicking:
>
> http://downloads.joomlacode.org/trackeritem/4/5/0/45041/postgresql.php
>
> Not sure what that means, probably I need some kind of login to the
> joomla tracker system, and I don't have one, and I would prefer not to
> create one... is it possible to access that somehow without full access
> to the joomla tracker ?

Hi Csaba,

I didn't realise it wasn't publicly viewable.  I've attached a copy of
the php file anyhow.  Can you see the tracker page at all?

Thom



Re: [GENERAL] PostgreSQL driver for Joomla review

2009-10-20 Thread Csaba Nagy
Hi Thom,

I would like to review it, but I get "403 - Forbidden" when clicking:

http://downloads.joomlacode.org/trackeritem/4/5/0/45041/postgresql.php

Not sure what that means, probably I need some kind of login to the
joomla tracker system, and I don't have one, and I would prefer not to
create one... is it possible to access that somehow without full access
to the joomla tracker ?

Cheers,
Csaba.

On Tue, 2009-10-20 at 14:02 +0200, Thom Brown wrote:
> If anyone has a moment, could they review the PostgreSQL driver I
> wrote for Joomla's next major release?  The developers at Joomla have
> listened to the persistent noise created about only having MySQL as an
> option and are now accepting submissions for alternative database
> systems. (see 
> http://groups.google.com/group/joomla-dev-cms/browse_thread/thread/1382dc6f4af56278#msg_9b95648941ef6fa7
> for this development)
> 
> My submission can be found at:
> http://joomlacode.org/gf/project/joomla/tracker/?action=TrackerItemEdit&tracker_item_id=18384
> 
> Things to watch out for a version-dependent features and clumsy 
> implementations.
> 
> Thanks
> 
> Thom Brown
> 




[GENERAL] PostgreSQL driver for Joomla review

2009-10-20 Thread Thom Brown
If anyone has a moment, could they review the PostgreSQL driver I
wrote for Joomla's next major release?  The developers at Joomla have
listened to the persistent noise created about only having MySQL as an
option and are now accepting submissions for alternative database
systems. (see 
http://groups.google.com/group/joomla-dev-cms/browse_thread/thread/1382dc6f4af56278#msg_9b95648941ef6fa7
for this development)

My submission can be found at:
http://joomlacode.org/gf/project/joomla/tracker/?action=TrackerItemEdit&tracker_item_id=18384

Things to watch out for are version-dependent features and clumsy implementations.

Thanks

Thom Brown



Re: [GENERAL] Free Tool to design Postgres Databases

2009-10-20 Thread Martin Gainty

Wolfgang

I am compiling 8.4.1 and am seeking any/all architecture documents. Right now
the architecture (relationship of classes) I see is extracted from makefile
dependencies.
An architecture document would detail the layout of the system tables and their
classes (sorry; as neither C++ nor Java is the codebase, these would be C functions),
something like
Tablespace - Tablespace.c
   Extents and allocation Parameters in StorageParams.c
Tables   -  Tables.c
   Indexes and key assignments in Index.c
   Cursor assignments via cursors.c
 ReadCursor.c
 UpdateableCursor.c
   Column assignments on known datatypes
 Int.c
 VARCHAR.c
 Date.c
Roles -  Roles which will be assigned for group and User classes - 
Roles.c
   Users - individual Users which are assigned predefined Roles 
-Users.c
Packages   -  external packages which extend/implement Postgres functionality 
for specific Application Systems packages.c
   Postgis - A geographic Information System which enables one 
to provide Geometry and or WKT parameters which map to core Postgres tables - 
Postgis.c

I would appreciate any documentation which identifies core functions and maps
them to actual C file names (and to their library names).

*gruss*
Martin Gainty 
__ 
Waiver and confidentiality note

This message is confidential. If you are not the intended recipient, we kindly
ask to be notified. Any unauthorized forwarding or copying is not permitted.
This message serves only the exchange of information and has no legally binding
effect. Given how easily e-mails can be manipulated, we cannot accept liability
for their content.






> Date: Tue, 20 Oct 2009 12:27:06 +0200
> From: felip...@gmx.net
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Free Tool to design Postgres Databases
> 
> > I'am searching for a free tool to design a Postgre Database. There is
> > something similar to MySQL Workbench?
> 
> http://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools
> 
> I've tried out nearly every (affordable or free) ER tool that's available and 
> so far the one I like the most is DB-Main 
> (http://www.db-main.com/doc.php?id=to1&lang=2). Which is not mentioned on the 
> page above.
> 
> Sincerely,
> 
> Wolfgang
> 
> -- 
> NO "Courtesy Copies" PLEASE!
> 
  

Re: [GENERAL] [pgeu-general] PGDay.EU 2009 - approaching fast!

2009-10-20 Thread Thom Brown
2009/10/20 Dave Page :
> PGDay.EU 2009 is approaching fast - have you registered yet?
>
> Europe's premier PostgreSQL conference organised by PostgreSQL Europe
> and PostgreSQLfr will be held on November 6th and 7th at ParisTech
> Telecom in Paris, France. With an outstanding lineup of talks over the
> two days of the event, with tracks in English and French, this is the
> must-attend PostgreSQL conference this year!
>
> http://2009.pgday.eu/start
>
> Speakers will include well known community members and developers such
> as Simon Riggs, Gavin M. Roy, Gabriele Bartolini, Dimitri Fontaine,
> Joshua Drake and Guillaume Lelarge speaking on a wide range of topics.
> The full schedule can be seen at http://2009.pgday.eu/schedule
>
> If you are planning on attending, please register as soon as possible
> at http://2009.pgday.eu/register. Early registration will help us
> ensure you get a T-Shirt and conference goodies!
>
> Details of the venue and hotels in the local area can also be found on
> the conference website. If you have yet to book your accommodation, I
> would suggest doing so as soon as possible as Paris is quite busy at
> this time of year.

And there's special hotel rates at the Kyriad Hotel, in Porte d'Ivry
for those attending the conference.  See http://2009.pgday.eu/hotels
for details.

Thom



[GENERAL] PGDay.EU 2009 - approaching fast!

2009-10-20 Thread Dave Page
PGDay.EU 2009 is approaching fast - have you registered yet?

Europe's premier PostgreSQL conference organised by PostgreSQL Europe
and PostgreSQLfr will be held on November 6th and 7th at ParisTech
Telecom in Paris, France. With an outstanding lineup of talks over the
two days of the event, with tracks in English and French, this is the
must-attend PostgreSQL conference this year!

http://2009.pgday.eu/start

Speakers will include well known community members and developers such
as Simon Riggs, Gavin M. Roy, Gabriele Bartolini, Dimitri Fontaine,
Joshua Drake and Guillaume Lelarge speaking on a wide range of topics.
The full schedule can be seen at http://2009.pgday.eu/schedule

If you are planning on attending, please register as soon as possible
at http://2009.pgday.eu/register. Early registration will help us
ensure you get a T-Shirt and conference goodies!

Details of the venue and hotels in the local area can also be found on
the conference website. If you have yet to book your accommodation, I
would suggest doing so as soon as possible as Paris is quite busy at
this time of year.

See you in Paris!

-- 
Dave Page
PGDay.EU 2009 Conference: http://2009.pgday.eu/start



Re: [GENERAL] different execution times of the same query

2009-10-20 Thread Craig Ringer
On 20/10/2009 6:51 PM, Luca Ferrari wrote:
> On Tuesday 20 October 2009 10:44:13 am Scott Marlowe's cat walking on the 
> keyboard wrote:
>> Two things.  1: Actually running the query and receiving the results
>> isn't the same as just running it and throwing them away (what explain
>> analyze does) and 2: The query may be getting cached in psql if you're
>> running it more than once, but it may not run often enough on that
>> data set to get the same caching each time.
>>
> 
> 
> You are right, in fact executing:
> 
> psql -h localhost -U dataflex cogedb -c "SELECT *  FROM GMMOVART  WHERE DATA  
>> = '01/01/2006' AND DATA  <= '31/12/2006' ORDER BY DATA, CONTATORE, RIGA" -o 
> /dev/null
> 
> produces a log like the following:
> 
> cogedb LOG:  duration: 8841.152 ms  statement: SELECT *  FROM GMMOVART  WHERE 
> DATA  >= '01/01/2006' AND DATA  <= '31/12/2006' ORDER BY DATA, CONTATORE, RIGA
> 
> so 8.8 seconds against 7 seconds; now it sounds consistent. But I always
> trusted the time reported by EXPLAIN ANALYZE, and this makes me doubt it. So
> what does the EXPLAIN ANALYZE time actually measure?

EXPLAIN ANALYZE measures the time taken to execute the query. It doesn't
include time required to transfer results to the client, possibly write
them to disk, etc. It's really only for examining query plans as
compared to actual execution of that plan.

If you're more interested in *total* query execution time, including
planning, execution, and transfer of results, you should usually use
psql's \timing command, possibly along with output redirection. eg:

x=> explain analyze select * from customer;
                                  QUERY PLAN
------------------------------------------------------------------------------
 Seq Scan on customer  (cost=0.00..434.54 rows=12054 width=218) (actual time=0.008..3.941 rows=12054 loops=1)
 Total runtime: 6.752 ms
(2 rows)

x=> \timing
Timing is on.
x=> \o out.txt
x=> select * from customer;
Time: 135.571 ms
x=> \timing
Timing is off.
x=> \o
x=>


In both cases the contents of the customer table were cached, as I ran
"SELECT * FROM customer" before starting the test.


As you can see, EXPLAIN ANALYZE is reporting how long it took Pg to
execute the query. The psql \timing command reports how long the whole
process took, including psql reading the data from the postgresql server
and writing it out to the file on disk. Big difference!

--
Craig Ringer



Re: [GENERAL] OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery?

2009-10-20 Thread Craig Ringer
On 20/10/2009 4:41 PM, Scott Marlowe wrote:

>> I have a 4 disk Raid10 array running on linux MD raid.
>> Sda / sdb / sdc / sdd
>>
>> One fine day, 2 of the drives just suddenly decide to die on me. (sda and
>> sdd)
>>
>> I've tried multiple methods to try to determine if I can get them back
>> online

You made an exact image of each drive onto new, spare drives with `dd'
or a similar disk imaging tool before trying ANYTHING, right?

Otherwise, you may well have made things worse,  particularly since
you've tried to resync the array. Even if the data was recoverable
before, it might not be now.



How, exactly, have the drives failed? Are they totally dead, so that the
BIOS / disk controller don't even see them? Can the partition tables be
read? Does 'file -s /dev/sda' report any output? What's the output of:

smartctl -d ata -a /dev/sda

(repeat for sdd)

?



If the problem is just a few bad sectors, you can usually just
force-re-add the drives into the array and then copy the array contents
to another drive either at a low level (with dd_rescue) or at a file
system level.
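For Linux MD raid specifically, that recovery path looks roughly like the sketch below. Treat it as a hedged outline, not a recipe: `/dev/md0` and the spare devices `/dev/sde` and `/dev/sdf` are assumed names, and you should image every suspect drive before running anything against the array.

```shell
# Sketch only -- device and array names are assumptions; adjust to your system.

# 1. Image each suspect drive onto a spare first (GNU ddrescue skips and
#    logs bad sectors instead of aborting):
ddrescue /dev/sda /dev/sde sda-rescue.log

# 2. Stop the broken array, then force-assemble it from its members,
#    overriding the event-count mismatch left by the failure:
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# 3. If the array comes up (even degraded), copy the data off at once:
mount -o ro /dev/md0 /mnt/rescue        # file-system level, read-only
# ...or image the whole array: ddrescue /dev/md0 /dev/sdf md0-rescue.log
```

`--assemble --force` is the "force-re-add" step: it tells mdadm to accept members whose metadata says they dropped out, which is exactly what you want for a few bad sectors, and exactly what you don't want to run against your only copy of the data.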

If the problem is one or more totally fried drives, where the drive is
totally inaccessible or most of the data is hopelessly corrupt /
unreadable, then you're in a lot more trouble. RAID 10 effectively
stripes the data across the mirrored pairs, so if you lose a whole
mirrored pair you've lost half the stripes. It's not that different from
running paper through a shredder, discarding half the shreds, and lining
it all back up.


On a side note: I'm personally increasingly annoyed with the tendency of
RAID controllers (and s/w raid implementations) to treat disks with
unrepairable bad sectors as dead and fail them out of the array. That's
OK if you have a hot spare and no other drive fails during rebuild, but
it's just not good enough if failing that drive would result in the
array going into failed state. Rather than failing a drive and as a
result rendering the whole array unreadable in such situations, it
should mark the drive defective, set the array to read-only, and start
screaming for help. Way too much data gets murdered by RAID
implementations removing mildly faulty drives from already-degraded
arrays instead of just going read-only.

--
Craig Ringer



[GENERAL] Re: Reply: [GENERAL] About could not connect to server: Connection timed out

2009-10-20 Thread Craig Ringer
On 20/10/2009 3:01 PM, 黄永卫 wrote:
> Thanks for your reply!
> The server and the client connect to the same Cisco switch.

OK, so they're both on the same local network segment, with the same
subnet and IP address range, connected via a single Ethernet switch?
Guess it's probably not the network.

> This issue always occurs after we reboot the server, when the postgres
> service has only just become ready within the last several minutes.

Hang on. You reboot the server? Why?

Anyway, let me make sure I understand what you are saying. After you
reboot the server, just after the PostgreSQL service has started up,
there are several minutes where some (but not all) client connections
tend to time out. After that initial problem period, things start to
work properly again and the time-outs stop happening. You only have
problems shortly after PostgreSQL (and usually the whole server) has
been re-started.

Right?

If so: Could it just be that a rush of reconnecting clients as the
server comes up causes so much load that it can't process all the
requests before some clients give up? The server would then find, when
it got around to answering the client, that the client had since closed
the connection, which would result in the errors you see in the log.

Try keeping an eye on the number of connected clients, the server load,
and the server response time just after it starts up. I'll bet you'll
see disk I/O and/or CPU load max out and connections to other services
on the same server (say: remote desktop, ssh, file sharing, etc) are
also very slow or time out. Do *not* just ping the server; that'll
usually remain nearly instant no matter how overloaded the server is.

If the problem does turn out to be the server being overloaded: Perhaps
you should rate-limit client reconnection attempts? A typical technique
that's used is to have clients re-try connections after a random delay.
That way, rather than a "thundering herd" of clients all connecting at
once, they connect at random intervals over a short period after the
server comes back up, so the server only has to process a few connection
attempts at once. It's also often a good idea to have that random delay
start out quite short, and increase a bit over time.

A search for the "thundering herd problem" will tell you a bit more
about this, though not in PostgreSQL specific terms.
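The retry-after-a-random-delay scheme can be sketched in shell; `try_connect` is a hypothetical stand-in for the client's real connection attempt (a `pg_isready` probe, a driver connect call, and so on), simulated here as a server that only answers from the third attempt on:

```shell
#!/bin/bash
# Reconnect with exponential backoff plus random jitter, so a crowd of
# restarting clients spreads its connection attempts out over time.

try_connect() {
    # Stand-in for the real attempt (e.g. pg_isready -h "$PGHOST");
    # here we simulate a server that only answers from attempt 3 on.
    [ "$1" -ge 3 ]
}

max_delay=1   # jitter window doubles each round: 1, 2, 4, ... seconds
attempt=1
while ! try_connect "$attempt"; do
    jitter=$(( RANDOM % max_delay + 1 ))   # 1..max_delay seconds
    echo "attempt $attempt failed; retrying in ${jitter}s"
    sleep "$jitter"
    max_delay=$(( max_delay * 2 ))
    attempt=$(( attempt + 1 ))
done
echo "connected on attempt $attempt"
```

Doubling `max_delay` each round gives the "start out quite short, and increase a bit over time" behaviour; the random jitter is what keeps the herd spread out.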

> Is it possible that the server's performance causes the issue (the
> server is too busy at that moment)?

Highly likely given the additional information you've now provided.

--
Craig Ringer



Re: [GENERAL] different execution times of the same query

2009-10-20 Thread Luca Ferrari
On Tuesday 20 October 2009 10:44:13 am Scott Marlowe's cat walking on the 
keyboard wrote:
> Two things.  1: Actually running the query and receiving the results
> isn't the same as just running it and throwing them away (what explain
> analyze does) and 2: The query may be getting cached in psql if you're
> running it more than once, but it may not run often enough on that
> data set to get the same caching each time.
> 


You are right, in fact executing:

psql -h localhost -U dataflex cogedb -c "SELECT *  FROM GMMOVART  WHERE DATA  
>= '01/01/2006' AND DATA  <= '31/12/2006' ORDER BY DATA, CONTATORE, RIGA" -o 
/dev/null

produces a log like the following:

cogedb LOG:  duration: 8841.152 ms  statement: SELECT *  FROM GMMOVART  WHERE 
DATA  >= '01/01/2006' AND DATA  <= '31/12/2006' ORDER BY DATA, CONTATORE, RIGA

so 8.8 seconds against 7 seconds, which now sounds consistent. But I had always
trusted the time reported by explain analyze, and this makes me doubt it. So how
should the explain analyze time be understood?

Luca



Re: [GENERAL] Free Tool to design Postgres Databases

2009-10-20 Thread Grzegorz Jaśkiewicz
2009/10/20 Grzegorz Jaśkiewicz 

>
>
> On Tue, Oct 20, 2009 at 11:27 AM, Wolfgang Keller wrote:
>
>> > I'm searching for a free tool to design a PostgreSQL database. Is there
>> > something similar to MySQL Workbench?
>>
>> http://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools
>>
>> I've tried out nearly every (affordable or free) ER tool that's available
>> and so far the one I like the most is DB-Main (
>> http://www.db-main.com/doc.php?id=to1&lang=2). Which is not mentioned on
>> the page above.
>>
> which, btw, is a nice program, written in Java, but with no Linux/Mac OS X
> version/package available. What a shame.
>
>
Correction: it's a C++ program. But it doesn't seem to work with Wine here.
A shame.

(I mistook it for some other program.)



-- 
GJ


Re: [GENERAL] cast numeric with scale and precision to numeric plain

2009-10-20 Thread Sim Zacks

>> Numeric with scale precision always shows the trailing zeros.
>> Numeric plain only shows numbers after the decimal point that are
>> being used.
>
> That statement is false:
>
> regression=# select 1234.000::numeric;
>  numeric
> ----------
>  1234.000
> (1 row)
>
> I'm not sure offhand what is the easiest way to suppress trailing
> zeroes, but casting to plain numeric is not the route to a solution.
>
> Really this is a textual formatting problem.  You might find that the
> best bet is something with trim() or a regexp.  The trick would be
> not removing zeroes that are significant ...
>
> 			regards, tom lane

According to the documentation, numeric is stored without any leading
or trailing zeros:
http://www.postgresql.org/docs/current/static/datatype-numeric.html

"Numeric values are physically stored without any extra leading or
trailing zeroes. Thus, the declared precision and scale of a column are
maximums, not fixed allocations. (In this sense the numeric type is
more akin to varchar(n) than to char(n).) The actual storage
requirement is two bytes for each group of four decimal digits, plus
eight bytes overhead."

However, in practice:

create table test(f1 numeric);
insert into test(f1) values (15.000);
select * from test;
   f1
--------
 15.000
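A sketch of the trim()/regexp idea Tom suggests, using sed so it can be tried outside the database (in PostgreSQL the same two patterns could be applied with regexp_replace() on the value cast to text). The trick, as he says, is to strip zeroes only when they follow a decimal point:

```shell
# Strip trailing zeroes after a decimal point, but leave integer
# trailing zeroes (which are significant) untouched.
strip_zeros() {
    # Pattern 1: drop zeroes after the last non-zero decimal digit.
    # Pattern 2: drop a bare ".000..." tail, decimal point included.
    printf '%s\n' "$1" | sed -E 's/(\.[0-9]*[1-9])0+$/\1/; s/\.0+$//'
}

strip_zeros 15.000     # -> 15
strip_zeros 1234.500   # -> 1234.5
strip_zeros 10.010     # -> 10.01
strip_zeros 1500       # -> 1500 (unchanged: those zeroes are significant)
```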






Re: [GENERAL] Free Tool to design Postgres Databases

2009-10-20 Thread Grzegorz Jaśkiewicz
On Tue, Oct 20, 2009 at 11:27 AM, Wolfgang Keller  wrote:

> > I'm searching for a free tool to design a PostgreSQL database. Is there
> > something similar to MySQL Workbench?
>
> http://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools
>
> I've tried out nearly every (affordable or free) ER tool that's available
> and so far the one I like the most is DB-Main (
> http://www.db-main.com/doc.php?id=to1&lang=2). Which is not mentioned on
> the page above.
>
which, btw, is a nice program, written in Java, but with no Linux/Mac OS X
version/package available. What a shame.


-- 
GJ


Re: [GENERAL] Free Tool to design Postgres Databases

2009-10-20 Thread Wolfgang Keller
> I'm searching for a free tool to design a PostgreSQL database. Is there
> something similar to MySQL Workbench?

http://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools

I've tried out nearly every (affordable or free) ER tool that's available, and
so far the one I like the most is DB-Main
(http://www.db-main.com/doc.php?id=to1&lang=2), which is not mentioned on the
page above.

Sincerely,

Wolfgang

-- 
NO "Courtesy Copies" PLEASE!



Re: [GENERAL] Free Tool to design Postgres Databases

2009-10-20 Thread Mirko Sertic
Hi

You can also try Mogwai ERDesignerNG. Free and Open Source.

Regards
Mirko

> -----Original Message-----
> From: "Peter Hunsberger" 
> Sent: 19.10.09 22:52:50
> To: Andre Lopes 
> CC: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Free Tool to design Postgres Databases


> On Mon, Oct 19, 2009 at 2:18 PM, Andre Lopes  wrote:
> > Hi,
> >
> > I'm searching for a free tool to design a PostgreSQL database. Is there
> > something similar to MySQL Workbench?
> 
> Search the archives; this came up within the last couple of months. I
> currently use Power Architect. It's a beta product and still fairly
> buggy, but it works.
> 
> -- 
> Peter Hunsberger
> 





Re: [GENERAL] different execution times of the same query

2009-10-20 Thread Scott Marlowe
On Tue, Oct 20, 2009 at 2:34 AM, Luca Ferrari  wrote:
> Hi all,
> I'm testing a proprietary driver that connects my old applications to a
> postgresql database. The problem is that I've got very strange issues about
> execution times. For instance, the following query:
>
> cogedb=> explain analyze SELECT *  FROM GMMOVART  WHERE DATA  >= '01/01/2006'
> AND DATA  <= '31/12/2006' ORDER BY DATA, CONTATORE, RIGA;
>                                                                   QUERY PLAN
> 
>  Sort  (cost=152440.12..152937.79 rows=199069 width=340) (actual
> time=1734.550..1827.006 rows=214730 loops=1)
>   Sort Key: data, contatore, riga
>   ->  Bitmap Heap Scan on gmmovart  (cost=6425.18..134919.15 rows=199069
> width=340) (actual time=135.161..721.679 rows=214730 loops=1)
>         Recheck Cond: ((data >= '2006-01-01'::date) AND (data <=
> '2006-12-31'::date))
>         ->  Bitmap Index Scan on gmmovart_index03  (cost=0.00..6375.42
> rows=199069 width=0) (actual time=128.400..128.400 rows=214730 loops=1)
>               Index Cond: ((data >= '2006-01-01'::date) AND (data <=
> '2006-12-31'::date))
>  Total runtime: 1893.026 ms
> (7 rows)
>
>
> Executes in 1,8 seconds. Now, the same query launched thru the driver produces
> a log with the following entry:
>
> cogedb LOG:  duration: 5265.103 ms  statement:  SELECT *  FROM GMMOVART  WHERE
> DATA  >= '01/01/2006' AND DATA  <= '31/12/2006' ORDER BY DATA, CONTATORE, RIGA
>
> with a duration of 5,2 seconds, that is 3+ times longer than the query run in
> the psql prompt! Please note that the query is always executed locally.

Two things.  1: Actually running the query and receiving the results
isn't the same as just running it and throwing them away (what explain
analyze does) and 2: The query may be getting cached in psql if you're
running it more than once, but it may not run often enough on that
data set to get the same caching each time.



Re: [GENERAL] OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery?

2009-10-20 Thread Scott Marlowe
On Tue, Oct 20, 2009 at 1:11 AM, Ow Mun Heng  wrote:
> Sorry guys, I know this is very off-track for this list, but google hasn't
> been of much help. This is my raid array on which my PG data resides.
>
> I have a 4 disk Raid10 array running on linux MD raid.
> Sda / sdb / sdc / sdd
>
> One fine day, 2 of the drives just suddenly decide to die on me. (sda and
> sdd)
>
> I've tried multiple methods to try to determine if I can get them back
> online.
>
> 1) replace sda w/ fresh drive and resync - Failed
> 2) replace sdd w/ fresh drive and resync - Failed
> 3) replace sda w/ fresh drive but keeping existing sdd and resync - Failed
> 4) replace sdd w/ fresh drive but keeping existing sda and resync - Failed
>
>
> Raid10 is supposed to be able to withstand up to 2 drive failures if the
> failures are from different sides of the mirror.
>
> Right now, I'm not sure which drive belongs to which. How do I determine
> that? Does it depend on the output of /proc/mdstat and in that order?

Is this software raid in linux?  What does

cat /proc/mdstat

say?



[GENERAL] different execution times of the same query

2009-10-20 Thread Luca Ferrari
Hi all,
I'm testing a proprietary driver that connects my old applications to a
PostgreSQL database. The problem is that I'm seeing very strange
execution times. For instance, the following query:

cogedb=> explain analyze SELECT *  FROM GMMOVART  WHERE DATA  >= '01/01/2006' 
AND DATA  <= '31/12/2006' ORDER BY DATA, CONTATORE, RIGA;
                                                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=152440.12..152937.79 rows=199069 width=340) (actual time=1734.550..1827.006 rows=214730 loops=1)
   Sort Key: data, contatore, riga
   ->  Bitmap Heap Scan on gmmovart  (cost=6425.18..134919.15 rows=199069 width=340) (actual time=135.161..721.679 rows=214730 loops=1)
         Recheck Cond: ((data >= '2006-01-01'::date) AND (data <= '2006-12-31'::date))
         ->  Bitmap Index Scan on gmmovart_index03  (cost=0.00..6375.42 rows=199069 width=0) (actual time=128.400..128.400 rows=214730 loops=1)
               Index Cond: ((data >= '2006-01-01'::date) AND (data <= '2006-12-31'::date))
 Total runtime: 1893.026 ms
(7 rows)


It executes in 1.8 seconds. Now, the same query launched through the driver
produces a log entry like the following:

cogedb LOG:  duration: 5265.103 ms  statement:  SELECT *  FROM GMMOVART  WHERE 
DATA  >= '01/01/2006' AND DATA  <= '31/12/2006' ORDER BY DATA, CONTATORE, RIGA

with a duration of 5.2 seconds, that is, 3+ times longer than the query run at
the psql prompt! Please note that the query is always executed locally.
Now, I don't have access to the driver internals, so I don't know how it works
or what the time difference is due to. Is there something I can do on my
PostgreSQL server to investigate further, or to tune it to shrink execution
times?
I suspect that the driver uses a cursor; could a "pause" between consecutive
fetches produce such a time difference? Please note that I've tested different
queries with similar results, even across database restarts (in order to avoid
result caching).

Any help is appreciated.

cogedb=> select * from version();
                                version
------------------------------------------------------------------------
 PostgreSQL 8.2.9 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.3
20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)
(1 row)


Thanks,
Luca

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery?

2009-10-20 Thread Ow Mun Heng
Sorry guys, I know this is very off-track for this list, but google hasn't
been of much help. This is my raid array on which my PG data resides.

I have a 4 disk Raid10 array running on linux MD raid. 
sda / sdb / sdc / sdd

One fine day, 2 of the drives just suddenly decide to die on me. (sda and
sdd)

I've tried multiple methods to try to determine if I can get them back
online.

1) replace sda w/ fresh drive and resync - Failed
2) replace sdd w/ fresh drive and resync - Failed
3) replace sda w/ fresh drive but keeping existing sdd and resync - Failed
4) replace sdd w/ fresh drive but keeping existing sda and resync - Failed


Raid10 is supposed to be able to withstand up to 2 drive failures if the
failures are from different sides of the mirror.

Right now, I'm not sure which drive belongs to which mirror pair. How do I
determine that? Does it correspond to the order shown in /proc/mdstat?

Thanks

