Re: [GENERAL] Table Rewrite During ALTER TABLE ... ADD COLUMN ... DEFAULT NULL

2014-04-02 Thread Amit Langote
On Thu, Apr 3, 2014 at 1:19 PM, Tom Lane  wrote:
> Amit Langote  writes:
>> On Thu, Apr 3, 2014 at 12:54 PM, Tom Lane  wrote:
>>> Some experimentation suggests that we are smart about "DEFAULT NULL"
>>> unless the column type requires a length-coercion cast, in which
>>> case the default expression involves a function call, and that doesn't
>>> get elided.
>
>> Is there a warning about such behavior in the manual?
>> Is it useful to include it somewhere (not sure where though)?
>
> We could just rephrase the ALTER TABLE docs to say that the table
> rewrite is avoided if you omit the DEFAULT clause, rather than
> saying that a null default works.
>

How does the attached sound?
I wonder if a rewrite warning is necessary?

--
Amit


alter-table-doc-fix.patch
Description: Binary data



Re: [GENERAL] Table Rewrite During ALTER TABLE ... ADD COLUMN ... DEFAULT NULL

2014-04-02 Thread Amit Langote
On Thu, Apr 3, 2014 at 1:19 PM, Tom Lane  wrote:
> Amit Langote  writes:
>> On Thu, Apr 3, 2014 at 12:54 PM, Tom Lane  wrote:
>>> Some experimentation suggests that we are smart about "DEFAULT NULL"
>>> unless the column type requires a length-coercion cast, in which
>>> case the default expression involves a function call, and that doesn't
>>> get elided.
>
>> Is there a warning about such behavior in the manual?
>> Is it useful to include it somewhere (not sure where though)?
>
> We could just rephrase the ALTER TABLE docs to say that the table
> rewrite is avoided if you omit the DEFAULT clause, rather than
> saying that a null default works.
>

Agreed.

--
Amit




Re: [GENERAL] Table Rewrite During ALTER TABLE ... ADD COLUMN ... DEFAULT NULL

2014-04-02 Thread Tom Lane
Amit Langote  writes:
> On Thu, Apr 3, 2014 at 12:54 PM, Tom Lane  wrote:
>> Some experimentation suggests that we are smart about "DEFAULT NULL"
>> unless the column type requires a length-coercion cast, in which
>> case the default expression involves a function call, and that doesn't
>> get elided.

> Is there a warning about such behavior in the manual?
> Is it useful to include it somewhere (not sure where though)?

We could just rephrase the ALTER TABLE docs to say that the table
rewrite is avoided if you omit the DEFAULT clause, rather than
saying that a null default works.

regards, tom lane




Re: [GENERAL] Table Rewrite During ALTER TABLE ... ADD COLUMN ... DEFAULT NULL

2014-04-02 Thread Amit Langote
On Thu, Apr 3, 2014 at 12:54 PM, Tom Lane  wrote:
> Amit Langote  writes:
>> When I do the following:
>
>> ALTER TABLE table ADD COLUMN col numeric(x) DEFAULT NULL;
>
>> The table is rewritten, whereas the Notes section of the manual page for
>> ALTER TABLE says otherwise (which holds true in most cases,
>> though).
>
> Try it without the explicit DEFAULT clause.
>

Thanks, that does the trick.

> Some experimentation suggests that we are smart about "DEFAULT NULL"
> unless the column type requires a length-coercion cast, in which
> case the default expression involves a function call, and that doesn't
> get elided.
>

Is there a warning about such behavior in the manual?
Is it useful to include it somewhere (not sure where though)?

--
Amit




Re: [GENERAL] Table Rewrite During ALTER TABLE ... ADD COLUMN ... DEFAULT NULL

2014-04-02 Thread Tom Lane
Amit Langote  writes:
> When I do the following:

> ALTER TABLE table ADD COLUMN col numeric(x) DEFAULT NULL;

> The table is rewritten, whereas the Notes section of the manual page for
> ALTER TABLE says otherwise (which holds true in most cases,
> though).

Try it without the explicit DEFAULT clause.

Some experimentation suggests that we are smart about "DEFAULT NULL"
unless the column type requires a length-coercion cast, in which
case the default expression involves a function call, and that doesn't
get elided.

regards, tom lane
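
For reference, a minimal psql sketch (hypothetical table; illustrative of
the workaround): omit the DEFAULT clause and the relfilenode stays the
same, i.e. no table rewrite.

    postgres=# create table t as select generate_series(1,100) as a;
    SELECT 100
    postgres=# alter table t add column b numeric(2);  -- no DEFAULT clause
    ALTER TABLE
    postgres=# select relfilenode from pg_class where relname = 't';
    -- unchanged from before the ALTER: the table was not rewritten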




[GENERAL] Table Rewrite During ALTER TABLE ... ADD COLUMN ... DEFAULT NULL

2014-04-02 Thread Amit Langote
Hi,

When I do the following:

ALTER TABLE table ADD COLUMN col numeric(x) DEFAULT NULL;

The table is rewritten, whereas the Notes section of the manual page for
ALTER TABLE says otherwise (which holds true in most cases,
though).

http://www.postgresql.org/docs/devel/static/sql-altertable.html

As an example,

postgres=# create table test as select generate_series(1,100) as a;
SELECT 100

postgres=# select oid, relname, relfilenode from pg_class where
relname = 'test';
  oid  | relname | relfilenode
-------+---------+-------------
 16709 | test    |       16709
(1 row)

postgres=# alter table test add column b numeric(2) DEFAULT NULL;
ALTER TABLE

-- rewritten
postgres=# select oid, relname, relfilenode from pg_class where
relname = 'test';
  oid  | relname | relfilenode
-------+---------+-------------
 16709 | test    |       16713
(1 row)

postgres=# alter table test add column c int DEFAULT NULL;
ALTER TABLE

-- not rewritten
postgres=# select oid, relname, relfilenode from pg_class where
relname = 'test';
  oid  | relname | relfilenode
-------+---------+-------------
 16709 | test    |       16713
(1 row)

postgres=# alter table test add column d char(5) DEFAULT NULL;
ALTER TABLE

-- rewritten, again
postgres=# select oid, relname, relfilenode from pg_class where
relname = 'test';
  oid  | relname | relfilenode
-------+---------+-------------
 16709 | test    |       16717
(1 row)


So this happens when the type of the new column has a type modifier,
as numeric(x), char(x), etc. do.

Is this intentional and/or documented somewhere else? If not, should
it be documented?

--
Amit




Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Adrian Klaver

On 04/02/2014 05:30 PM, Rob Sargent wrote:

On 04/02/2014 06:06 PM, Adrian Klaver wrote:

On 04/02/2014 02:27 PM, Rob Sargent wrote:

On 04/02/2014 03:11 PM, Adrian Klaver wrote:

On 04/02/2014 02:04 PM, Rob Sargent wrote:

On 04/02/2014 02:36 PM, Adrian Klaver wrote:

On 04/02/2014 01:14 PM, Rob Sargent wrote:

On 04/02/2014 01:56 PM, Steve Atkins wrote:




Have you tried moving the input file to the same disk as the server,
to factor out the controller?


I labour under the delusion that it is through the controller one reads
and writes and that there might be some slight advantage to not doing
both against one drive if avoidable. Wrong again?


Well there is one way to find out:)

Might try with something less than the whole file to come up with an
approximate rows/sec rate.






Well things slow down over time, and lots of "too frequent"s:

Have done 500 batches in 24219 ms
Have done 1000 batches in 52362 ms
Have done 1500 batches in 82256 ms
Have done 2000 batches in 113754 ms
Have done 2500 batches in 149637 ms
Have done 3000 batches in 211314 ms
Have done 3500 batches in 301989 ms
Have done 4000 batches in 430817 ms
Have done 4500 batches in 596043 ms
Have done 5000 batches in 804250 ms

where a batch is 500,000 lines.  This is on the Java side, of course.


Is this drive to drive, or on a single drive?









--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] SSD Drives

2014-04-02 Thread David Boreham


While I have two friends who work at FusionIO, and have great confidence 
in their products, we like to deploy more conventional SATA SSDs at 
present in our servers. We have been running various versions of Intel's 
enterprise and data center SSDs in production for several years now and 
couldn't be happier with their performance. The oldest in service at 
present are 710 series that have been subjected to a ~500wtps PG load 
7*24 for the past 28 months. They still show zero wearout indication in 
the SMART stats.


As others have mentioned, power-fail protection (supercap) is the thing 
to look for, and also some sort of concrete specification for drive 
write endurance unless you have made a deliberate decision to trade off 
endurance vs. cost in the context of your deployment.









Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Rob Sargent

On 04/02/2014 06:06 PM, Adrian Klaver wrote:

On 04/02/2014 02:27 PM, Rob Sargent wrote:

On 04/02/2014 03:11 PM, Adrian Klaver wrote:

On 04/02/2014 02:04 PM, Rob Sargent wrote:

On 04/02/2014 02:36 PM, Adrian Klaver wrote:

On 04/02/2014 01:14 PM, Rob Sargent wrote:

On 04/02/2014 01:56 PM, Steve Atkins wrote:




Have you tried moving the input file to the same disk as the server,
to factor out the controller?


I labour under the delusion that it is through the controller one reads
and writes and that there might be some slight advantage to not doing
both against one drive if avoidable. Wrong again?


Well there is one way to find out:)

Might try with something less than the whole file to come up with an
approximate rows/sec rate.







Well things slow down over time, and lots of "too frequent"s:

   Have done 500 batches in 24219 ms
   Have done 1000 batches in 52362 ms
   Have done 1500 batches in 82256 ms
   Have done 2000 batches in 113754 ms
   Have done 2500 batches in 149637 ms
   Have done 3000 batches in 211314 ms
   Have done 3500 batches in 301989 ms
   Have done 4000 batches in 430817 ms
   Have done 4500 batches in 596043 ms
   Have done 5000 batches in 804250 ms

where a batch is 500,000 lines.  This is on the Java side, of course.






[GENERAL] table insert/primary key question

2014-04-02 Thread Scot Kreienkamp
Hey everyone,

I have a table like so:

Receiptlimitid: BIGINT (Primary Key)
Profitcenterid: BIGINT
Receiptnumber: INTEGER

All are set to Not Null also.


My question is, if I have an insert that goes idle in transaction for a while 
before it commits, will it stop all other inserts from happening on the table?  
If so, is that because of the possible violation of the primary key by the 
following pending inserts?  Just trying to confirm the behavior I'm seeing and 
make sure I understand what's going on.

Thanks!
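
A minimal two-session sketch (table and values hypothetical) of the
behavior being asked about: an uncommitted INSERT blocks only those later
inserts that would conflict with it on a primary-key or unique value;
unrelated inserts proceed.

    -- session 1
    BEGIN;
    INSERT INTO receipts (receiptlimitid, profitcenterid, receiptnumber)
    VALUES (1, 10, 100);
    -- ...connection left idle in transaction...

    -- session 2
    INSERT INTO receipts VALUES (2, 10, 101);  -- proceeds immediately
    INSERT INTO receipts VALUES (1, 10, 102);  -- blocks until session 1
                                               -- commits or rolls back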


Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Incorporated ®
1284 North Telegraph Road | Monroe, Michigan 48162 | Phone: 734-384-6403 |
Email: skre...@la-z-boy.com
www.la-z-boy.com | facebook.com/lazboy | twitter.com/lazboy | youtube.com/lazboy






Re: [GENERAL] SSD Drives

2014-04-02 Thread Guy Rouillier
We used 4x OCZ Deneva 2 in a RAID configuration.  Worked well for us for 
over 2 years with no hardware issues.  We switched to SSD because we had 
a very write-intensive application (30 million rows/day) that spinning 
disks just couldn't keep up with.


On 4/2/2014 6:09 PM, Shaun Thomas wrote:

On 04/02/2014 04:55 PM, Bret Stern wrote:


Care to share the SSD hardware you're using?


We use these:

http://www.fusionio.com/products/iodrive2/

The older versions of these cards can read faster than a RAID-10 of
80x15k RPM SAS drives, based on our tests from a couple years ago. Writes
aren't *quite* as fast, but still much better than even a large RAID array.

They ain't cheap, though. You can expect to pay around $15k USD per TB,
I believe. There are other similar products from other vendors which may
have different cost/performance ratios, but I can only vouch for stuff
I've personally tested.

Our adventure with these cards was a presentation at Postgres Open in
2011. Slides are here:

https://wiki.postgresql.org/images/c/c5/Nvram_fun_profit.pdf




--
Guy Rouillier






Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Rob Sargent

On 04/02/2014 04:36 PM, Jeff Janes wrote:
On Wed, Apr 2, 2014 at 12:37 PM, Rob Sargent  wrote:


I'm playing with various data models to compare performance and
practicalities and not sure if I should be surprised by the
numbers I'm getting. I hope this report isn't too wishy-washy for
reasoned comment.

One model says a genotype is defined as follows:

 Table "public.oldstyle"
+-+--+---+
|   Column| Type | Modifiers |
+-+--+---+
| id  | uuid | not null  |
| sample_name | text | not null  |
| marker_name | text | not null  |
| allele1 | character(1) |   |
| allele2 | character(1) |   |
+-+--+---+
(0. id is a Primary Key)
(1. Take what you will from the table name.)
(2. I hadn't thought of "char" type at this point)
(3. Ultimately the names would become ids, RI included)
(4. We're loading 39 samples and ~950K markers)

I loaded 37M+ records using jOOQ (batching every 1000 lines) in
12+ hours (800+ records/sec).  Then I tried COPY and killed that
after 11.25 hours when I realised that I had added a non-unique
index on the name fields after the first load. By that point it
was on line 28301887, so ~0.75 done, which implies it would have
taken ~15 hours to complete.

Would the overhead of the index likely explain this decrease in
throughput?


Absolutely.


Impatience got the better of me and I killed the second COPY.
This time it had done 54% of the file in 6.75 hours, extrapolating
to roughly 12 hours to do the whole thing.


Are you sure you actually dropped the indices?  (And the primary key?)

I get about 375,000 lines per second with no indexes, triggers, 
constraints.


perl -le 'my $x=""; foreach(1..37e6) {$x++; print join 
"\t", "a0eebc99-9c0b-4ef8-bb6d-$x",$_,$_,"A","T"}'|time psql -c 
'truncate oldstyle; copy oldstyle from stdin;'


(More if I truncate it in the same transaction as the copy)

If you can't drop the pk constraint, can you at least generate the 
values in sort-order?


Cheers,

Jeff
I restarted the java-based loading.  Still with the pk in place (and
actually generating the UUIDs etc.) I got ~1.1M rows in one minute. And
one "LOG:  checkpoints are occurring too frequently (24 seconds apart)"
with checkpoint_segments now at 6 (v. default 3). Plotting the
records v. time in that minute, at least, it's pretty linear.  The time
per batch is relatively constant, with a few hiccups (.5 sec per 10K
lines).  So I've improved things immensely, but I'm not entirely sure that
the simple config change is the reason.  If this continued I would be
done inside 40 minutes.


With copy interruptus I now get 2.9M records per minute, so the load would
take only 12 or so minutes. I did get four reports of too-frequent
checkpoints: two at 15 seconds, two at 9 seconds.


I'll need to let each one go to completion.

If these numbers are at all accurate and realistic, I'm still impressed
with jOOQ, though COPY is rightfully back to its proper place as the
fastest way to load.








Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Adrian Klaver

On 04/02/2014 02:27 PM, Rob Sargent wrote:

On 04/02/2014 03:11 PM, Adrian Klaver wrote:

On 04/02/2014 02:04 PM, Rob Sargent wrote:

On 04/02/2014 02:36 PM, Adrian Klaver wrote:

On 04/02/2014 01:14 PM, Rob Sargent wrote:

On 04/02/2014 01:56 PM, Steve Atkins wrote:




Have you tried moving the input file to the same disk as the server,
to factor out the controller?


I labour under the delusion that it is through the controller one reads
and writes and that there might be some slight advantage to not doing
both against one drive if avoidable. Wrong again?


Well there is one way to find out:)

Might try with something less than the whole file to come up with an
approximate rows/sec rate.










--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] SSD Drives

2014-04-02 Thread Scott Marlowe
On Wed, Apr 2, 2014 at 4:09 PM, Shaun Thomas  wrote:
> On 04/02/2014 04:55 PM, Bret Stern wrote:
>
>> Care to share the SSD hardware you're using?
>
>
> We use these:
>
> http://www.fusionio.com/products/iodrive2/
>
> The older versions of these cards can read faster than a RAID-10 of 80x15k
> RPM SAS drives, based on our tests from a couple years ago. Writes aren't
> *quite* as fast, but still much better than even a large RAID array.
>
> They ain't cheap, though. You can expect to pay around $15k USD per TB, I
> believe. There are other similar products from other vendors which may have
> different cost/performance ratios, but I can only vouch for stuff I've
> personally tested.
>
> Our adventure with these cards was a presentation at Postgres Open in 2011.
> Slides are here:
>
> https://wiki.postgresql.org/images/c/c5/Nvram_fun_profit.pdf
>

Where I work we use the MLC based FusionIO cards and they are quite
fast. It's actually hard to push them to their max with only 24 or 32
cores in a fast machine. My favorite thing about them is their
fantastic support.




Re: [GENERAL] simple update query stuck

2014-04-02 Thread Si Chen
Ok, thanks.  I'll keep that in mind.


On Tue, Apr 1, 2014 at 7:45 PM, Andrew Sullivan  wrote:

> On Tue, Apr 01, 2014 at 07:00:16PM -0400, Tom Lane wrote:
>
> > one of the clients, in a way that isn't visible to the deadlock detector.
> > One way for that to happen without any external interconnections is if
> the
> > client is waiting for a NOTIFY that will never arrive because the
> would-be
> > sender is blocked.
>
> I bet the case I was thinking of was the NOTIFY example.  That never
> occurred to me as an explanation, but now that you mention it, it
> seems quite likely to me.
>
> More generally (and for the OP's problem), my experience is that lots
> of updates against the same rows in an unpredictable order is an
> excellent way to run into trouble, and long-running transactions are a
> major source of these problems.  Without a more detailed report about
> what is going on in the present case, I don't think it's going to be
> possible to diagnose better than has been done.
>
> A
>
> --
> Andrew Sullivan
> a...@crankycanuck.ca
>
>
>



-- 
Si Chen
Open Source Strategies, Inc.
sic...@opensourcestrategies.com
http://www.OpenSourceStrategies.com
LinkedIn: http://www.linkedin.com/in/opentaps
Twitter: http://twitter.com/opentaps


Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Rob Sargent

On 04/02/2014 04:36 PM, Jeff Janes wrote:
On Wed, Apr 2, 2014 at 12:37 PM, Rob Sargent  wrote:


I'm playing with various data models to compare performance and
practicalities and not sure if I should be surprised by the
numbers I'm getting. I hope this report isn't too wishy-washy for
reasoned comment.

One model says a genotype is defined as follows:

 Table "public.oldstyle"
+-+--+---+
|   Column| Type | Modifiers |
+-+--+---+
| id  | uuid | not null  |
| sample_name | text | not null  |
| marker_name | text | not null  |
| allele1 | character(1) |   |
| allele2 | character(1) |   |
+-+--+---+
(0. id is a Primary Key)
(1. Take what you will from the table name.)
(2. I hadn't thought of "char" type at this point)
(3. Ultimately the names would become ids, RI included)
(4. We're loading 39 samples and ~950K markers)

I loaded 37M+ records using jOOQ (batching every 1000 lines) in
12+ hours (800+ records/sec).  Then I tried COPY and killed that
after 11.25 hours when I realised that I had added a non-unique
index on the name fields after the first load. By that point it
was on line 28301887, so ~0.75 done, which implies it would have
taken ~15 hours to complete.

Would the overhead of the index likely explain this decrease in
throughput?


Absolutely.


Impatience got the better of me and I killed the second COPY.
This time it had done 54% of the file in 6.75 hours, extrapolating
to roughly 12 hours to do the whole thing.


Are you sure you actually dropped the indices?  (And the primary key?)

I get about 375,000 lines per second with no indexes, triggers, 
constraints.


perl -le 'my $x=""; foreach(1..37e6) {$x++; print join 
"\t", "a0eebc99-9c0b-4ef8-bb6d-$x",$_,$_,"A","T"}'|time psql -c 
'truncate oldstyle; copy oldstyle from stdin;'


(More if I truncate it in the same transaction as the copy)

If you can't drop the pk constraint, can you at least generate the 
values in sort-order?


Cheers,

Jeff
No, I'll leave the pk in at the very least.  My example load (37M
records) will not be the last word by any means.  That's one experiment,
if you will.  My goal is not to see how fast I can get records in,
rather to see what I can expect going forward. Yes, I'm pretty sure I
dropped the index afore the second kick at copy.
I'm about to restart after some config changes (doubled
checkpoint_segments - I have no idea what the value should be).


Hope you'll stay tuned.



Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Jeff Janes
On Wed, Apr 2, 2014 at 12:37 PM, Rob Sargent  wrote:

>  I'm playing with various data models to compare performance and
> practicalities and not sure if I should be surprised by the numbers I'm
> getting. I hope this report isn't too wishy-washy for reasoned comment.
>
> One model says a genotype is defined as follows:
>
>  Table "public.oldstyle"
> +-+--+---+
> |   Column| Type | Modifiers |
> +-+--+---+
> | id  | uuid | not null  |
> | sample_name | text | not null  |
> | marker_name | text | not null  |
> | allele1 | character(1) |   |
> | allele2 | character(1) |   |
> +-+--+---+
> (0. id is a Primary Key)
> (1. Take what you will from the table name.)
> (2. I hadn't thought of "char" type at this point)
> (3. Ultimately the names would become ids, RI included)
> (4. We're loading 39 samples and ~950K markers)
>
> I loaded 37M+ records using jOOQ (batching every 1000 lines) in 12+ hours
> (800+ records/sec).  Then I tried COPY and killed that after 11.25 hours
> when I realised that I had added a non-unique index on the name fields
> after the first load. By that point it was on line 28301887, so ~0.75 done,
> which implies it would have taken ~15 hours to complete.
>
> Would the overhead of the index likely explain this decrease in throughput?
>

Absolutely.


>
> Impatience got the better of me and I killed the second COPY.  This time
> it had done 54% of the file in 6.75 hours, extrapolating to roughly 12
> hours to do the whole thing.
>

Are you sure you actually dropped the indices?  (And the primary key?)

I get about 375,000 lines per second with no indexes, triggers, constraints.

perl -le 'my $x=""; foreach(1..37e6) {$x++; print join "\t",
"a0eebc99-9c0b-4ef8-bb6d-$x",$_,$_,"A","T"}'|time psql -c 'truncate
oldstyle; copy oldstyle from stdin;'

(More if I truncate it in the same transaction as the copy)

If you can't drop the pk constraint, can you at least generate the values
in sort-order?

Cheers,

Jeff
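
Jeff's parenthetical about truncating in the same transaction refers to
the WAL-skipping optimization for tables created or truncated in the
current transaction (it applies with wal_level = minimal); a minimal
sketch, with a hypothetical file path:

    BEGIN;
    TRUNCATE oldstyle;
    COPY oldstyle FROM '/tmp/oldstyle.tsv' WITH DELIMITER E'\t';
    COMMIT;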


Re: [GENERAL] SSD Drives

2014-04-02 Thread Shaun Thomas

On 04/02/2014 04:55 PM, Bret Stern wrote:


Care to share the SSD hardware you're using?


We use these:

http://www.fusionio.com/products/iodrive2/

The older versions of these cards can read faster than a RAID-10 of 
80x15k RPM SAS drives, based on our tests from a couple years ago. Writes
aren't *quite* as fast, but still much better than even a large RAID array.


They ain't cheap, though. You can expect to pay around $15k USD per TB, 
I believe. There are other similar products from other vendors which may 
have different cost/performance ratios, but I can only vouch for stuff 
I've personally tested.


Our adventure with these cards was a presentation at Postgres Open in 
2011. Slides are here:


https://wiki.postgresql.org/images/c/c5/Nvram_fun_profit.pdf

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com





Re: [GENERAL] libpq - lack of support to set the fetch size

2014-04-02 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160


> Second, the feature needed to do this without even
> using a cursor was added 1.5 years ago (PQsetSingleRowMode).  The DBD::Pg
> was just not taught how to use it yet.

True. And we were hoping for something better, so we can make one request 
for 10,000 rows to libpq rather than call PQgetResult 10,000 times, but 
we'll move ahead with implementing RowCacheSize via PQsetSingleRowMode.

- -- 
Greg Sabino Mullane g...@turnstep.com
PGP Key: 0x14964AC8 201404021428
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iEYEAREDAAYFAlM8ie8ACgkQvJuQZxSWSsit0gCgn0qMRRnep2sVUeM+BLyQoIkS
dtMAoPvM71oL+YdQg+84/xT2TxLj3wek
=GVH+
-END PGP SIGNATURE-






Re: [GENERAL] Lock problem

2014-04-02 Thread Merlin Moncure
On Wed, Apr 2, 2014 at 3:01 PM, Igor Neyman  wrote:
>
>
>> -Original Message-
>> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
>> ow...@postgresql.org] On Behalf Of Victor Sterpu
>> Sent: Wednesday, April 02, 2014 2:25 PM
>> To: Victor Sterpu; Merlin Moncure
>> Cc: PostgreSQL General
>> Subject: Re: [GENERAL] Lock problem
>>
>> I'm sure it's not right, but is there a server-side solution for such
>> situations?
>> A configuration setting - a timeout for idle transactions.
>>
>
> I don't think PG has such a configuration parameter.
> But, you could easily write a function (say in PL/pgSQL) and run it on
> schedule, where you could check "IDLE IN TRANSACTION" sessions and compare
> their start_time to system time, and then based on your criteria you could
> kill suspect sessions/transactions.
> But this could be dangerous;  some long-running transactions could be 
> perfectly valid.

I'd look for 'Idle In Transaction' backends that have
clock_timestamp() - state_change > x, where x is the maximum amount of
time your application does stuff between queries while in transaction.
 Generally, x should never be more than about 10 seconds or so...or if
it is, it's advisable to restructure your application so that more
preprocessing is done before grabbing the transaction initially.  In
fact, for well written applications, seeing 'idle in transaction'
should be quite exceptional.

merlin
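
A sketch of Merlin's check, assuming 9.2 or later where pg_stat_activity
has the state and state_change columns (the 10-second threshold is
illustrative):

    SELECT pid, usename, clock_timestamp() - state_change AS idle_for
    FROM pg_stat_activity
    WHERE state = 'idle in transaction'
      AND clock_timestamp() - state_change > interval '10 seconds';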




Re: [GENERAL] SSD Drives

2014-04-02 Thread Bret Stern
Care to share the SSD hardware you're using?

I've used none to date, and have some critical data I would like
to put on a development server to test with.

Regards,

Bret Stern

On Wed, 2014-04-02 at 15:31 -0500, Shaun Thomas wrote:
> On 04/02/2014 02:50 PM, Brent Wood wrote:
> 
> > http://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html
> 
> While interesting, these results are extremely out of date compared to 
> current drives. Current chips and firmware regularly put out 2-10 times 
> better performance than even the best graphs on this page, depending on 
> what you buy.
> 
> We moved all of our performance-critical servers to NVRAM-based storage 
> years ago. For us, it was well worth the added expense.
> 
> -- 
> Shaun Thomas
> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
> 312-676-8870
> stho...@optionshouse.com
> 
> 
> 






Re: [GENERAL] Lock problem

2014-04-02 Thread Alban Hertroys
On 02 Apr 2014, at 20:13, Victor Sterpu  wrote:

> A few days or weeks may pass until the next lock.
> But I don't understand why the whole table is locked if there is one
> uncommitted transaction.
> 
> The following scenario might be the cause:
> 1. Transaction is started
> 2. the client application is closed because of a power surge, so the started
> transaction will never be committed
> 3. from the server's point of view there is an unfinished transaction that will
> block future statements
> 
> Is this normal behaviour?

I’m pretty sure that with such a power surge the connection gets closed, causing
the transaction to roll back.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
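
If the client host itself dies (as in the power-surge scenario), the
socket may never be closed and the server can hold the transaction open
for a long time. One server-side mitigation, not raised in the thread, is
TCP keepalives for TCP connections; a postgresql.conf sketch with
illustrative values:

    # postgresql.conf - detect dead client connections sooner (illustrative)
    tcp_keepalives_idle = 60       # seconds of inactivity before first probe
    tcp_keepalives_interval = 10   # seconds between probes
    tcp_keepalives_count = 6       # failed probes before dropping the connection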





Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Rob Sargent

On 04/02/2014 03:11 PM, Adrian Klaver wrote:

On 04/02/2014 02:04 PM, Rob Sargent wrote:

On 04/02/2014 02:36 PM, Adrian Klaver wrote:

On 04/02/2014 01:14 PM, Rob Sargent wrote:

On 04/02/2014 01:56 PM, Steve Atkins wrote:



Well indeed there are copious LOG/HINT pairs along the lines of

LOG:  checkpoints are occurring too frequently ([8 <= n <=29]
seconds apart)
HINT:  Consider increasing the configuration parameter
"checkpoint_segments".

and these are during non-load periods.  During the COPYs I see
uncountable numbers of

WARNING:  pgstat wait timeout

As you guys have probably already concluded, I have some config/tuning
to do.


At a guess some of the problem comes from here:

"The input file and postgres data are on separate disks, but only one 
controller."


Have you tried moving the input file to the same disk as the server, 
to factor out the controller?


I labour under the delusion that it is through the controller one reads 
and writes and that there might be some slight advantage to not doing 
both against one drive if avoidable. Wrong again?







Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Adrian Klaver

On 04/02/2014 02:04 PM, Rob Sargent wrote:

On 04/02/2014 02:36 PM, Adrian Klaver wrote:

On 04/02/2014 01:14 PM, Rob Sargent wrote:

On 04/02/2014 01:56 PM, Steve Atkins wrote:



Well indeed there are copious LOG/HINT pairs along the lines of

LOG:  checkpoints are occurring too frequently ([8 <= n <=29]
seconds apart)
HINT:  Consider increasing the configuration parameter
"checkpoint_segments".

and these are during non-load periods.  During the COPYs I see
uncountable numbers of

WARNING:  pgstat wait timeout

As you guys have probably already concluded, I have some config/tuning
to do.


At a guess some of the problem comes from here:

"The input file and postgres data are on separate disks, but only one 
controller."


Have you tried moving the input file to the same disk as the server, to 
factor out the controller?




Cheers,
rjs






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Rob Sargent

On 04/02/2014 02:36 PM, Adrian Klaver wrote:

On 04/02/2014 01:14 PM, Rob Sargent wrote:

On 04/02/2014 01:56 PM, Steve Atkins wrote:


Impatience got the better of me and I killed the second COPY.  This 
time it had done 54% of the file in 6.75 hours, extrapolating to 
roughly 12 hours to do the whole thing.
That seems rather painfully slow. How exactly are you doing the bulk 
load? Are you CPU limited or disk limited?


Have you 
read http://www.postgresql.org/docs/current/interactive/populate.html ?


Cheers,
   Steve


The copy command was pretty vanilla:

copy oldstyle from '/export/home/rob/share/testload/'
with delimiter ' ';



I've been to that page, but (as I read them) none sticks out as a sure
thing.  I'm not so worried about the actual performance as I am with the
relative throughput (sixes so far).


Have you looked at the Postgres logs from that time period to see if 
there is anything of interest, say complaining about checkpoints.




I'm not cpu bound, but I confess I didn't look at io stats during the
copy runs. I just assume it was pegged :)

Thanks,




Well indeed there are copious LOG/HINT pairs along the lines of

   LOG:  checkpoints are occurring too frequently ([8 <= n <=29]
   seconds apart)
   HINT:  Consider increasing the configuration parameter
   "checkpoint_segments".

and these are during non-load periods.  During the COPYs I see 
uncountable numbers of


   WARNING:  pgstat wait timeout

As you guys have probably already concluded, I have some config/tuning 
to do.


Cheers,
rjs
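
Following the HINT, a postgresql.conf sketch for a bulk-load window
(values illustrative; checkpoint_segments is the 9.3-era knob, and a
server reload is required):

    # postgresql.conf - reduce checkpoint churn during bulk loads (illustrative)
    checkpoint_segments = 32            # default is 3
    checkpoint_completion_target = 0.9  # spread checkpoint I/O out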





Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Steve Atkins

On Apr 2, 2014, at 1:14 PM, Rob Sargent  wrote:

> On 04/02/2014 01:56 PM, Steve Atkins wrote:
>> On Apr 2, 2014, at 12:37 PM, Rob Sargent 
>>  wrote:
>> 
>>> 
>>> Impatience got the better of me and I killed the second COPY.  This time it 
>>> had done 54% of the file in 6.75 hours, extrapolating to roughly 12 hours 
>>> to do the whole thing.
>>> 
>> That seems rather painfully slow. How exactly are you doing the bulk load? 
>> Are you CPU limited or disk limited?
>> 
>> Have you read 
>> http://www.postgresql.org/docs/current/interactive/populate.html
>>  ?
>> 
>> Cheers,
>>   Steve
>> 
>> 
> The copy command was pretty vanilla:
> copy oldstyle from '/export/home/rob/share/testload/' with 
> delimiter ' ';
> I've been to that page, but (as I read them) none sticks out as a sure thing. 
>  I'm not so worried about the actual performance as I am with the relative 
> throughput (sixes so far).
> 
> I'm not cpu bound, but I confess I didn't look at io stats during the copy 
> runs. I just assume it was pegged :)

If each row is, say, 100 bytes including the per-row overhead (plausible for a 
uuid and a couple of strings), and you're inserting 800 rows a second, that's 
80k/second, which would be fairly pathetic.

On my laptop (which has an SSD, sure, but it's still a laptop) I can insert 40M 
rows of data that has a few integers and a few small strings in about 52 
seconds. And that's just using a simple, single-threaded load using psql to run 
copy from stdin, reading from the same disk as the DB is on, with no tuning of 
any parameters to speed up the load.

12 hours suggests there's something fairly badly wrong with what you're doing. 
I'd definitely look at the server logs, check system load and double check what 
you're actually running.

(Running the same thing on a tiny VM, one that shares a single RAID5 of 7200rpm 
drives with about 40 other VMs, takes a shade under two minutes, mostly CPU 
bound).

Cheers,
  Steve





Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Bill Moran

Just for a comparison ... I wrote a Java program that copies data from
MySQL -> Postgres, using the COPY implementation in the JDBC driver.  I've
occasionally seen 50,000+ rows/sec from this program, but the speed is
highly dependent on the table structure.  Tables that are very wide tend
to run slower (for example).

On Wed, 02 Apr 2014 13:36:27 -0700
Adrian Klaver  wrote:

> On 04/02/2014 01:14 PM, Rob Sargent wrote:
> > On 04/02/2014 01:56 PM, Steve Atkins wrote:
> >> On Apr 2, 2014, at 12:37 PM, Rob Sargent  wrote:
> >>
> >>> I'm playing with various data models to compare performance and 
> >>> practicalities and not sure if I should be surprised by the numbers I'm 
> >>> getting. I hope this report isn't too wishy-washy for reasoned comment.
> >>>
> >>> One model says a genotype is defined as follows:
> >>>   Table "public.oldstyle"
> >>> +-+--+---+
> >>> |   Column| Type | Modifiers |
> >>> +-+--+---+
> >>> | id  | uuid | not null  |
> >>> | sample_name | text | not null  |
> >>> | marker_name | text | not null  |
> >>> | allele1 | character(1) |   |
> >>> | allele2 | character(1) |   |
> >>> +-+--+---+
> >>> (0. id is a Primary Key)
> >>> (1. Take what you will from the table name.)
> >>> (2. I hadn't thought of "char" type at this point)
> >>> (3. Ultimately the names would become ids, RI included)
> >>> (4. We're loading 39 samples and ~950K markers)
> >>> I loaded 37M+ records using jOOQ (batching every 1000 lines) in 12+ hours 
> >>> (800+ records/sec).  Then I tried COPY and killed that after 11.25 hours 
> >>> when I realised that I had added a non-unique index on the name fields
> >>> after the first load. By that point it was on line 28301887, so ~0.75
> >>> done, which implies it would have taken ~15 hours to complete.
> >>>
> >>> Would the overhead of the index likely explain this decrease in 
> >>> throughput?
> >>>
> >>> Impatience got the better of me and I killed the second COPY.  This time 
> >>> it had done 54% of the file in 6.75 hours, extrapolating to roughly 12 
> >>> hours to do the whole thing.
> >> That seems rather painfully slow. How exactly are you doing the bulk load? 
> >> Are you CPU limited or disk limited?
> >>
> >> Have you 
> >> read http://www.postgresql.org/docs/current/interactive/populate.html ?
> >>
> >> Cheers,
> >>Steve
> >>
> > The copy command was pretty vanilla:
> >
> > copy oldstyle from '/export/home/rob/share/testload/'
> > with delimiter ' ';
> >
> > I've been to that page, but (as I read them) none sticks out as a sure
> > thing.  I'm not so worried about the actual performance as I am with the
> > relative throughput (sixes so far).
> 
> Have you looked at the Postgres logs from that time period to see if 
> there is anything of interest, say complaining about checkpoints.
> 
> >
> > I'm not cpu bound, but I confess I didn't look at io stats during the
> > copy runs. I just assume it was pegged :)
> >
> > Thanks,
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com
> 
> 


-- 
Bill Moran 




Re: [GENERAL] Bug in user pg_ident.conf mapping code?

2014-04-02 Thread Shaun Thomas

On 04/02/2014 03:42 PM, Adrian Klaver wrote:


If you want a sort of hyphen.


I know. Unfortunately, the user in question gets his name from our 
Active Directory system. We have PAM set up to maintain local users 
based on the username assigned there, so the hyphen is stuck.


Trust me, we lament this situation regularly. ;)

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com





Re: [GENERAL] Bug in user pg_ident.conf mapping code?

2014-04-02 Thread Adrian Klaver

On 04/02/2014 01:35 PM, Shaun Thomas wrote:

On 04/02/2014 03:27 PM, Adrian Klaver wrote:


Who are you connecting as, dude-guy or dudeguy?

Looks to me like:

psql -d some_db -U dude-guy


You are correct. I was assuming the translation went system ->
PostgreSQL, not the other way around. Like, by setting -U dude-guy,
PostgreSQL checked peer auth, then applied the mapping to find and apply
the database username.

Not sure why I had it backwards, but thanks. :)


FYI this works:

postgres@test=# CREATE ROLE dude_guy;
CREATE ROLE


If you want a sort of hyphen.






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Bug in user pg_ident.conf mapping code?

2014-04-02 Thread Shaun Thomas

On 04/02/2014 03:27 PM, Adrian Klaver wrote:


Who are you connecting as, dude-guy or dudeguy?

Looks to me like:

psql -d some_db -U dude-guy


You are correct. I was assuming the translation went system -> 
PostgreSQL, not the other way around. Like, by setting -U dude-guy, 
PostgreSQL checked peer auth, then applied the mapping to find and apply 
the database username.


Not sure why I had it backwards, but thanks. :)

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com





Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Adrian Klaver

On 04/02/2014 01:14 PM, Rob Sargent wrote:

On 04/02/2014 01:56 PM, Steve Atkins wrote:

On Apr 2, 2014, at 12:37 PM, Rob Sargent  wrote:


I'm playing with various data models to compare performance and practicalities 
and not sure if I should be surprised by the numbers I'm getting. I hope this 
report isn't too wishy-washy for reasoned comment.

One model says a genotype is defined as follows:
  Table "public.oldstyle"
+-+--+---+
|   Column| Type | Modifiers |
+-+--+---+
| id  | uuid | not null  |
| sample_name | text | not null  |
| marker_name | text | not null  |
| allele1 | character(1) |   |
| allele2 | character(1) |   |
+-+--+---+
(0. id is a Primary Key)
(1. Take what you will from the table name.)
(2. I hadn't thought of "char" type at this point)
(3. Ultimately the names would become ids, RI included)
(4. We're loading 39 samples and ~950K markers)
I loaded 37M+ records using jOOQ (batching every 1000 lines) in 12+ hours (800+ 
records/sec).  Then I tried COPY and killed that after 11.25 hours when I
realised that I had added a non-unique index on the name fields after the first
load. By that point it was on line 28301887, so ~0.75 done, which implies it
would have taken ~15 hours to complete.

Would the overhead of the index likely explain this decrease in throughput?

Impatience got the better of me and I killed the second COPY.  This time it had 
done 54% of the file in 6.75 hours, extrapolating to roughly 12 hours to do the 
whole thing.

That seems rather painfully slow. How exactly are you doing the bulk load? Are 
you CPU limited or disk limited?

Have you read http://www.postgresql.org/docs/current/interactive/populate.html ?

Cheers,
   Steve


The copy command was pretty vanilla:

copy oldstyle from '/export/home/rob/share/testload/'
with delimiter ' ';

I've been to that page, but (as I read them) none sticks out as a sure
thing.  I'm not so worried about the actual performance as I am with the
relative throughput (sixes so far).


Have you looked at the Postgres logs from that time period to see if 
there is anything of interest, say complaining about checkpoints.




I'm not cpu bound, but I confess I didn't look at io stats during the
copy runs. I just assume it was pegged :)

Thanks,



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] SSD Drives

2014-04-02 Thread Shaun Thomas

On 04/02/2014 02:50 PM, Brent Wood wrote:


http://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html


While interesting, these results are extremely out of date compared to 
current drives. Current chips and firmware regularly put out 2-10 times 
better performance than even the best graphs on this page, depending on 
what you buy.


We moved all of our performance-critical servers to NVRAM-based storage 
years ago. For us, it was well worth the added expense.


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com





Re: [GENERAL] Bug in user pg_ident.conf mapping code?

2014-04-02 Thread Adrian Klaver

On 04/02/2014 10:01 AM, Shaun Thomas wrote:

Hey,

So, I've gotten this on 9.3.4 after having a complaint from a user with
a hyphen in his name, but I dug further.

This is at the top of my pg_hba.conf file:

local   all   all   peer map=local

And this is my pg_ident.conf:

local   dude-guy   dudeguy
local   /(.*)$     \1

I've reloaded configs, and even went as far as to restart the database.
Yet the errors I get in the logs say this:

2014-04-02 10:39:59 CDT|LOG:  no match in usermap "local" for user
"dude-guy" authenticated as "dude-guy"
2014-04-02 10:39:59 CDT|FATAL:  Peer authentication failed for user
"dude-guy"
2014-04-02 10:39:59 CDT|DETAIL:  Connection matched pg_hba.conf line 84:
"local all all peer map=local"

What's going on here? Shouldn't it be checking for peer auth before
applying the mapping? Am I missing something?




Who are you connecting as, dude-guy or dudeguy?

Looks to me like:

psql -d some_db -U dude-guy


From here:

http://www.postgresql.org/docs/9.3/interactive/auth-username-maps.html

Example 19.2

If I am following, you are restricting dude-guy to logging in as dudeguy.
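
To make the direction explicit (map name and users from Shaun's config):
pg_ident.conf maps system-username to database-username, and the name
given with -U must be the database user.

    # pg_ident.conf:  MAPNAME  SYSTEM-USERNAME  PG-USERNAME
    local   dude-guy   dudeguy

    # works: psql -d some_db -U dudeguy    (run as OS user dude-guy)
    # fails: psql -d some_db -U dude-guy   (the map permits no dude-guy login)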






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] SSD Drives

2014-04-02 Thread Brent Wood
have you seen this?

http://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html


Brent Wood
Principal Technician - GIS and Spatial Data Management
Programme Leader - Environmental Information Delivery
+64-4-386-0529 | 301 Evans Bay Parade, Greta Point, Wellington |
www.niwa.co.nz

From: pgsql-general-ow...@postgresql.org [pgsql-general-ow...@postgresql.org] 
on behalf of Bret Stern [bret_st...@machinemanagement.com]
Sent: Thursday, April 3, 2014 8:37 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] SSD Drives

Any opinions/comments on using SSD drives with postgresql?







Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Rob Sargent

On 04/02/2014 01:56 PM, Steve Atkins wrote:

On Apr 2, 2014, at 12:37 PM, Rob Sargent  wrote:


I'm playing with various data models to compare performance and practicalities 
and not sure if I should be surprised by the numbers I'm getting. I hope this 
report isn't too wishy-washy for reasoned comment.

One model says a genotype is defined as follows:
  Table "public.oldstyle"
+-+--+---+
|   Column| Type | Modifiers |
+-+--+---+
| id  | uuid | not null  |
| sample_name | text | not null  |
| marker_name | text | not null  |
| allele1 | character(1) |   |
| allele2 | character(1) |   |
+-+--+---+
(0. id is a Primary Key)
(1. Take what you will from the table name.)
(2. I hadn't thought of "char" type at this point)
(3. Ultimately the names would become ids, RI included)
(4. We're loading 39 samples and ~950K markers)
I loaded 37M+ records using jOOQ (batching every 1000 lines) in 12+ hours (800+ 
records/sec).  Then I tried COPY and killed that after 11.25 hours when I
realised that I had added a non-unique index on the name fields after the first
load. By that point it was on line 28301887, so ~0.75 done, which implies it
would have taken ~15 hours to complete.

Would the overhead of the index likely explain this decrease in throughput?

Impatience got the better of me and I killed the second COPY.  This time it had 
done 54% of the file in 6.75 hours, extrapolating to roughly 12 hours to do the 
whole thing.

That seems rather painfully slow. How exactly are you doing the bulk load? Are 
you CPU limited or disk limited?

Have you read http://www.postgresql.org/docs/current/interactive/populate.html ?

Cheers,
   Steve


The copy command was pretty vanilla:

   copy oldstyle from '/export/home/rob/share/testload/'
   with delimiter ' ';

I've been to that page, but (as I read them) none sticks out as a sure 
thing.  I'm not so worried about the actual performance as I am with the 
relative throughput (sixes so far).


I'm not cpu bound, but I confess I didn't look at io stats during the 
copy runs. I just assume it was pegged :)


Thanks,


Re: [GENERAL] SSD Drives

2014-04-02 Thread Shaun Thomas

On 04/02/2014 02:37 PM, Bret Stern wrote:


Any opinions/comments on using SSD drives with postgresql?


Using SSDs with PostgreSQL is fine, provided they have an onboard 
capacitor to ensure data integrity. The main concern with SSD drives is
that they essentially lie about their sync status. There is an inherent
race condition between the time data reaches the drive and the time the
write balancing and NVRAM commit overhead actually makes it durable.


Most common drives only have a volatile RAM chip that acts as a buffer 
space while writes are synced to the physical drive. Without a capacitor 
backing, the state of this buffer is erased on power loss, resulting in 
a corrupt database.


There are upcoming technologies which may solve this (see ReRAM) but for 
now, it's a requirement for any sane system.


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com





Re: [GENERAL] Lock problem

2014-04-02 Thread Igor Neyman


> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Victor Sterpu
> Sent: Wednesday, April 02, 2014 2:25 PM
> To: Victor Sterpu; Merlin Moncure
> Cc: PostgreSQL General
> Subject: Re: [GENERAL] Lock problem
> 
> I'm sure it's not right, but is there a server-side solution for such
> situations?
> A configuration setting - a timeout for idle transactions.
> 

I don't think PG has such a configuration parameter.
But, you could easily write a function (say in PL/pgSQL) and run it on schedule,
where you could check "IDLE IN TRANSACTION" sessions and compare their
start_time to system time, and then based on your criteria you could kill
suspect sessions/transactions.
But this could be dangerous;  some long-running transactions could be perfectly 
valid.

Regards,
Igor Neyman
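
A minimal sketch of such a function, assuming 9.2+ pg_stat_activity
columns (pid, state, state_change; on 9.0/9.1 the columns differ), with a
caller-supplied threshold:

    -- terminate backends idle in transaction for longer than max_idle
    CREATE OR REPLACE FUNCTION kill_idle_transactions(max_idle interval)
    RETURNS integer LANGUAGE plpgsql AS $$
    DECLARE
      n integer;
    BEGIN
      SELECT count(*) INTO n
      FROM (SELECT pg_terminate_backend(pid)
            FROM pg_stat_activity
            WHERE state = 'idle in transaction'
              AND clock_timestamp() - state_change > max_idle) killed;
      RETURN n;
    END;
    $$;

As noted above, this is blunt: schedule it (say, via cron and psql) only
if long-idle transactions are never legitimate in your workload.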



Re: [GENERAL] COPY v. java performance comparison

2014-04-02 Thread Steve Atkins

On Apr 2, 2014, at 12:37 PM, Rob Sargent  wrote:

> I'm playing with various data models to compare performance and 
> practicalities and not sure if I should be surprised by the numbers I'm 
> getting. I hope this report isn't too wishy-washy for reasoned comment.
> 
> One model says a genotype is defined as follows:
>  Table "public.oldstyle"
> +-+--+---+
> |   Column| Type | Modifiers |
> +-+--+---+
> | id  | uuid | not null  |
> | sample_name | text | not null  |
> | marker_name | text | not null  |
> | allele1 | character(1) |   |
> | allele2 | character(1) |   |
> +-+--+---+
> (0. id is a Primary Key)
> (1. Take what you will from the table name.)
> (2. I hadn't thought of "char" type at this point)
> (3. Ultimately the names would become ids, RI included)
> (4. We're loading 39 samples and ~950K markers)
> I loaded 37M+ records using jOOQ (batching every 1000 lines) in 12+ hours 
> (800+ records/sec).  Then I tried COPY and killed that after 11.25 hours when 
> I realised that I had added a non-unique index on the name fields after the
> first load. By that point it was on line 28301887, so ~0.75 done, which
> implies it would have taken ~15 hours to complete.
> 
> Would the overhead of the index likely explain this decrease in throughput?
> 
> Impatience got the better of me and I killed the second COPY.  This time it 
> had done 54% of the file in 6.75 hours, extrapolating to roughly 12 hours to 
> do the whole thing.

That seems rather painfully slow. How exactly are you doing the bulk load? Are 
you CPU limited or disk limited?

Have you read http://www.postgresql.org/docs/current/interactive/populate.html ?

Cheers,
  Steve







Re: [GENERAL] pg_stat_activity

2014-04-02 Thread David Johnston
Jeff Janes wrote
> On Wed, Apr 2, 2014 at 12:00 PM, Bala Venkat  wrote:
> 
>> We are using postgres 9.0, and I have been looking at the pg_stat_activity table.
>>
>> I have some rows where there is a difference of 2 hours between
>> backend_start and xact_start,
>>
>> but there is only a few milliseconds between xact_start and query_start.
>>
>> All of them have wait as false.
>>
>> My question is, is the query still executing, and also why would there be
>> that much time difference between backend_start and xact_start?
>>
> 
> Say I connect to the database, and then I go to lunch for 2 hours.  Then I
> come back and start a transaction.
> 
> Or, I connect to the database and run a two-hour query.  Then
> rollback/commit that, and I start another transaction.
> 
> To know the state of the query, look at the "current_query"  field.  (In
> newer versions, look in "state" field, but that is not in 9.0)
> 
> Cheers,
> 
> Jeff

Or, more commonly (I think at least), connection pools that keep connections
open.  I'm not sure, but I don't think a session reset clears the backend
start time, so this would be the observed behavior.

David J.









[GENERAL] COPY v. java performance comparison

2014-04-02 Thread Rob Sargent
I'm playing with various data models to compare performance and 
practicalities and not sure if I should be surprised by the numbers I'm 
getting. I hope this report isn't too wishy-washy for reasoned comment.


One model says a genotype is defined as follows:

 Table "public.oldstyle"
   +-+--+---+
   |   Column| Type | Modifiers |
   +-+--+---+
   | id  | uuid | not null  |
   | sample_name | text | not null  |
   | marker_name | text | not null  |
   | allele1 | character(1) |   |
   | allele2 | character(1) |   |
   +-+--+---+
   (0. id is a Primary Key)
   (1. Take what you will from the table name.)
   (2. I hadn't thought of "char" type at this point)
   (3. Ultimately the names would become ids, RI included)
   (4. We're loading 39 samples and ~950K markers)

I loaded 37M+ records using jOOQ (batching every 1000 lines) in 12+ 
hours (800+ records/sec).  Then I tried COPY and killed that after 11.25 
hours when I realised that I had added a non-unique index on the name 
fields after the first load. By that point it was on line 28301887, so 
~75% done, which implies it would have taken ~15 hours to complete.


Would the overhead of the index likely explain this decrease in throughput?

Impatience got the better of me and I killed the second COPY.  This time 
it had done 54% of the file in 6.75 hours, extrapolating to roughly 12 
hours to do the whole thing.


That matches up with the java speed. Not sure if I should be elated with 
jOOQ or disappointed with COPY.


Btw, I can load roughly the same data into the model below in 10.5 
seconds. It only adds 39 very wide lines.  I haven't got to the 
practicality bits yet :)


   Table "public.chipcall"
   +-+--+---+
   |   Column| Type | Modifiers |
   +-+--+---+
   | id  | uuid |   |
   | sample_name | text |   |
   | chip_model  | uuid |   |
   | gta | text |   |
   +-+--+---+


This is just a dev desktop environment:
RHEL 6.5
PostgreSQL 9.3.1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 
4.4.7 20120313 (Red Hat 4.4.7-4 [hm, interesting contradiction]), 64-bit

2 QuadCore cpu MHz: 2925.878
The input file and postgres data are on separate disks, but only one 
controller.


Thanks in advance, even if you only read this far.



[GENERAL] SSD Drives

2014-04-02 Thread Bret Stern
Any opinions/comments on using SSD drives with postgresql?



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_stat_activity

2014-04-02 Thread Jeff Janes
On Wed, Apr 2, 2014 at 12:00 PM, Bala Venkat  wrote:

> We are using postgres 9.0.  Looking at the pg_stat_activity table,
>
> I have some rows where there is a difference of 2 hours between
> backend_start and xact_start,
>
> but only a few milliseconds between xact_start and query_start.
>
> All of them have wait as false.
>
> My question is: is the query still executing, and why would there be
> that much time difference between backend_start and xact_start?
>

Say I connect to the database, and then I go to lunch for 2 hours.  Then I
come back and start a transaction.

Or, I connect to the database and run a two-hour query.  Then
rollback/commit that, and I start another transaction.

To know the state of the query, look at the "current_query"  field.  (In
newer versions, look in "state" field, but that is not in 9.0)

Cheers,

Jeff


Re: [GENERAL] Lock problem

2014-04-02 Thread Victor Sterpu
I'm sure this is not right, but is there a server-side solution for such 
situations?

A configuration option - a timeout for idle transactions?
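
(A sketch, assuming 9.1: there is no idle-transaction timeout GUC there, 
but TCP keepalives can reap connections whose client vanished - say, after 
a power cut - and once the backend exits, its open transaction is rolled 
back. The values below are illustrative only.)

  # postgresql.conf
  tcp_keepalives_idle = 60      # seconds of idle before the first probe
  tcp_keepalives_interval = 10  # seconds between probes
  tcp_keepalives_count = 5      # failed probes before the connection is dropped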

-- Original Message --
From: "Victor Sterpu" 
To: "Victor Sterpu" ; "Merlin Moncure" 


Cc: "PostgreSQL General" 
Sent: 4/2/2014 9:13:22 PM
Subject: Re[2]: [GENERAL] Lock problem


A few days or weeks may pass until the next lock.
But I don't understand why the whole table is locked if there is one 
uncommitted transaction.


The following scenario might be the cause:
1. A transaction is started.
2. The client application is closed because of a power surge; the 
started transaction will never be committed.
3. From the server's point of view there is an unfinished transaction that 
will block future statements.


Is this normal behaviour?


-- Original Message --
From: "Victor Sterpu" 
To: "Merlin Moncure" 
Cc: "PostgreSQL General" 
Sent: 4/2/2014 7:19:06 PM
Subject: Re: [GENERAL] Lock problem

I followed all your advice and it is obvious that this log will show 
exactly what I need to debug the situation.

Great tip, thank you.

-- Original Message --
From: "Merlin Moncure" 
To: "Victor Sterpu" 
Cc: "PostgreSQL General" 
Sent: 4/2/2014 7:08:08 PM
Subject: Re: [GENERAL] Lock problem

On Wed, Apr 2, 2014 at 11:00 AM, Victor Sterpu  
wrote:

 All my transactions have commit or rollback.


Well, you have to verify that. There's a couple of ways to do it.
Probably the most direct is to:
1. Make sure database is logging pids in log_line_prefix (this is a
good idea all around)
2. turn on all statement logging (be advised: this can eat a lot of
log space and slow down the server).

Those two changes do not require a restart. A pg_ctl reload should
be sufficient.

Once you can do that, you should be able to locate database sessions
per pg_stat_activity that are 'idle in transaction' for a long time
without activity (anything over a second or so should be suspicious).
That will give the pid which you can then use to grep through the
statement log.

Common culprits are:
*) Dubious connection pooling solutions (php pconnect comes to mind)
*) Bad error handling logic in application (say, badly handled thrown 
exception)


merlin




-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Lock problem

2014-04-02 Thread Victor Sterpu

A few days or weeks may pass until the next lock.
But I don't understand why the whole table is locked if there is one 
uncommitted transaction.


The following scenario might be the cause:
1. A transaction is started.
2. The client application is closed because of a power surge; the 
started transaction will never be committed.
3. From the server's point of view there is an unfinished transaction that 
will block future statements.


Is this normal behaviour?


-- Original Message --
From: "Victor Sterpu" 
To: "Merlin Moncure" 
Cc: "PostgreSQL General" 
Sent: 4/2/2014 7:19:06 PM
Subject: Re: [GENERAL] Lock problem

I followed all your advice and it is obvious that this log will show 
exactly what I need to debug the situation.

Great tip, thank you.

-- Original Message --
From: "Merlin Moncure" 
To: "Victor Sterpu" 
Cc: "PostgreSQL General" 
Sent: 4/2/2014 7:08:08 PM
Subject: Re: [GENERAL] Lock problem

On Wed, Apr 2, 2014 at 11:00 AM, Victor Sterpu  
wrote:

 All my transactions have commit or rollback.


Well, you have to verify that. There's a couple of ways to do it.
Probably the most direct is to:
1. Make sure database is logging pids in log_line_prefix (this is a
good idea all around)
2. turn on all statement logging (be advised: this can eat a lot of
log space and slow down the server).

Those two changes do not require a restart. A pg_ctl reload should
be sufficient.

Once you can do that, you should be able to locate database sessions
per pg_stat_activity that are 'idle in transaction' for a long time
without activity (anything over a second or so should be suspicious).
That will give the pid which you can then use to grep through the
statement log.

Common culprits are:
*) Dubious connection pooling solutions (php pconnect comes to mind)
*) Bad error handling logic in application (say, badly handled thrown 
exception)


merlin




-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Bug in user pg_ident.conf mapping code?

2014-04-02 Thread Shaun Thomas

Hey,

So, I've gotten this on 9.3.4 after having a complaint from a user with 
a hyphen in his name, but I dug further.


This is at the top of my pg_hba.conf file:

local   all   all   peer map=local

And this is my pg_ident.conf:

local   dude-guy   dudeguy
local   /(.*)$     \1

I've reloaded configs, and even went as far as to restart the database. 
Yet the errors I get in the logs say this:


2014-04-02 10:39:59 CDT|LOG:  no match in usermap "local" for user 
"dude-guy" authenticated as "dude-guy"
2014-04-02 10:39:59 CDT|FATAL:  Peer authentication failed for user 
"dude-guy"
2014-04-02 10:39:59 CDT|DETAIL:  Connection matched pg_hba.conf line 84: 
"local all all peer map=local"


What's going on here? Shouldn't it be checking for peer auth before 
applying the mapping? Am I missing something?


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com

__

See http://www.peak6.com/email_disclaimer/ for terms and conditions related to 
this email


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Lock problem

2014-04-02 Thread Victor Sterpu
I followed all your advice and it is obvious that this log will show 
exactly what I need to debug the situation.

Great tip, thank you.

-- Original Message --
From: "Merlin Moncure" 
To: "Victor Sterpu" 
Cc: "PostgreSQL General" 
Sent: 4/2/2014 7:08:08 PM
Subject: Re: [GENERAL] Lock problem


On Wed, Apr 2, 2014 at 11:00 AM, Victor Sterpu  wrote:

 All my transactions have commit or rollback.


Well, you have to verify that. There's a couple of ways to do it.
Probably the most direct is to:
1. Make sure database is logging pids in log_line_prefix (this is a
good idea all around)
2. turn on all statement logging (be advised: this can eat a lot of
log space and slow down the server).

Those two changes do not require a restart. A pg_ctl reload should
be sufficient.

Once you can do that, you should be able to locate database sessions
per pg_stat_activity that are 'idle in transaction' for a long time
without activity (anything over a second or so should be suspicious).
That will give the pid which you can then use to grep through the
statement log.

Common culprits are:
*) Dubious connection pooling solutions (php pconnect comes to mind)
*) Bad error handling logic in application (say, badly handled thrown 
exception)


merlin




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Lock problem

2014-04-02 Thread Merlin Moncure
On Wed, Apr 2, 2014 at 11:00 AM, Victor Sterpu  wrote:
> All my transactions have commit or rollback.

Well, you have to verify that.  There's a couple of ways to do it.
Probably the most direct is to:
1. Make sure database is logging pids in log_line_prefix (this is a
good idea all around)
2. turn on all statement logging (be advised: this can eat a lot of
log space and slow down the server).

Those two changes do not require a restart.   A pg_ctl reload should
be sufficient.

Once you can do that, you should be able to locate database sessions
per pg_stat_activity that are 'idle in transaction' for a long time
without activity (anything over a second or so should be suspicious).
That will give the pid which you can then use to grep through the
statement log.

Common culprits are:
*) Dubious connection pooling solutions (php pconnect comes to mind)
*) Bad error handling logic in application (say, badly handled thrown exception)
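
As a concrete sketch of those two changes (the prefix format is just an
example):

  # postgresql.conf
  log_line_prefix = '%t [%p] '   # timestamp plus pid on every log line
  log_statement = 'all'          # log every statement (verbose!)

  -- then reload without a restart:
  SELECT pg_reload_conf();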

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Lock problem

2014-04-02 Thread Victor Sterpu



-- Original Message --
From: "Igor Neyman" 
To: "Victor Sterpu" ; "pgsql-general@postgresql.org" 


Sent: 4/2/2014 6:29:17 PM
Subject: RE: [GENERAL] Lock problem




From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Victor Sterpu

Sent: Wednesday, April 02, 2014 11:19 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Lock problem

Hello

I have a problem that seems to be very hard to debug.
Problem is from some postgresql locks. I use PostgreSQL 9.1.8.

I ran this query to find the locks:
SELECT bl.pid AS blocked_pid,
   a.usename AS blocked_user,
   kl.pid AS blocking_pid,
   ka.usename AS blocking_user,
   a.current_query AS blocked_statement
FROM pg_catalog.pg_locks bl
JOIN pg_catalog.pg_stat_activity a ON a.procpid = bl.pid
JOIN pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid AND 
kl.pid != bl.pid

JOIN pg_catalog.pg_stat_activity ka ON ka.procpid = kl.pid
WHERE NOT bl.granted;
The result is a recursive lock.
Pid 10665 is blocked by pid 9844 and pid 9844 is blocked by 10665.
These 2 inserts are in 2 separate transactions.
Can this be a postgresql bug?

blocked_pid
blocked_user
blocking_statement
blocking_duration
blocking_pid
blocking_user
blocked_statement
blocked_duration
10665
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, 
now(), now(), NULL, null, null, NULL, NULL )

00:47:33.995919
9844
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 
10,17,0, now(), now(), NULL, null, null, NULL, NULL )

00:37:36.175607
9844
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 
10,17,0, now(), now(), NULL, null, null, NULL, NULL )

00:37:36.175607
10665
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, 
now(), now(), NULL, null, null, NULL, NULL )

00:47:33.995919
10665
postgres
<IDLE> in transaction
00:55:42.876538
9830
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 
10,17,0, now(), now(), NULL, null, null, NULL, NULL )

00:37:36.175607
10680
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 
10,17,0, now(), now(), NULL, null, null, NULL, NULL )

00:37:36.175607
10665
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 
10,17,0, now(), now(), NULL, null, null, NULL, NULL )

00:31:47.211123
9844
postgres
<IDLE> in transaction
00:55:42.876538
9830
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, 
now(), now(), NULL, null, null, NULL, NULL )

00:47:33.995919
10706
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, 
now(), now(), NULL, null, null, NULL, NULL )

00:47:33.995919
9844
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, 
now(), now(), NULL, null, null, NULL, NULL )

00:18:45.763758

I never use the LOCK command in my application.
All locks are made by postgresql.
I use transactional support a lot.
Can someone give some advice about how I can prevent this locking?

Thank you.


So, did you check (in pg_stat_activity) what pid 9830 is doing? It 
looks like this session is holding up the other sessions.

I don't see a "recursive lock" in your query output.

Regards,
Igor Neyman


No, I didn't look.
I will next time I have the problem.

Thank you.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Lock problem

2014-04-02 Thread Victor Sterpu



-- Original Message --
From: "Merlin Moncure" 
To: "Victor Sterpu" 
Cc: "PostgreSQL General" 
Sent: 4/2/2014 6:49:28 PM
Subject: Re: [GENERAL] Lock problem


On Wed, Apr 2, 2014 at 10:19 AM, Victor Sterpu  wrote:


 Hello

 I have a problem that seems to be very hard to debug.
 Problem is from some postgresql locks. I use PostgreSQL 9.1.8.

 I ran this query to find the locks:

 SELECT bl.pid AS blocked_pid,
a.usename AS blocked_user,
kl.pid AS blocking_pid,
ka.usename AS blocking_user,
a.current_query AS blocked_statement
 FROM pg_catalog.pg_locks bl
 JOIN pg_catalog.pg_stat_activity a ON a.procpid = bl.pid
 JOIN pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid 
AND kl.pid != bl.pid

 JOIN pg_catalog.pg_stat_activity ka ON ka.procpid = kl.pid
 WHERE NOT bl.granted;

 The result is a recursive lock.
 Pid 10665 is blocked by pid 9844 and pid 9844 is blocked by 10665.
 These 2 inserts are in 2 separate transactions.
 Can this be a postgresql bug?

 blocked_pid blocked_user blocking_statement blocking_duration 
blocking_pid blocking_user blocked_statement blocked_duration



 10665 postgres <IDLE> in transaction


" in transaction" is a locking red flag. It means your
application has opened a transaction and is sitting there holding the
transaction open. This is a very common cause of subtle application
locking bugs. It can be legit if the application is doing heavy
processing during a transaction or you simply raced to an idle
transaction in pg_stat_activity, but in my experience 95%+ of the time
it means transaction leakage which in turn leads to locking problems.

merlin


This is weird because all of my transactions have a commit or rollback. I 
don't leave unterminated transactions.

I can't see how this is happening.
Is there a possibility that some network problem is generating this?
If that is the case, is there some server-side protection for this situation?

But why does an unterminated transaction block all table operations?



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Lock problem

2014-04-02 Thread Victor Sterpu



-- Original Message --
From: "Tom Lane" 
To: "Victor Sterpu" 
Cc: pgsql-general@postgresql.org
Sent: 4/2/2014 6:31:13 PM
Subject: Re: [GENERAL] Lock problem


"Victor Sterpu"  writes:

 I have a problem that seems to be very hard to debug.
 Problem is from some postgresql locks. I use PostgreSQL 9.1.8.


You haven't actually explained what your problem is.


 I ran this query to find the locks:
 SELECT bl.pid AS blocked_pid, a.usename AS blocked_user, kl.pid AS
 blocking_pid, ka.usename AS blocking_user, a.current_query AS
 blocked_statement FROM pg_catalog.pg_locks bl JOIN
 pg_catalog.pg_stat_activity a ON a.procpid = bl.pid JOIN
 pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid AND 
kl.pid

 != bl.pid JOIN pg_catalog.pg_stat_activity ka ON ka.procpid = kl.pid
 WHERE NOT bl.granted;


This query proves little. It might find two different transactions
waiting for the same transactionid, but it doesn't show that one is
waiting for the other. They could both be waiting for some third
transaction.

   regards, tom lane


The problem is that my application is hanging because of these locks and I 
can't pinpoint the cause.

What query would prove more?
All my transactions have a commit or rollback.
I don't know how to fix this or where to begin looking for the problem.

Thank you.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Lock problem

2014-04-02 Thread Merlin Moncure
On Wed, Apr 2, 2014 at 10:19 AM, Victor Sterpu  wrote:
>
> Hello
>
> I have a problem that seems to be very hard to debug.
> Problem is from some postgresql locks. I use PostgreSQL 9.1.8.
>
> I ran this query to find the locks:
>
> SELECT bl.pid  AS blocked_pid,
>a.usename   AS blocked_user,
>kl.pid  AS blocking_pid,
>ka.usename  AS blocking_user,
>a.current_query AS blocked_statement
> FROM pg_catalog.pg_locks bl
> JOIN pg_catalog.pg_stat_activity a  ON a.procpid = bl.pid
> JOIN pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid 
> AND kl.pid != bl.pid
> JOIN pg_catalog.pg_stat_activity ka ON ka.procpid = kl.pid
> WHERE NOT bl.granted;
>
> The result is a recursive lock.
> Pid 10665 is blocked by pid 9844 and pid 9844 is blocked by 10665.
> These 2 inserts are in 2 separate transactions.
> Can this be a postgresql bug?
>
> blocked_pid blocked_user blocking_statement blocking_duration blocking_pid 
> blocking_user blocked_statement blocked_duration

> 10665 postgres <IDLE> in transaction

" in transaction" is a locking red flag.  It means your
application has opened a transaction and is sitting there holding the
transaction open.  This is a very common cause of subtle application
locking bugs. It can be legit if the application is doing heavy
processing during a transaction or you simply raced to an idle
transaction in pg_stat_activity, but in my experience 95%+ of the time
it means transaction leakage which in turn leads to locking problems.
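
A sketch of how to spot those sessions on 9.1's catalog columns:

  SELECT procpid,
         now() - xact_start AS open_for,
         current_query
  FROM pg_stat_activity
  WHERE current_query = '<IDLE> in transaction'
  ORDER BY xact_start;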

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table Vacuum Taking a Long Time

2014-04-02 Thread Hannes Erven

Hi Eliot,


> If I continue to run vacuum
> analyze on it, it continues to take about 90 seconds each time.
> [..]
> I was under the impression that if nothing had been done to the table
> since it was last vacuumed, that it would return immediately.

in addition to what others already said, I'd expect that by specifying 
ANALYZE you are forcing PostgreSQL to do quite expensive additional work.


Is there a reason why you cannot (or do not want to) rely on 
autovacuum? Have you tried tuning its parameters?


Even if you must use manual VACUUMs, you can probably get away without 
ANALYZE.
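
If autovacuum's defaults are the issue, per-table storage parameters are 
one knob to try - a sketch with illustrative values and a made-up table 
name:

  ALTER TABLE some_big_table
      SET (autovacuum_vacuum_scale_factor = 0.02,
           autovacuum_analyze_scale_factor = 0.05);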



Best regards,

-hannes


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Lock problem

2014-04-02 Thread Tom Lane
"Victor Sterpu"  writes:
> I have a problem that seems to be very hard to debug.
> Problem is from some postgresql locks. I use PostgreSQL 9.1.8.

You haven't actually explained what your problem is.

> I ran this query to find the locks:
> SELECT bl.pid AS blocked_pid, a.usename AS blocked_user, kl.pid AS 
> blocking_pid, ka.usename AS blocking_user, a.current_query AS 
> blocked_statement FROM pg_catalog.pg_locks bl JOIN 
> pg_catalog.pg_stat_activity a ON a.procpid = bl.pid JOIN 
> pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid AND kl.pid 
> != bl.pid JOIN pg_catalog.pg_stat_activity ka ON ka.procpid = kl.pid 
> WHERE NOT bl.granted;

This query proves little.  It might find two different transactions
waiting for the same transactionid, but it doesn't show that one is
waiting for the other.  They could both be waiting for some third
transaction.
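
One way to tighten it - a sketch, building on the query above rather than 
anything from this mail - is to insist that the other session actually 
holds the transactionid lock instead of also waiting on it:

  SELECT bl.pid AS blocked_pid,
         a.current_query AS blocked_statement,
         kl.pid AS blocking_pid,
         ka.current_query AS blocking_statement
  FROM pg_catalog.pg_locks bl
  JOIN pg_catalog.pg_stat_activity a ON a.procpid = bl.pid
  JOIN pg_catalog.pg_locks kl
    ON kl.transactionid = bl.transactionid
   AND kl.pid != bl.pid
   AND kl.granted          -- the holder of the xid, not another waiter
  JOIN pg_catalog.pg_stat_activity ka ON ka.procpid = kl.pid
  WHERE NOT bl.granted;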

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Lock problem

2014-04-02 Thread Igor Neyman


From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Victor Sterpu
Sent: Wednesday, April 02, 2014 11:19 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Lock problem

Hello
 
I have a problem that seems to be very hard to debug.
Problem is from some postgresql locks. I use PostgreSQL 9.1.8.
 
I ran this query to find the locks:
SELECT bl.pid  AS blocked_pid,
   a.usename   AS blocked_user,
   kl.pid  AS blocking_pid,
   ka.usename  AS blocking_user,
   a.current_query AS blocked_statement
FROM pg_catalog.pg_locks bl
JOIN pg_catalog.pg_stat_activity a  ON a.procpid = bl.pid
JOIN pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid AND 
kl.pid != bl.pid
JOIN pg_catalog.pg_stat_activity ka ON ka.procpid = kl.pid
WHERE NOT bl.granted;
The result is a recursive lock.
Pid 10665 is blocked by pid 9844 and pid 9844 is blocked by 10665.
These 2 inserts are in 2 separate transactions.
Can this be a postgresql bug?
 
blocked_pid
blocked_user
blocking_statement
blocking_duration
blocking_pid
blocking_user
blocked_statement
blocked_duration
10665
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, now(), 
now(), NULL, null, null, NULL, NULL  )
00:47:33.995919
9844
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 10,17,0, now(), 
now(), NULL, null, null, NULL, NULL  )
00:37:36.175607
9844
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 10,17,0, now(), 
now(), NULL, null, null, NULL, NULL  )
00:37:36.175607
10665
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, now(), 
now(), NULL, null, null, NULL, NULL  )
00:47:33.995919
10665
postgres
<IDLE> in transaction
00:55:42.876538
9830
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 10,17,0, now(), 
now(), NULL, null, null, NULL, NULL  )
00:37:36.175607
10680
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 10,17,0, now(), 
now(), NULL, null, null, NULL, NULL  )
00:37:36.175607
10665
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 10,17,0, now(), 
now(), NULL, null, null, NULL, NULL  )
00:31:47.211123
9844
postgres
<IDLE> in transaction
00:55:42.876538
9830
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, now(), 
now(), NULL, null, null, NULL, NULL  )
00:47:33.995919
10706
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, now(), 
now(), NULL, null, null, NULL, NULL  )
00:47:33.995919
9844
postgres
INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, now(), 
now(), NULL, null, null, NULL, NULL  )
00:18:45.763758
 
I never use the LOCK command in my application.
All locks are made by postgresql.
I use transactional support a lot.
Can someone give some advice about how I can prevent this locking?
 
Thank you.
 

So, did you check (in pg_stat_activity) what pid 9830 is doing? It looks 
like this session is holding up the other sessions.
I don't see a "recursive lock" in your query output.

Regards,
Igor Neyman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Lock problem

2014-04-02 Thread Victor Sterpu

Hello

I have a problem that seems to be very hard to debug.
Problem is from some postgresql locks. I use PostgreSQL 9.1.8.

I ran this query to find the locks:
SELECT bl.pid AS blocked_pid, a.usename AS blocked_user, kl.pid AS 
blocking_pid, ka.usename AS blocking_user, a.current_query AS 
blocked_statement FROM pg_catalog.pg_locks bl JOIN 
pg_catalog.pg_stat_activity a ON a.procpid = bl.pid JOIN 
pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid AND kl.pid 
!= bl.pid JOIN pg_catalog.pg_stat_activity ka ON ka.procpid = kl.pid 
WHERE NOT bl.granted;

The result is a recursive lock.
Pid 10665 is blocked by pid 9844 and pid 9844 is blocked by 10665.
These 2 inserts are in 2 separate transactions.
Can this be a postgresql bug?

blocked_pid | blocked_user | blocking_statement | blocking_duration | blocking_pid | blocking_user | blocked_statement | blocked_duration
10665 | postgres | INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, now(), now(), NULL, null, null, NULL, NULL ) | 00:47:33.995919 | 9844 | postgres | INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 10,17,0, now(), now(), NULL, null, null, NULL, NULL ) | 00:37:36.175607
9844 | postgres | INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 10,17,0, now(), now(), NULL, null, null, NULL, NULL ) | 00:37:36.175607 | 10665 | postgres | INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, now(), now(), NULL, null, null, NULL, NULL ) | 00:47:33.995919
10665 | postgres | <IDLE> in transaction | 00:55:42.876538 | 9830 | postgres | INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 10,17,0, now(), now(), NULL, null, null, NULL, NULL ) | 00:37:36.175607
10680 | postgres | INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 10,17,0, now(), now(), NULL, null, null, NULL, NULL ) | 00:37:36.175607 | 10665 | postgres | INSERT INTO paraclinic_results VALUES (DEFAULT,1075033, 248.54, 10,17,0, now(), now(), NULL, null, null, NULL, NULL ) | 00:31:47.211123
9844 | postgres | <IDLE> in transaction | 00:55:42.876538 | 9830 | postgres | INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, now(), now(), NULL, null, null, NULL, NULL ) | 00:47:33.995919
10706 | postgres | INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, now(), now(), NULL, null, null, NULL, NULL ) | 00:47:33.995919 | 9844 | postgres | INSERT INTO paraclinic_results VALUES (DEFAULT,1074986, 10.4, 15,15,0, now(), now(), NULL, null, null, NULL, NULL ) | 00:18:45.763758


I never use the LOCK command in my application.
All locks are made by postgresql.
I use transactional support a lot.
Can someone give some advice about how I can prevent this locking?

Thank you.



Re: [GENERAL] Insert zero to auto increment serial column

2014-04-02 Thread Andrew Sullivan
On Wed, Apr 02, 2014 at 08:17:12AM -0400, loc wrote:
> value, MySQL also works this way.  With PostgreSQL I will need to do a lot
> of code modification to my Aubit4GL programs, since I will need to either
> insert with the key word default or omit the serial column in the insert
> statement.

Why don't you put in a DO INSTEAD trigger or rule (I'd suggest the
former) that does nextval() on the column's sequence when you insert
a 0?

Best regards,

A

-- 
Andrew Sullivan
a...@crankycanuck.ca


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table Vacuum Taking a Long Time

2014-04-02 Thread Alvaro Herrera
Eliot Gable wrote:
> I have a table which is about 12 GB in size. It has had a vacuum full
> analyze run on it, and then immediately after, I run vacuum analyze and it
> takes about 90 seconds to complete. If I continue to run vacuum analyze on
> it, it continues to take about 90 seconds each time. This system has a
> single 7200 RPM drive in it, so it's not a very fast drive. I was under the
> impression that if nothing had been done to the table since it was last
> vacuumed, that it would return immediately. Further, this is an append-only
> table, so why should it need to be vacuumed at all? We ran into cases where
> after writing to it long enough, the PGSQL autovacuum process would kick in
> and force a vacuum saying something about preventing wrap around. I don't
> understand why it would do this if it is append-only and we are using
> 64-bit sequences as IDs without OIDs turned on. What would be wrapping
> around without a vacuum? We tried to mitigate this by manually running
> vacuum programmatically, but then we end up using all the disk IO just
> running vacuum all the time, because it is constantly running through the
> entire table even though very little (if anything) has been done to it
> since the last vacuum.

Vacuuming a table requires first scanning the heap, then scanning each
indexes.  The heap scan can be optimized using the visibility map: pages
can be skipped if they are marked all-visible.  Vacuum will mark pages
all-visible, so a second vacuum immediately thereafter would need to
read few pages (and probably modify even fewer).  But the index scans
cannot be optimized in that fashion; indexes must be scanned completely
each time.  I would bet that that's where the time is going.  Also note 
that not all pages can be marked all-visible, and that pages are only 
skipped if enough of them are consecutive to make skipping worthwhile, 
considering the disk readahead done by the operating system.
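
VACUUM VERBOSE can confirm that breakdown - its INFO output includes 
per-index lines with CPU and elapsed times (a sketch; the table name is 
hypothetical):

  VACUUM VERBOSE big_append_only_table;
  -- compare the elapsed times reported for each index
  -- against the heap-scan portion of the output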

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table Vacuum Taking a Long Time

2014-04-02 Thread François Beausoleil
Hi!

Le 2014-04-02 à 10:26, Eliot Gable a écrit :

> I have a table which is about 12 GB in size. It has had a vacuum full analyze 
> run on it, and then immediately after, I run vacuum analyze and it takes 
> about 90 seconds to complete. If I continue to run vacuum analyze on it, it 
> continues to take about 90 seconds each time. This system has a single 7200 
> RPM drive in it, so it's not a very fast drive. I was under the impression 
> that if nothing had been done to the table since it was last vacuumed, that 
> it would return immediately. Further, this is an append-only table, so why 
> should it need to be vacuumed at all? We ran into cases where after writing 
> to it long enough, the PGSQL autovacuum process would kick in and force a 
> vacuum saying something about preventing wrap around. I don't understand why 
> it would do this if it is append-only and we are using 64-bit sequences as 
> IDs without OIDs turned on. What would be wrapping around without a vacuum? 
> We tried to mitigate this by manually running vacuum programmatically, but 
> then we end up using all the disk IO just running vacuum all the time, 
> because it is constantly running through the entire table even though very 
> little (if anything) has been done to it since the last vacuum.
> 
> Is this described behavior expected? If so, why?

You don't mention the version of PostgreSQL, but let me link you to this page 
in the manual:

23.1.5. Preventing Transaction ID Wraparound Failures

"""But since transaction IDs have limited size (32 bits) a cluster that runs 
for a long time (more than 4 billion transactions) would suffer transaction ID 
wraparound"""

http://www.postgresql.org/docs/9.3/static/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND

This is what the message in the log said.

Even though the table is append-only, old rows must still eventually 
receive FrozenXID, the transaction ID that is treated as older than all 
other transactions - and that requires a vacuum.
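
To see how close each table is to the anti-wraparound threshold (a sketch; 
autovacuum_freeze_max_age defaults to 200 million):

  SELECT relname, age(relfrozenxid) AS xid_age
  FROM pg_class
  WHERE relkind = 'r'
  ORDER BY age(relfrozenxid) DESC
  LIMIT 10;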

If you run two vacuums back to back, presumably the table is mostly in RAM and 
returns more quickly the next time around, hence the very different runtimes.

You say "if nothing had been done to the table": do you have a single very 
large append-only table in your cluster? If so, it might be better to split the 
table in multiple partitions, and then when a partition is untouched, the 
vacuum daemon should not touch the table.

Please provide your exact PostgreSQL version, RAM, disk and other details, 
relevant postgresql.conf parameters so that we may help more.

Cheers!
François Beausoleil
Seevibes





[GENERAL] Table Vacuum Taking a Long Time

2014-04-02 Thread Eliot Gable
I have a table which is about 12 GB in size. It has had a vacuum full
analyze run on it, and then immediately after, I run vacuum analyze and it
takes about 90 seconds to complete. If I continue to run vacuum analyze on
it, it continues to take about 90 seconds each time. This system has a
single 7200 RPM drive in it, so it's not a very fast drive. I was under the
impression that if nothing had been done to the table since it was last
vacuumed, that it would return immediately. Further, this is an append-only
table, so why should it need to be vacuumed at all? We ran into cases where
after writing to it long enough, the PGSQL autovacuum process would kick in
and force a vacuum saying something about preventing wrap around. I don't
understand why it would do this if it is append-only and we are using
64-bit sequences as IDs without OIDs turned on. What would be wrapping
around without a vacuum? We tried to mitigate this by manually running
vacuum programmatically, but then we end up using all the disk IO just
running vacuum all the time, because it is constantly running through the
entire table even though very little (if anything) has been done to it
since the last vacuum.

Is this described behavior expected? If so, why?

If it is not expected, what should I be looking for which might explain why
it is taking so long?

Thanks.

-- 
Eliot Gable

"We do not inherit the Earth from our ancestors: we borrow it from our
children." ~David Brower

"I decided the words were too conservative for me. We're not borrowing from
our children, we're stealing from them--and it's not even considered to be
a crime." ~David Brower

"Esse oportet ut vivas, non vivere ut edas." (Thou shouldst eat to live;
not live to eat.) ~Marcus Tullius Cicero


Re: [GENERAL] Insert zero to auto increment serial column

2014-04-02 Thread Shaun Thomas

On 04/02/2014 08:24 AM, Scott Marlowe wrote:


Triggers are almost always better here and really aren't that hard to
write.


Not only do I have to agree with this statement, I feel it's important 
to add a clarification regarding your original question.


Translating serial values submitted as 0 to something else will 
likely never be a PostgreSQL feature, as it is probably a violation of 
the SQL standard. Databases are meant to store exactly what you submit, 
barring any modifications by stored procedure or trigger.


If a database engine takes it upon itself to modify your data, it is no 
longer a database, but an application that mimics one.


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com

__

See http://www.peak6.com/email_disclaimer/ for terms and conditions related to 
this email


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Insert zero to auto increment serial column

2014-04-02 Thread David Johnston
loc wrote
> Setting the serial column to null to auto increment would also
> work for me.

Can you set it to the literal keyword DEFAULT?  That only helps for the
insert case (not copy), but it is the mechanism used to specify a column
and ask for its default.
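
That is, something along these lines, using the table from the original
post:

  INSERT INTO table_name VALUES (DEFAULT, 2, 3);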

David J.




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Insert-zero-to-auto-increment-serial-column-tp5798318p5798336.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Insert zero to auto increment serial column

2014-04-02 Thread Adrian Klaver

On 04/02/2014 05:17 AM, loc wrote:

I'm currently using an Informix Innovator-C database with Aubit4GL and I
would like to migrate to PostgreSQL, it looks like the transition will
not be too difficult, however there is one feature that I would like
added to PostgreSQL.  Where is the best place to request a feature add?
With Informix inserting a 0 into a serial column will auto increment the
serial value, MySQL also works this way.  With PostgreSQL I will need to
do a lot of code modification to my Aubit4GL programs, since I will need
to either insert with the key word default or omit the serial column in
the insert statement.  A typical insert with Aubit4GL looks like this:

create table table_name (link serial, col2 integer, col3 integer )
define var_rec record like table_name.*
let table_name.link = 0
insert into table_name values(var_rec.*)

As you can see, with Informix the serial column is set to 0 before the
insert, with PostgreSQL I will need to list all the columns and will not
be able to use the wildcard syntax, which supplies all the column names
to the insert.  Setting the serial column to null to auto increment
would also work for me.


Looks like Aubit4GL has provisions for doing the conversions behind the 
scenes:


http://aubit4gl.sourceforge.net/aubit4gldoc/manual/index.html

and for a more detailed example:

http://openssa.sourceforge.net/aubit4gl.html

Might be worth asking for more info on the Aubit4GL list:

https://lists.sourceforge.net/lists/listinfo/aubit4gl-discuss







--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Insert zero to auto increment serial column

2014-04-02 Thread Scott Marlowe
On Wed, Apr 2, 2014 at 6:53 AM, Jayadevan M  wrote:
> Will a rule work?
> http://www.postgresql.org/docs/9.3/static/sql-createrule.html

There are a couple of issues you face if you use a rule, copy commands
ignore rules, and rules are slower.

Triggers are almost always better here and really aren't that hard to write.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Insert zero to auto increment serial column

2014-04-02 Thread Jayadevan M
Will a rule work?
http://www.postgresql.org/docs/9.3/static/sql-createrule.html


On Wed, Apr 2, 2014 at 5:47 PM, loc  wrote:

> I'm currently using an Informix Innovator-C database with Aubit4GL and I
> would like to migrate to PostgreSQL, it looks like the transition will not
> be too difficult, however there is one feature that I would like added to
> PostgreSQL.  Where is the best place to request a feature add?  With
> Informix inserting a 0 into a serial column will auto increment the serial
> value, MySQL also works this way.  With PostgreSQL I will need to do a lot
> of code modification to my Aubit4GL programs, since I will need to either
> insert with the key word default or omit the serial column in the insert
> statement.  A typical insert with Aubit4GL looks like this:
>
> create table table_name (link serial, col2 integer, col3 integer )
> define var_rec record like table_name.*
> let table_name.link = 0
> insert into table_name values(var_rec.*)
>
> As you can see, with Informix the serial column is set to 0 before the
> insert, with PostgreSQL I will need to list all the columns and will not be
> able to use the wildcard syntax, which supplies all the column names to the
> insert.  Setting the serial column to null to auto increment would also
> work for me.
>
>
>


Re: [GENERAL] Insert zero to auto increment serial column

2014-04-02 Thread Albe Laurenz
loc wrote:
> I'm currently using an Informix Innovator-C database with Aubit4GL and I 
> would like to migrate to
> PostgreSQL, it looks like the transition will not be too difficult, however 
> there is one feature that
> I would like added to PostgreSQL.  Where is the best place to request a 
> feature add?  With Informix
> inserting a 0 into a serial column will auto increment the serial value, 
> MySQL also works this way.
> With PostgreSQL I will need to do a lot of code modification to my Aubit4GL 
> programs, since I will
> need to either insert with the key word default or omit the serial column in 
> the insert statement.  A
> typical insert with Aubit4GL looks like this:
> 
> create table table_name (link serial, col2 integer, col3 integer )
> define var_rec record like table_name.*
> let table_name.link = 0
> insert into table_name values(var_rec.*)
> 
> 
> As you can see, with Informix the serial column is set to 0 before the 
> insert, with PostgreSQL I will
> need to list all the columns and will not be able to use the wildcard syntax, 
> which supplies all the
> column names to the insert.  Setting the serial column to null to auto 
> increment would also work for
> me.

If you don't want to modify your code, you could write a FOR EACH ROW BEFORE 
INSERT trigger
for the table that pulls the next value from the sequence if you insert 0.
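
Something along these lines - a sketch only, with made-up function and
trigger names, assuming the table and column from your example:

  CREATE OR REPLACE FUNCTION zero_means_default() RETURNS trigger AS $$
  BEGIN
      IF NEW.link IS NULL OR NEW.link = 0 THEN
          -- pull the next value from the column's own sequence
          NEW.link := nextval(pg_get_serial_sequence('table_name', 'link'));
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER zero_means_default
      BEFORE INSERT ON table_name
      FOR EACH ROW EXECUTE PROCEDURE zero_means_default();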

I don't think that there is any chance that PostgreSQL's behaviour in this
case will be modified - first, the current behaviour is fine, and second,
such a change will break existing applications.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Insert zero to auto increment serial column

2014-04-02 Thread loc
I'm currently using an Informix Innovator-C database with Aubit4GL and I
would like to migrate to PostgreSQL, it looks like the transition will not
be too difficult, however there is one feature that I would like added to
PostgreSQL.  Where is the best place to request a feature add?  With
Informix inserting a 0 into a serial column will auto increment the serial
value, MySQL also works this way.  With PostgreSQL I will need to do a lot
of code modification to my Aubit4GL programs, since I will need to either
insert with the key word default or omit the serial column in the insert
statement.  A typical insert with Aubit4GL looks like this:

create table table_name (link serial, col2 integer, col3 integer )
define var_rec record like table_name.*
let table_name.link = 0
insert into table_name values(var_rec.*)

As you can see, with Informix the serial column is set to 0 before the
insert, with PostgreSQL I will need to list all the columns and will not be
able to use the wildcard syntax, which supplies all the column names to the
insert.  Setting the serial column to null to auto increment would also
work for me.