Re: [GENERAL] archive_command not being executed

2017-11-14 Thread Eric D
Hi Jeremy,


Thanks for the info on 9.3 vs 9.5.  While looking into Paul's suggestion I'd 
seen the same thing, but I didn't "reply all", so my response didn't make it into 
the mailing list.


Regarding your question: as I understand the process, you need the WAL files 
being shipped over to the standby before you start streaming replication. 
 The sequence of events, as I understand it, is:


  1.  Start WAL file shipping from master->standby
  2.  Run pg_basebackup from master->standby
  3.  With a recovery.conf in place, start the standby.  The recovery.conf 
needs the connection info for the master for streaming replication, but it 
also needs the location of the archived WAL files (restore_command), so the 
standby can replay what happened while the pg_basebackup was running (see the 
sketch below).

I don't think streaming replication alone has a way to pick up the activity 
covered by those archived WAL files.
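
For what it's worth, here is a rough sketch of what I mean for step 3 -- the 
hostnames, user, and paths below are made up for illustration, not taken from 
this thread:

# recovery.conf on the standby (9.3), in place before it is started in step 3
standby_mode = 'on'
# connection info for streaming replication from the master (values are assumptions)
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
# where the WAL files shipped in step 1 land, so the standby can replay them
restore_command = 'cp /backup/WAL_ARCHIVE/auto/%f %p'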



From: Jeremy Schneider <schnei...@ardentperf.com>
Sent: Monday, November 13, 2017 3:56 PM
To: eric...@hotmail.com
Cc: PostgreSQL General; Paul Jungwirth
Subject: Re: [GENERAL] archive_command not being executed

Hi Eric,

Thanks for using PostgreSQL!

On Fri, Nov 10, 2017 at 9:26 AM, Paul Jungwirth
<p...@illuminatedcomputing.com> wrote:
> Oh this has happened to me before. :-) On SB1 you need to set
> archive_mode to always (not on). Otherwise it is ignored when running as a
> standby.

It looks to me like this feature was not added until 9.5 and Eric is
running 9.3  :(
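
For reference, on 9.5 and later the standby-side archiving Paul describes is 
just a two-line change -- a generic sketch, not Eric's actual config:

# postgresql.conf on the standby, 9.5+ only (the script path is an assumption)
archive_mode = always
archive_command = '/usr/local/bin/archive-wal.sh %p %f'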

> On 11/10/2017 09:10 AM, Eric D wrote:
>> I have a standby db server (SB1) that will soon become the master.  SB1
>> is set up with streaming replication from the current master.  I'm
>> trying to set up a third server (SB2) as a slave/standby to SB1, so that
>> when SB1 becomes the master, there will be a standby for it.  First step
>> is to get WAL files shipped from SB1->SB2.

Eric,

From my reading of the docs and commit logs, standby databases
couldn't archive their WALs until 9.5.

https://www.postgresql.org/message-id/cankgpbs7qgakgq-opzy0esam6+wue5mgpyehcgo_eoq7tjv...@mail.gmail.com
http://paquier.xyz/postgresql-2/postgres-9-5-feature-highlight-archive-mode-always/




There is a section in the 9.3 docs about cascading replication:
https://www.postgresql.org/docs/9.3/static/warm-standby.html#CASCADING-REPLICATION




It seems to me this is based on streaming replication only.  Looks
like it's not designed (in 9.3) to start archiving until it becomes a
primary.  "You will also need to set primary_conninfo in the
downstream standby to point to the cascading standby."

Are you able to configure a cascading replica by using streaming
replication on your 9.3 system, without WAL archiving on the standby?
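
If so, the downstream side would only need something like this -- a sketch 
with an invented hostname, not your actual settings:

# recovery.conf on SB2, cascading from SB1 over streaming replication only
standby_mode = 'on'
primary_conninfo = 'host=sb1.example.com port=5432 user=replicator'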

-Jeremy

--
http://about.me/jeremy_schneider





Re: [GENERAL] archive_command not being executed

2017-11-13 Thread Eric D
I'd come to the same conclusion Jeremy had about 9.5 being the first version 
with the 'always' option for archive_mode.   Looking at pg_receivexlog, that 
might work, but with me being a total noob I'm wary of the various steps I'd 
have to take in going from:


Master -> streaming replication to -> SB1 -> pg_receivexlog to -> SB2


to:


New Master (Old SB1) -> streaming replication to -> SB2


And whether or not the conversion from pg_receivexlog to normal streaming 
replication would maintain data integrity.  I need to skew this towards 
simplicity or I'll likely screw it up.


My current thought is to cut off master, promote SB1, set up WAL file shipping 
to SB2, start a pg_basebackup, make SB1 live, then run for a couple days with 
no backup as the pg_basebackup runs.   Far from ideal but at least I have gone 
through most of this before.



From: Michael Paquier <michael.paqu...@gmail.com>
Sent: Monday, November 13, 2017 6:01 PM
To: Jeremy Schneider
Cc: eric...@hotmail.com; PostgreSQL General; Paul Jungwirth
Subject: Re: [GENERAL] archive_command not being executed

On Tue, Nov 14, 2017 at 8:56 AM, Jeremy Schneider
<schnei...@ardentperf.com> wrote:
> From my reading of the docs and commit logs, standby databases
> couldn't archive their WALs until 9.5.

pg_receivexlog is available in 9.3. You could use it to build your archives
easily, for example by connecting it to the standby you'd like to get
the WAL from.
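
A rough sketch of that (hostname, user, and target directory are made up):

# run on SB2: streams WAL from SB1 over a replication connection into the
# archive directory, instead of relying on archive_command on the standby
pg_receivexlog -h sb1.example.com -U replicator \
    -D /backup/WAL_ARCHIVE/auto --status-interval=10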
--
Michael


[GENERAL] archive_command not being executed

2017-11-10 Thread Eric D
I have a standby db server (SB1) that will soon become the master.  SB1 is set 
up with streaming replication from the current master.  I'm trying to set up a 
third server (SB2) as a slave/standby to SB1, so that when SB1 becomes the 
master, there will be a standby for it.  First step is to get WAL files shipped 
from SB1->SB2.


Problem is I can't get the archive_command of SB1 to execute.  The same 
archive_command on our current master calls into a script that scp's the WAL 
files to SB1.  This works.  I've also confirmed the script works from SB1->SB2 
when run from the command line, as the postgres user, w/o a password request.


Furthermore, on SB1 I replaced the call to the script with a touch command, to 
see if archive_command was getting called at all:


# This script does the scp of the WAL file to the designated server/directory
#archive_command = '/opt/printra/sbin/handle_wal_files.sh %p db-backup 
/backup/WAL_ARCHIVE/auto'
archive_command = 'touch /var/lib/pgsql/testtouch'


and the testtouch file did not appear.


Here are some relevant settings from SB1:


postgres=# select * from pg_settings where name like 'archive%';
-[ RECORD 1 ]--------------------------------------------------------------------------
name       | archive_command
setting    | touch /var/lib/pgsql/testtouch
unit       |
category   | Write-Ahead Log / Archiving
short_desc | Sets the shell command that will be called to archive a WAL file.
extra_desc |
context    | sighup
vartype    | string
source     | configuration file
min_val    |
max_val    |
enumvals   |
boot_val   |
reset_val  | touch /var/lib/pgsql/testtouch
sourcefile | /var/lib/pgsql/9.3/data/postgresql.conf
sourceline | 607
-[ RECORD 2 ]--------------------------------------------------------------------------
name       | archive_mode
setting    | on
unit       |
category   | Write-Ahead Log / Archiving
short_desc | Allows archiving of WAL files using archive_command.
extra_desc |
context    | postmaster
vartype    | bool
source     | configuration file
min_val    |
max_val    |
enumvals   |
boot_val   | off
reset_val  | on
sourcefile | /var/lib/pgsql/9.3/data/postgresql.conf
sourceline | 604
-[ RECORD 3 ]--------------------------------------------------------------------------
name       | archive_timeout
setting    | 0
unit       | s
category   | Write-Ahead Log / Archiving
short_desc | Forces a switch to the next xlog file if a new file has not been started within N seconds.
extra_desc |
context    | sighup
vartype    | integer
source     | default
min_val    | 0
max_val    | 1073741823
enumvals   |
boot_val   | 0
reset_val  | 0
sourcefile |
sourceline |


postgres=# select version();
-[ RECORD 1 ]-------------------------------------------------------------------------------------
version | PostgreSQL 9.3.19 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit


I've bumped up logging to debug1 and have not seen anything that looks related 
to the archive_command.  Bumping up to debug2 was really noisy and didn't seem 
to yield any useful info either.  Any ideas?
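
In case it helps anyone reading along, the quickest sanity checks on SB1 are 
something like the following (generic checks, nothing specific to this setup); 
as the replies above explain, the root cause turns out to be that a 9.3 
standby never starts the archiver at all:

-- run on SB1: while this returns true, a 9.3 server never starts the archiver,
-- so archive_command is never called no matter what it is set to
SELECT pg_is_in_recovery();

-- confirm what the running server actually has loaded
SELECT name, setting FROM pg_settings WHERE name IN ('archive_mode', 'archive_command');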


Re: [GENERAL] Aquameta 0.1 - Request for reviews, contributors

2017-09-12 Thread Eric Hanson
Thanks Nico.  I definitely like this syntax better.

--
Eric Hanson
CEO, Aquameta Labs
503-929-1073
www.aquameta.com


On Fri, Sep 8, 2017 at 4:26 PM, Nico Williams <n...@cryptonector.com> wrote:

>
> Here's a review comment.  Just one for now.
>
> Looking at the meta module, I see things like this:
>
> execute 'select (count(*) = 1) from ' || 
> quote_ident((row_id::meta.schema_id).name)
> || '.' || quote_ident((row_id::meta.relation_id).name) ||
> ' where ' || quote_ident((row_id.pk_column_id).name) || '
> = ' || quote_literal(row_id.pk_value)
> into answer;
>
> I recently learned what I find to be a better idiom:
>
> execute format(
>     $q$
>     select exists (select *
>                    from %1$I.%2$I
>                    where %3$I = %4$L);
>     $q$,
>     -- interpolated arguments here
>     (row_id::meta.schema_id).name, (row_id::meta.relation_id).name,
>     (row_id.pk_column_id).name, row_id.pk_value)
>   into answer;
>
> That is, PostgreSQL has extended string literal syntax where you can use
> $stuff$ instead of single-quotes, and that makes it much easier to write
> dynamic (generated for EXECUTE) SQL.  In particular, because your
> $EDITOR [generally] won't recognize this, syntax highlighting for the
> $quoted$ code will work as expected!
>
> This is better not only because it's more concise, easier to line-wrap,
> and easier on the eyes, but also because you get to use format().  I
> suspect using format() makes it harder to forget to quote something
> appropriately -- harder to accidentally create a SQL injection
> vulnerability.  I usually use argument numbering (%1$I) instead of
> referring to them positionally (%I, %L, %s) because it helps a lot
> whenever I need to refer to one of them multiple times.
>
> Of course, this is just a matter of style, but I strongly feel that this
> is the superior style (at least until I find or stumble into a better one),
> especially when you have several layers of trigger functions creating
> more trigger functions, as you can easily nest $foo$-quoted string
> literals by having different quote forms for each level.
>
> Also, I used exists() instead of count(*) = 1 -- that's just my personal
> preference, and a less defensible style matter (it is more verbose...).
>
> Nico
> --
>


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-12 Thread Eric Lemoine

> Another idea, if you haven't tried it already, is to run these test cases
> in a server built with --enable-debug and --enable-cassert.  The memory
> clobber stuff that's enabled by the latter is very good at turning coding
> errors into reproducible, debuggable crashes ;-)
> 
>   regards, tom lane


We found the cause of the issue! Both the PostGIS and the Pointcloud
extensions define the hexbytes_from_bytes function, and the PostGIS
version is faster. The fix involves prefixing the function name in
Pointcloud, and using an implementation similar to the one in PostGIS [*].

Thanks a lot for helping me fix that issue.

[*] 


-- 
Éric Lemoine
Oslandia



signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-09 Thread Eric Lemoine
On 06/08/2017 10:41 PM, Éric wrote:
> 
> 
> 
>> Have you experimented with other queries that don't involve PostGIS?
>> I'm wondering if your hook-installation code fails to work properly
>> unless PostGIS was loaded first.  This would be easier to credit if
>> there are hooks both extensions try to get into.
> 
> 
> I think you're right on Tom. It looks like I cannot reproduce the issue if I 
> start by calling a PostGIS function rather than a Pointcloud function. So it 
> may well be a conflict between PostGIS and Pointcloud. Both use fn_extra, and 
> that makes me wonder. This old thread [*] makes me wonder too! I still need 
> to figure out the bug, but I can see some light now! thanks
> 
> [*] 
> 


I now think that the performance bug is not related to the fn_extra
thing. I had hope but not anymore :) I don't see where the Pointcloud
and PostGIS extensions could conflict.

-- 
Éric Lemoine
Oslandia
+33 1 86 95 95 55

signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Eric Lemoine
On 06/08/2017 07:27 PM, Moreno Andreo wrote:
> Il 08/06/2017 19:10, Eric Lemoine ha scritto:
>>
>> How can such a thing happen? Thanks for any insight on what could cause
>> this.
>>
>>
> I'd try raising shared_buffers to 1 GB or something near 40% of the
> available memory

I tried to make it 4G, but it does not make a difference. My machine has
16G of RAM.


> 
> If you run the query again, after getting bad results, what do you get?

Always bad results.

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# select pc_typmod_pcid(1);
 pc_typmod_pcid
----------------
  1
(1 row)

Time: 4.887 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));


Time: 3522.135 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 3395.672 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));

Time: 3454.466 ms



The initial "select pc_typmod_pcid(1)" query completely screws the
connection.

"select pc_typmod_pcid(1)" is just an example of a simple query that
triggers the problem. There are many others. But it has to be a query
using the Pointcloud extension.

I have no problem if I start with the main query (my "select points
from" query of interest). And running the "select pc_typmod_pcid(1)"
query in the middle does not cause any problem. It has to be run first
on the connection to do the harm. See below.

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 280.117 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 210.080 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 233.095 ms
lopocs=# select pc_typmod_pcid(1);
 pc_typmod_pcid
----------------
  1
(1 row)

Time: 0.686 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 199.150 ms




-- 
Éric Lemoine
Oslandia

signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Eric Lemoine

> Note that the execution time is 46 ms when the query is wrapped in an
> explain analyze (while it's 3 s when it's not!)


Actually, it seems to me that the performance issue is not on the query
itself, it is on the fetching of the data returned by the query. Which
explains why the query is fast when executed in an explain analyze. I've
observed this by using a cursor.

The query returns 2506 rows. I use a cursor to fetch the resulting rows
500 by 500. The fetching of 500 rows (fetch 500 from c) takes about 50
ms in the good/normal case, i.e. when the "select pc_typmod_pcid(1)" is
not executed first. While it takes around 600 ms in the pathological case!

Below is the full test case.


Good case:

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# begin;
BEGIN
Time: 0.373 ms
lopocs=# declare c cursor for select points from public.sthelens where
pc_intersects(points, st_geomfromtext('polygon ((-2357334.41980829
-3742654.00016992, -2356120.91980829 -3742654.00016992,
-2356120.91980829 -3741278.00016992, -2357334.41980829
-3741278.00016992, -2357334.41980829 -3742654.00016992))', 4978));
DECLARE CURSOR
Time: 75.976 ms
lopocs=# fetch 500 from c;


Time: 44.648 ms
lopocs=# fetch 500 from c;
Time: 40.693 ms
lopocs=# fetch 500 from c;
Time: 45.218 ms


Bad case:

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# select pc_typmod_pcid(1);  -- that screws up everything
 pc_typmod_pcid
----------------
  1
(1 row)

Time: 5.702 ms
lopocs=# begin;
BEGIN
Time: 0.234 ms
lopocs=# declare c cursor for select points from public.sthelens where
pc_intersects(points, st_geomfromtext('polygon ((-2357334.41980829
-3742654.00016992, -2356120.91980829 -3742654.00016992,
-2356120.91980829 -3741278.00016992,
-2357334.41980829 -3741278.00016992, -2357334.41980829
-3742654.00016992))', 4978));
DECLARE CURSOR
Time: 76.806 ms
lopocs=# fetch 500 from c;
Time: 669.834 ms
lopocs=# fetch 500 from c;
Time: 652.738 ms
lopocs=# fetch 500 from c;
Time: 604.293 ms




How can such a thing happen? Thanks for any insight on what could cause
this.


-- 
Éric Lemoine
Oslandia
+33 1 86 95 95 55

signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Eric Lemoine

> Lots of missing information here ...
> 
> Is there an index on public.sthelens.points?

Yes, there are.

lopocs=# \d sthelens;
   Table "public.sthelens"
 Column |    Type    |                       Modifiers
--------+------------+--------------------------------------------------------
 id     | integer    | not null default nextval('sthelens_id_seq'::regclass)
 points | pcpatch(2) |
 morton | bigint     |
Indexes:
"sthelens_pkey" PRIMARY KEY, btree (id)
"sthelens_pc_envelopegeometry_idx" gist (pc_envelopegeometry(points))

So two indices, one for the primary key, and a Postgis index on the
Postgis geometry returned by the Pointcloud pc_envelopegeometry function.


> How many rows are in that table?

30971



> What are your shared_buffers settings?

128 MB (Debian unstable)


> How much RAM does the server have?

16 GB

> What does EXPLAIN look like for that query? How large (in bytes) are the
> tables in question?


 QUERY PLAN
----------------------------------------------------------------------------------------
 Bitmap Heap Scan on sthelens  (cost=383.26..2496.67 rows=2065 width=32)
(actual time=3.213..46.674 rows=2506 loops=1)
   Recheck Cond:
('0103207213010005002E47BC352BFC41C164910500DF8D4CC12E47BC75CCF941C164910500DF8D4CC12E47BC75CCF941C1649105002F8B4CC12E47BC352BFC41C1649105002F8B4CC12E47BC352BFC41C164910500DF8D4CC1'::geometry
&& st_geomfromewkb(pc_envelopeasbinary(points)))
   Filter:
_st_intersects('0103207213010005002E47BC352BFC41C164910500DF8D4CC12E47BC75CCF941C164910500DF8D4CC12E47BC75CCF941C1649105002F8B4CC12E47BC352BFC41C1649105002F8B4CC12E47BC352BFC41C164910500DF8D4CC1'::geometry,
st_geomfromewkb(pc_envelopeasbinary(points)))
   Heap Blocks: exact=36
   ->  Bitmap Index Scan on sthelens_pc_envelopegeometry_idx
(cost=0.00..382.75 rows=6196 width=0) (actual time=1.626..1.626
rows=2506 loops=1)
 Index Cond:
('0103207213010005002E47BC352BFC41C164910500DF8D4CC12E47BC75CCF941C164910500DF8D4CC12E47BC75CCF941C1649105002F8B4CC12E47BC352BFC41C1649105002F8B4CC12E47BC352BFC41C164910500DF8D4CC1'::geometry
&& st_geomfromewkb(pc_envelopeasbinary(points)))
 Planning time: 0.525 ms
 Execution time: 46.999 ms
(8 rows)


Note that the execution time is 46 ms when the query is wrapped in an
explain analyze (while it's 3 s when it's not!)


> What does pc_typmod_pcid() actually do?

It is one of the simplest functions of Pointcloud.


See

and
.


> 
> There are probably lots of other questions I could ask, but those questions
> are based on the fact that this _looks_ like a classic cache blowout. I.e.,
> the query runs quickly when all the related data is in RAM, but is
> significantly slower when the data has to be pulled from disk. Answering
> the questions above will likely help to determine if my guess is correct.


I don't know. The query is fast if I run it first on the database
connection. And it is *always* very slow after the "select
pc_typmod_pcid(1)" query has run.


> 
> If my guess is correct, there are any number of potential ways to improve
> things: Add RAM to the machine, enlarge shared_buffers, put a geo index on
> public.sthelens.points so it doesn't have to scan the entire table; as a
> few examples.


It sounds like a performance issue to you, while it sounds like a bug to
me :)

> 
> Understanding what pc_typmod_pcid() actually does would help, but even 
> without that you can test things in a few ways. One would be to substitute
> a different query in your testing for select pc_typmod_pcid(1) that is
> known to push the contents of public.sthelens out of memory and see if
> the behavior is similar. Any count(*) query on some other large table
> would probably suffice. A better way would probably be to install the
> pg_buffercache module and see what's actually in the cache at each step
> of the testing process.


I'll try to use pg_buffercache.



Thanks a lot for your response. That at least gives me courage in
debugging this :)


-- 
Éric Lemoine
Oslandia
+33 1 86 95 95 55

signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Eric Lemoine

> Looks like you also have postgis and pointcloud_postgis in mix. I would
> say this may get an answer sooner here:
> 
> http://lists.osgeo.org/mailman/listinfo/pgpointcloud/

I am actually one of the developers of the Pointcloud extension. I
haven't been able to debug this up to now.


-- 
Éric Lemoine
Oslandia

signature.asc
Description: OpenPGP digital signature


[GENERAL] Performance issue with Pointcloud extension

2017-06-08 Thread Eric Lemoine
Hi

We have a rather strange performance issue with the Pointcloud extension
[*]. The issue/bug may be in the extension, but we don't know for sure
at this point. I'm writing to the list to hopefully get some guidance on
how to further debug this.

[*] 

A query takes around 250 ms when executed first on a database
connection. But it takes like 3 s when executed after a first very
simple Pointcloud query.

Below is a test-case with psql.

Case #1 (works normally):

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 236.423 ms


Case #2 (works abnormally):

psql (9.6.3)
Type "help" for help.

lopocs=# \timing
Timing is on.
lopocs=# select pc_typmod_pcid(1);
 pc_typmod_pcid
----------------
  1
(1 row)

Time: 4.917 ms
lopocs=# select points from public.sthelens where pc_intersects(points,
st_geomfromtext('polygon ((-2357334.41980829 -3742654.00016992,
-2356120.91980829 -3742654.00016992, -2356120.91980829
-3741278.00016992, -2357334.41980829 -3741278.00016992,
-2357334.41980829 -3742654.00016992))', 4978));
Time: 2987.491 ms


The query takes 236 ms in case #1, and 2987 ms in case #2! Huge difference.

Does anyone have any idea where this performance drop may come from? The
problem may be in the Pointcloud extension, but I have no idea
where the bug may be.

Any idea? Any suggestion on how to debug this? This has been driving us
crazy for some time now.

Thanks.


-- 
Éric Lemoine
Oslandia

signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] storing large files in database - performance

2017-05-19 Thread Eric Hill
I am pleased to report that with Merlin's suggestion of using the 
pg-large-object middleware, I have a test case now showing that I can write a 
25MB buffer from Node.js to Postgres in roughly 700 milliseconds.  Here is the 
JavaScript code, which is nearly verbatim from the example in the 
pg-large-object doc:

packages.testLargeObjects = function(callback) {
   var pgp = require('pg-promise')();
   var LargeObjectManager = require('pg-large-object').LargeObjectManager;
   var PassThrough = require('stream').PassThrough;

   var bufSize = 1024 * 1024 * 25;
   var buf = new Buffer(bufSize);
   buf.fill("pgrocks");

   var connInfo = {
  host:   'localhost',
  port:   5432,
  database:'mydb',
      user:   'postgres',
  password:'secret'
   };

   var db = pgp(connInfo);

   db.tx(function(tx) {
  const lObjMgr = new LargeObjectManager({pgPromise: tx});
  const bufferSize = 16384;

  return lObjMgr.createAndWritableStreamAsync(bufferSize)
 .then( ([oid, stream]) => {
let bufferStream = new PassThrough();
bufferStream.end(buf);
bufferStream.pipe(stream);
return new Promise(function(resolve, reject) {
   stream.on('finish', resolve);
   stream.on('error', reject);
});
 });
   })
   .then(function() {
  callback();
  pgp.end();
   })
   .catch(function(err) {
  callback(err);
  pgp.end();
   });
};

Thanks very much!

Eric

-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com] 

Another point, some googling turned up
https://www.npmjs.com/package/pg-large-object which is definitely something to 
consider trying.

merlin



Re: [GENERAL] storing large files in database - performance

2017-05-18 Thread Eric Hill
Thanks, Merlin - lots of good information here, and I had not yet stumbled 
across pg-large-object - I will look into it.

Eric

-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com] 
Sent: Thursday, May 18, 2017 9:49 AM
To: Eric Hill <eric.h...@jmp.com>
Cc: Thomas Kellerer <spam_ea...@gmx.net>; PostgreSQL General 
<pgsql-general@postgresql.org>
Subject: Re: storing large files in database - performance

EXTERNAL

On Thu, May 18, 2017 at 7:34 AM, Eric Hill <eric.h...@jmp.com> wrote:
> I would be thrilled to get 76 MB per second, and it is comforting to know 
> that we have that as a rough upper bound on performance.  I've got work to do 
> to figure out how to approach that upper bound from Node.js.
>
> In the meantime, I've been looking at performance on the read side.  For 
> that, I can bypass all my Node.js layers and just run a query from pgAdmin 4. 
>  I ran this query, where indexFile.contents for the row in question is 25MB 
> in size.  The query itself took 4 seconds in pgAdmin 4.  Better than the 12 
> seconds I'm getting in Node.js, but still on the order of 6MB per second, not 
> 76.  Do you suppose pgAdmin 4 and I are doing similarly inefficient things in 
> querying bytea values?

Probably.  I haven't spent a lot of time with pgadmin 4 so I'm not entirely 
sure.  If you want a quick and dirty comparison, try running your query 
in psql unaligned mode as a comparison point.  You can also do \copy BINARY in 
the case of bytea transfers.
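
Concretely, that comparison might look something like the following -- the 
database name and output file are made up, and the query is based on the one 
Eric posted elsewhere in the thread:

# unaligned, tuples-only text output of the bytea column
time psql -At -d mydb \
  -c "SELECT \"contents\" FROM \"mySchema\".\"indexFiles\" WHERE \"_id\" = '591c609bb56d0849404e4720'" > /dev/null

# the same row via binary COPY, which avoids text-escaping the bytea
time psql -d mydb \
  -c "\copy (SELECT \"contents\" FROM \"mySchema\".\"indexFiles\" WHERE \"_id\" = '591c609bb56d0849404e4720') TO '/tmp/contents.bin' WITH (FORMAT binary)"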

The basic problem is not really the database, it's that database interaction 
APIs tend not to be directed to this kind of problem.
The big picture issues are:

*) Driver overhead marshaling from wire format to managed types

*) Driver overhead for memory management

*) Wire format issues.  Certain types are *much* faster with the binary wire 
format and are additionally much more memory efficient.
Your bytea transfers are probably being serialized to text and back in both 
directions which is very wasteful, especially for very large transfers since 
it's wasteful in terms of memory.

If I were to seriously look at node.js performance, my rough thinking is that 
I'd want to be setting up the javascript variables directly in C somehow using 
plv8 internal routines.  Short of that, I would probably be querying all data 
out of postgres in json rather than serializing individual fields (which is 
what I generally do in practice).

Another point, some googling turned up
https://www.npmjs.com/package/pg-large-object which is definitely something to 
consider trying.

merlin



Re: [GENERAL] storing large files in database - performance

2017-05-18 Thread Eric Hill
My apologies: I said I ran "this query" but failed to include the query.  It 
was merely this:

SELECT "indexFile"."_id", "indexFile"."contents"
FROM "mySchema"."indexFiles" AS "indexFile"
WHERE "indexFile"."_id" = '591c609bb56d0849404e4720';

Eric

-Original Message-
From: Eric Hill [mailto:eric.h...@jmp.com] 
Sent: Thursday, May 18, 2017 8:35 AM
To: Merlin Moncure <mmonc...@gmail.com>; Thomas Kellerer <spam_ea...@gmx.net>
Cc: PostgreSQL General <pgsql-general@postgresql.org>
Subject: Re: storing large files in database - performance

I would be thrilled to get 76 MB per second, and it is comforting to know that 
we have that as a rough upper bound on performance.  I've got work to do to 
figure out how to approach that upper bound from Node.js.  

In the meantime, I've been looking at performance on the read side.  For that, 
I can bypass all my Node.js layers and just run a query from pgAdmin 4.  I ran 
this query, where indexFile.contents for the row in question is 25MB in size.  
The query itself took 4 seconds in pgAdmin 4.  Better than the 12 seconds I'm 
getting in Node.js, but still on the order of 6MB per second, not 76.  Do you 
suppose pgAdmin 4 and I are doing similarly inefficient things in querying 
bytea values?

Thanks,

Eric



Re: [GENERAL] storing large files in database - performance

2017-05-18 Thread Eric Hill
I would be thrilled to get 76 MB per second, and it is comforting to know that 
we have that as a rough upper bound on performance.  I've got work to do to 
figure out how to approach that upper bound from Node.js.  

In the meantime, I've been looking at performance on the read side.  For that, 
I can bypass all my Node.js layers and just run a query from pgAdmin 4.  I ran 
this query, where indexFile.contents for the row in question is 25MB in size.  
The query itself took 4 seconds in pgAdmin 4.  Better than the 12 seconds I'm 
getting in Node.js, but still on the order of 6MB per second, not 76.  Do you 
suppose pgAdmin 4 and I are doing similarly inefficient things in querying 
bytea values?

Thanks,

Eric

-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com] 
Sent: Wednesday, May 17, 2017 10:21 AM
To: Thomas Kellerer <spam_ea...@gmx.net>
Cc: PostgreSQL General <pgsql-general@postgresql.org>
Subject: Re: storing large files in database - performance

On Tue, May 16, 2017 at 9:51 AM, Thomas Kellerer <spam_ea...@gmx.net> wrote:
> John R Pierce schrieb am 16.05.2017 um 16:44:
>> On 5/16/2017 7:35 AM, Thomas Kellerer wrote:
>>> When my (JDBC based) SQL client and the database server are on the same 
>>> computer...
>>
>> node.js is Javascript, not java w/ jdbc
>
> I know that.
>
> I mentioned JDBC so that it's clear that the timings were done using a 
> different technology
>
> Maybe it's Node.js or the JavaScript "driver" that causes the problems.

When writing large objects to the database, method of transmission will very 
much determine performance until you start hitting the natural boundaries 
imposed by the database.

via (hastily written):
#include "libpqtypes.h"
#include "stdio.h"
#include "stdlib.h"
#include "string.h"

int main()
{
  int s = 1024 * 1024 * 256;

  char *p = malloc(s);
  memset(p, 'x', s);
  p[s-1] = 0;

  PGconn *conn = PQconnectdb("");
  PQinitTypes(conn);

  PGresult *res = PQexecf(conn, "insert into foo values(1,%text)", p);

  if(!res)
    fprintf(stderr, "*ERROR: %s\n", PQgeterror());

  PQclear(res);
  PQfinish(conn);
  return 0;
}

mmoncure@mernix2 09:13 AM /tmp$ gcc -otest test.c -lpq -lpqtypes -I 
/home/mmoncure/src/libpqtypes-1.5.1/src -I /home/mmoncure/pg94/include/ -L 
/home/mmoncure/src/libpqtypes-1.5.1/.libs/
mmoncure@mernix2 09:13 AM /tmp$ psql -c "create table foo(i int, f text)"
CREATE TABLE
mmoncure@mernix2 09:13 AM /tmp$ psql -c "alter table foo alter f set storage 
external"
ALTER TABLE
mmoncure@mernix2 09:14 AM /tmp$ time
LD_LIBRARY_PATH=/home/mmoncure/src/libpqtypes-1.5.1/.libs ./test

real 0m3.245s
user 0m0.092s
sys 0m0.232s
mmoncure@mernix2 09:15 AM /tmp$ psql -c "select 
pg_size_pretty(pg_table_size('foo'))"
 pg_size_pretty
----------------
 266 MB
(1 row)


...that's over 76mb/sec (to local server) for 256mb transfer.  That's
pretty good I think.   We have a 1GB barrier on bytea/text and (at
least in C, with certain reasonable precautions) you can work comfortably under 
that limit.  There might be other better strategies but it can be done.

merlin




Re: [GENERAL] storing large files in database - performance

2017-05-16 Thread Eric Hill
OK, thanks very much.  It seems like my process is somehow flawed.  I'll try 
removing some layers and see if I can figure out what is killing the 
performance.

Eric



>
> Do these numbers surprise you?  Are these files just too large for 
> storage in PostgreSQL to be practical?  Could there be something about 
> my methodology that is slowing things down?

Yes, it does surprise me. I just tested inserting an 11MB file using
psycopg2(Python) and it was less then a second.





[GENERAL] storing large files in database - performance

2017-05-16 Thread Eric Hill
Hey,

I searched and found a few discussions of storing large files in the database 
in the archives, but none that specifically address performance and how large 
of files can realistically be stored in the database.

I have a node.js application using PostgreSQL to store uploaded files.  The 
column in which I am storing the file contents is of type "bytea" with 
"Storage" type set to "EXTENDED".  Storing a 12.5 MB file is taking 10 seconds, 
and storing a 25MB file is taking 37 seconds.  Two notable things about those 
numbers:  It seems like a long time, and the time seems to grow exponentially 
with file size rather than linearly.

Do these numbers surprise you?  Are these files just too large for storage in 
PostgreSQL to be practical?  Could there be something about my methodology that 
is slowing things down?

I do have the Sequelize ORM and the pg driver in between my code and the 
database.
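
One way I can think of to separate the database from the driver stack is a 
purely server-side timing test, something like the sketch below (the table 
name is invented; the 25 MB is generated inside the server, so the ORM and 
driver are out of the picture entirely):

-- run in psql with \timing on; any big difference from the Node.js numbers
-- would point at the driver/ORM layers rather than at PostgreSQL itself
CREATE TEMP TABLE blob_test (contents bytea);
INSERT INTO blob_test
    SELECT convert_to(repeat('x', 25 * 1024 * 1024), 'UTF8');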

Thanks,

Eric


Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Eric Schwarzenbach

On 12/06/2016 02:40 PM, Joshua D. Drake wrote:

On 12/06/2016 11:12 AM, Eric Schwarzenbach wrote:

On 12/06/2016 01:34 PM, Joshua D. Drake wrote:

On 12/06/2016 10:30 AM, Rich Shepard wrote:


  My thinking is to not store these documents in the database, but to
store
them in subdirectories outside the database.

  Your thoughts?


Due to the widely variable size of a PDF document, I would say no. I
would store the metadata and file location.



Can you elaborate on this? Why is the variable size an issue?


Because it will use at least that size in memory to deliver the 
document to you. Consider a 100MB PDF (not at all uncommon), now 
imagine 40 connections requesting that PDF.


Are you sure the whole thing necessarily gets pulled into memory? JDBC 
and ODBC support streaming on their BLOB interfaces and isn't the whole 
point of this that an application can stream large files a chunk at a 
time, the same way it would from the file system? Of course if the db 
engine always pulls the whole thing into memory to work with it 
regardless of the API, that's another thing, but that wouldn't seem like 
a very good design, and I have more faith in the PostgreSQL developers 
than that...but I'd certainly like to know for sure.


Cheers,

Eric






Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Eric Schwarzenbach

On 12/06/2016 01:34 PM, Joshua D. Drake wrote:

On 12/06/2016 10:30 AM, Rich Shepard wrote:

  My thinking is to not store these documents in the database, but to 
store

them in subdirectories outside the database.

  Your thoughts?


Due to the widely variable size of a PDF document, I would say no. I 
would store the metadata and file location.



Can you elaborate on this? Why is the variable size an issue? Are you 
assuming the files go into the same table as the rest of the data? (They 
certainly don't have to, and I would assume that not to be the smartest 
design.)





Re: [GENERAL] PDF files: to store in database or not

2016-12-06 Thread Eric Schwarzenbach

On 12/06/2016 01:30 PM, Rich Shepard wrote:
  With no experience of storing binary data in a bytea column I don't 
know

when its use is appropriate. I suspect that for an application I'm
developing it would be better to store row-related documents outside the
database, and want to learn if that is the appropriate approach.

  Consider an application that manages a fleet of vehicles. There's a
Vehicles table with information on each one (perhaps make, model, VIN, 
year
of purchase) and a Services table. There are many PDF documents 
associated
with each row in the tables: purchase contract, insurance form, 
service and

maintenance records, etc.

  My thinking is to not store these documents in the database, but to 
store

them in subdirectories outside the database.

  Your thoughts?

Rich
I'd also be interested in answers to this that give specific pros 
and cons, and not in terms of "it's better to do this than that."
What's "better" depends on how much you value the various pros and the 
cons.


One of the pros of keeping them in the database is ease of protecting 
adds and updates to the files and their related data with a transaction 
and being able to have a system where it is pretty much impossible for 
the documents to ever be out of sync with the related data.


I maintain some systems that do keep the documents outside of the 
database, and the application code maintains the transactional integrity 
of the files and data, and for the most part we don't have integrity 
problems. In the worst case of an add or update operation being interrupted 
by a system crash or unexpected error, we have a new document saved but 
the data about this document has not been written to the database and it 
is as if that operation never happened. The file may really be there but 
the system does not "know about it." This works even for updates because 
our system versions documents and the old version is not written over, 
there is simply a new version that the system never "knows" about. 
Without versioning this would be more of a problem, and you would 
probably need to protect yourself with code that does something like 
temporarily keeping the last version of a file during an update and 
switching over the metadata to reference the new document only at the 
very last operation in the transaction.


We also have the potential of the database not matching the file store 
when a system is migrated or "cloned." We are very careful about this, 
but we've at least once had a case where a client's IT department screwed it 
up, and got a mismatched system to which they started writing new data. 
Luckily this was a test or staging system and no production data was lost.


I've often wondered if we'd have been better off storing the files in 
the database. This design decision was made some years ago, and our 
concerns around this had to do with performance, but I don't know that 
we had any real data that this should have been a concern, and I suspect 
you could ameliorate if not eliminate this as an issue by careful 
design. I'd loved to hear this idea confirmed or debunked by someone who 
has more expertise (and ideally, done actual testing).


Cheers,

Eric




Re: [GENERAL] How to manually force a transaction wraparound

2016-05-05 Thread Eric Ridge
On Fri, Apr 29, 2016 at 10:16 PM Thomas Munro <thomas.mu...@enterprisedb.com>
wrote:

> On Sat, Apr 30, 2016 at 10:48 AM, Eric Ridge <eeb...@gmail.com> wrote:
> > I want to force my database to wraparound, just to see what happens.  How
> > can I do this without consuming a few billion transactions?
>
> Take a look at the script repro-bogus-subtrans-error-wraparound.sh
> from this email:
>
>
> http://www.postgresql.org/message-id/CAEepm=3z0eolpo5wtuwsem38kbq+gjp8xxiuljkuqpm-sw7...@mail.gmail.com
>
> That used pg_resetxlog -x $XID $PGDATA, but needed to do several
> stop/pg_resetxlog/start hops to get all the way around the xid clock.


Thanks Thomas.  I ended up figuring out something similar after I read the
docs on pg_resetxlog.

It did something interesting to two of my local databases, but I was able
to figure out what I wanted to know.

Thanks again!

eric


Re: [GENERAL] Proper relational database?

2016-04-22 Thread Eric Schwarzenbach

On 04/22/2016 06:21 AM, David Goodenough wrote:

On Thursday 21 April 2016 13:36:54 Guyren Howe wrote:

Anyone familiar with the issue would have to say that the tech world would
be a significantly better place if IBM had developed a real relational
database with an elegant query language rather than the awful camel of a
thing that is SQL.

If I had a few $million to spend in a philanthropical manner, I would hire
some of the best PG devs to develop a proper relational database server.
Probably a query language that expressed the relational algebra in a
scheme-like syntax, and the storage model would be properly relational (eg
no duplicate rows).

It's an enormous tragedy that all the development effort that has gone into
NoSQL database has pretty much all gotten it wrong: by all means throw out
SQL, but not the relational model with it. They're all just rehashing the
debate over hierarchical storage from the 70s. Comp Sci courses should
feature a history class.

It's a bit odd to me that someone isn't working on such a thing.

Just curious what folks here have to say…

Well when IBM were first developing relational databases there were two
different teams.  One in California which produced System-R which became
what we now know as DB2 and spawned SQL, and the other in Peterlee in
the UK which was called PRTV (the Peterlee Relational Test Vehicle).  PRTV
rather died but bits of it survived.  In particular it was the first system
to include a relational optimiser.  You can find some details on the PRTV
page in Wikipedia.

It was written in PL/1, although it also used some modified microcode
and therefore some assembler.

It never appeared as a product, but there was a geographical system
built on top of it which was, if I recall correctly, used by the Greater
London Council and Central Region Scotland, which did something of
what postgis does for PostgreSQL.

According to the Wikipedia page it did have a language (ISBL) but from what
I recall (and it was nearly 40 years ago) there were a series of PL/1
function calls we used rather than encoding the request as a string
as SQL systems require.

The IBM centre in Peterlee was closed, and the lab moved to Winchester
where I think it still resides.
One of the people involved in that was Hugh Darwen, who is one of the 
authors of The Third Manifesto, which is an attempt to define what a 
properly relational language and system should look like. So you could 
say the experience of ISBL vs SQL has been folded into that effort.







[GENERAL] How jsonb updates affect GIN indexes

2016-02-27 Thread Eric Mortensen
Hi

When a jsonb column is updated, as far as I understand the entire column is
updated, even though perhaps only one of the keys has a modified value.

My question is: if I have a GIN index on that column, will PG visit every
key and update each index entry, or will it visit every key and update each
index entry only if the value is different, or will it only visit those
keys that have been modified?

I have a jsonb column where I only need to index two keys, and was
wondering if there is any benefit in terms of update efficiency to having
two indexes, one for each key, rather than just creating a GIN index on the
whole (rather large) column.
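
To make the comparison concrete, the two layouts I have in mind are roughly 
these (table, column, and key names are invented for illustration):

-- one GIN index over the entire jsonb document
CREATE INDEX docs_data_gin ON docs USING gin (data);

-- versus two expression indexes covering only the two keys I actually query on
CREATE INDEX docs_key1_idx ON docs ((data ->> 'key1'));
CREATE INDEX docs_key2_idx ON docs ((data ->> 'key2'));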

Thanks,
Eric


Re: [GENERAL] Domain check constraint not honored?

2015-10-30 Thread Eric Schwarzenbach

Thank you! (Slapping head)
Your regexp seems to do the trick.

On 10/29/2015 01:49 PM, Rob Sargent wrote:

On 10/29/2015 11:41 AM, Eric Schwarzenbach wrote:
I have created a custom type as a domain based on text, which adds a 
check constraint using a regexp to limit it to containing digits and 
'.'. However I am finding I can add values with other characters to a 
column of this type. Is this to be expected for some reason?


Or alternately, did I define the constraint wrong somehow? It is 
defined thus:


CREATE DOMAIN hierpath AS text
CHECK(
   VALUE ~ '[0-9.]+'
);

Checking the docs I'm pretty sure the '.' doesn't need escaping but I 
also tried a test leaving it out ( '[0-9]+') and the result is the 
same. It lets me store letters in a column defined to be of this type.


The version() function tells me
"PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc 
(Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit"


Thanks,
Eric


I think you regexp is too weak.  So long as the value has a digit or 
period, it's good.

'^[0-9.]+$' might work
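
A quick check in psql shows the difference (the test value is just an example):

-- the unanchored pattern only needs one digit or dot somewhere in the value,
-- so letters slip through; the anchored pattern must match the whole string
SELECT 'abc123' ~ '[0-9.]+'   AS unanchored,  -- true
       'abc123' ~ '^[0-9.]+$' AS anchored;    -- false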




Re: [GENERAL] Domain check constraint not honored?

2015-10-30 Thread Eric Schwarzenbach

On 10/30/2015 09:53 AM, Jim Nasby wrote:

On 10/29/15 5:29 PM, Eric Schwarzenbach wrote:

I'm just now converting that path to use a custom domain (along with
custom operators) instead of just being a string. (The custom operators
allow the paths to be sorted properly without each segment needing to be
filled with zeros to a fixed length.) (Also FWIW, the latest version of
this regexp is now '^([0-9]+\.)*[0-9]+$')


Have you looked at using int[]? It wouldn't be hard to go between that 
and the string representation using string_to_array() and 
array_to_string(). There's also a chance that eventually you'd be able 
to do FKs on it.
Do you mean making the column int[] and converting to string if needed, 
or converting the string column to int[] for the purposes of the 
ordering algorithm?


I did consider making the column int[] instead of a string, and it would 
probably be slightly more efficient in a few ways. My main hesitations 
were having to revisit the code that puts together this path, and 
compatibility (at the moment we're only using PostgreSQL but we've had 
to run on other databases for certain clients in the past, and in theory 
are open to that in the future). I realize the compatibility concern is 
a little humorous in light of having gone down the 
custom-operator-for-sorting route, but I can always fall back to 0 padding.
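
For reference, the int[] round trip Jim describes is simple enough (the values 
here are just an illustration):

-- a materialized path stored as int[] sorts element-by-element, no zero padding needed
SELECT string_to_array('1.10.2', '.')::int[];   -- {1,10,2}
SELECT array_to_string(ARRAY[1,10,2], '.');     -- 1.10.2
SELECT ARRAY[1,2] < ARRAY[1,10];                -- true, i.e. path 1.2 sorts before 1.10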





[GENERAL] Domain check constraint not honored?

2015-10-29 Thread Eric Schwarzenbach
I have created a custom type as a domain based on text, which adds a 
check constraint using a regexp to limit it to containing digits and 
'.'. However I am finding I can add values with other characters to a 
column of this type. Is this to be expected for some reason?


Or alternately, did I define the constraint wrong somehow? It is defined 
thus:


CREATE DOMAIN hierpath AS text
CHECK(
   VALUE ~ '[0-9.]+'
);

Checking the docs I'm pretty sure the '.' doesn't need escaping but I 
also tried a test leaving it out ( '[0-9]+') and the result is the same. 
It lets me store letters in a column defined to be of this type.


The version() function tells me
"PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 
4.8.2-19ubuntu1) 4.8.2, 64-bit"


Thanks,
Eric




Re: [GENERAL] Domain check constraint not honored?

2015-10-29 Thread Eric Schwarzenbach


Thank you! (Slapping head)
Your regexp seems to do the trick.

On 10/29/2015 01:49 PM, Rob Sargent wrote:

On 10/29/2015 11:41 AM, Eric Schwarzenbach wrote:
I have created a custom type as a domain based on text, which adds a 
check constraint using a regexp to limit it to containing digits and 
'.'. However I am finding I can add values with other characters to a 
column of this type. Is this to be expected for some reason?


Or alternately, did I define the constraint wrong somehow? It is 
defined thus:


CREATE DOMAIN hierpath AS text
CHECK(
   VALUE ~ '[0-9.]+'
);

Checking the docs I'm pretty sure the '.' doesn't need escaping but I 
also tried a test leaving it out ( '[0-9]+') and the result is the 
same. It lets me store letters in a column defined to be of this type.


The version() function tells me
"PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc 
(Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit"


Thanks,
Eric


I think you regexp is too weak.  So long as the value has a digit or 
period, it's good.

'^[0-9.]+$' might work




Re: [GENERAL] Domain check constraint not honored?

2015-10-29 Thread Eric Schwarzenbach

On 10/29/2015 03:44 PM, Alvaro Herrera wrote:

Rob Sargent wrote:
  

Also thought I should mention that there is an ip address type if that's
what you're trying to accomplish.

Looking at the domain name, I wonder whether contrib/ltree would be
helpful.
Very observant! This is indeed part of a hierarchical data solution. 
Thanks for the suggestion, but this solution has been in place and 
working for a few years already. I'm not positive, but I think I may 
have looked at ltree when I first implemented it, but decided against it 
in favor of a transitive closure table, augmented with this path for 
sorting.
(I do sometimes wonder whether the transitive closure table is worth it 
vs just a materialized path.)


I'm just now converting that path to use a custom domain (along with 
custom operators) instead of just being a string. (The custom operators 
allow the paths to be sorted properly without each segment needing to be 
filled with zeros to a fixed length.) (Also FWIW, the latest version of 
this regexp is now '^([0-9]+\.)*[0-9]+$')




Cheers,

Eric




[GENERAL] express composite type literal as text

2015-02-22 Thread Eric Hanson
Hi,

How do I express a composite type literal as text?

I'm trying to use a composite type in a WHERE clause.  The examples in the
docs

http://www.postgresql.org/docs/9.4/static/rowtypes.html

say:

CREATE TYPE complex AS (
r   double precision,
i   double precision
);

CREATE TYPE inventory_item AS (
name    text,
supplier_id integer,
price   numeric
);

CREATE TABLE on_hand (
item  inventory_item,
count integer
);

INSERT INTO on_hand VALUES (ROW('fuzzy dice', 42, 1.99), 1000);


Now I want to query for that row, specifying the item in the WHERE clause.
I can't use the ROW() notation, because all values need to be represented
as text over a REST api.  But I can't seem to get the text-based syntax to
work:

select * from on_hand where item='(fuzzy dice,42,1.99)';

yields

ERROR:  input of anonymous composite types is not implemented

I've tried various forms of quote escaping and dollar quoting as the docs
suggest, but they all produce that same error:

select * from on_hand where item='(\\\fuzzy dice\\\,42,1.99)';
select * from on_hand where item=$$(fuzzy dice,42,1.99)$$;


Thanks,
Eric


Re: [GENERAL] express composite type literal as text

2015-02-22 Thread Eric Hanson
On Sun, Feb 22, 2015 at 11:42 AM, Adrian Klaver adrian.kla...@aklaver.com
 wrote:


 test= select * from on_hand where item = '(fuzzy
 dice,42,1.99)'::inventory_item;

   item  | count
 ----------------------+-------
  (fuzzy dice,42,1.99) |  1000
 (1 row)


So, you have to do the explicit cast?  It looks like it.  That's not ideal,
we have relied on the assumption that all values (except for NULL) can be
represented as text strings, and PostgreSQL will auto-cast the text to the
appropriate type.  Is this case just an exception to a rule that is
generally true and aimed for, or is that just not a good assumption?

Thanks,
Eric

On Sun, Feb 22, 2015 at 11:42 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:

 On 02/22/2015 10:07 AM, Eric Hanson wrote:
  Hi,
 
  How do I express a composite type literal as text?
 
  I'm trying to use a composite type in a WHERE clause.  The examples in
  the docs
 
  http://www.postgresql.org/docs/9.4/static/rowtypes.html
 
  say:
 
  CREATE TYPE complex AS (
   r   double precision,
   i   double precision
  );
 
  CREATE TYPE inventory_item AS (
    name    text,
   supplier_id integer,
   price   numeric
  );
 
  CREATE TABLE on_hand (
   item  inventory_item,
   count integer
  );
 
  INSERT INTO on_hand VALUES (ROW('fuzzy dice', 42, 1.99), 1000);
 
 
  Now I want to query for that row, specifying the item in the WHERE
  clause.  I can't use the ROW() notation, because all values need to be
  represented as text over a REST api.  But I can't seem to get the
  text-based syntax to work:
 
  select * from on_hand where item='(fuzzy dice,42,1.99)';
 
  yields
 
  ERROR:  input of anonymous composite types is not implemented
 
  I've tried various forms of quote escaping and dollar quoting as the
  docs suggest, but they all produce that same error:
 
  select * from on_hand where item='(\\\fuzzy dice\\\,42,1.99)';
  select * from on_hand where item=$$(fuzzy dice,42,1.99)$$;

 From here:

 http://www.postgresql.org/docs/9.3/static/sql-expressions.html

 4.2.13. Row Constructors

 So,

 test= INSERT INTO on_hand VALUES (ROW('bobble dog', 42, 5.99), 1000);
 INSERT 0 1

 test= select * from on_hand ;
   item  | count
 ----------------------+-------
  (fuzzy dice,42,1.99) |  1000
  (bobble dog,42,5.99) |  1000
 (2 rows)



 test= select * from on_hand where item = '(fuzzy
 dice,42,1.99)'::inventory_item;

   item  | count
 ----------------------+-------
  (fuzzy dice,42,1.99) |  1000
 (1 row)




 
 
  Thanks,
  Eric


 --
 Adrian Klaver
 adrian.kla...@aklaver.com





Re: [GENERAL] express composite type literal as text

2015-02-22 Thread Eric Hanson
On Sun, Feb 22, 2015 at 11:47 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Eric Hanson elhan...@gmail.com writes:
  How do I express a composite type literal as text?

 The rules are given in the manual ...

  I can't use the ROW() notation, because all values need to be represented
  as text over a REST api.  But I can't seem to get the text-based syntax
 to
  work:

  select * from on_hand where item='(fuzzy dice,42,1.99)';

  yields

  ERROR:  input of anonymous composite types is not implemented

 That message isn't telling you that you've got a problem with the data
 syntax, it's telling you that you need to cast the literal to a named
 composite data type.  This works:

 # select * from on_hand where item='(fuzzy
 dice,42,1.99)'::inventory_item;
   item  | count
 ----------------------+-------
  (fuzzy dice,42,1.99) |  1000
 (1 row)

 Now, I'm not too sure *why* it's making you do that --- seems like the
 default assumption ought to be that the literal is the same type as
 the variable it's being compared to.  Perhaps there's a bug in there,
 or perhaps there's no easy way to avoid this requirement.  But that's
 what the requirement is today.


Got it.  Ok, I'm reporting this as a bug.  Is this a bug?  Being able to
always express literals as text is a really valuable assumption to be able
to rely on.

Thanks,
Eric



Re: [BUGS] [GENERAL] express composite type literal as text

2015-02-22 Thread Eric Hanson
On Sun, Feb 22, 2015 at 12:56 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Well, it's an unimplemented feature anyway.  I poked into it and noticed
 that the equivalent case for arrays works, because that operator is
 anyarray = anyarray.  enforce_generic_type_consistency() observes that
 we have an unknown literal that's going to be passed to an anyarray
 function argument, so it resolves anyarray as the actual array type
 determined from the other anyarray argument position.

 There's no corresponding behavior for RECORD, because RECORD is not
 treated as a polymorphic type for this purpose -- in particular, there is
 no built-in assumption that the two arguments passed to record_eq(record,
 record) should be the same record type.  (And, indeed, it looks like
 record_eq goes to some effort to cope with them not being identical;
 this may be essential to make dropped-column cases work desirably.)

 Conceivably we could invent an ANYRECORD polymorphic type, extend the
 polymorphic type logic to deal with that, and redefine record_eq as taking
 (anyrecord, anyrecord).  However that'd likely break some scenarios along
 with fixing this one.  It'd require some research to figure out what's
 the least painful fix.  In any case, anything involving a new datatype is
 certainly not going to be a back-patchable bug fix.

 Given that it's worked like this pretty much forever, and there have been
 few complaints, it's probably not going to get to the front of anyone's
 to-do list real soon ...


Ok.  Thanks for the info.  I like the ANYRECORD idea.
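
For anyone skimming the archives, a hedged illustration of the asymmetry Tom
describes, reusing the inventory_item example from earlier in the thread:

select '{1,2,3}' = array[1,2,3];                            -- unknown literal resolved from the anyarray argument
select * from on_hand where item = '(fuzzy dice,42,1.99)';  -- fails: anonymous composite input not implemented
select * from on_hand where item = '(fuzzy dice,42,1.99)'::inventory_item;  -- works with the named cast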

As for the behavior, consider me logging one complaint. :)  The consequence
is that you can't use composite types in a REST interface or any other
string-based interface, unless the POST handler looks up the type of all
columns and checks for this special case in order to add the explicit cast.  It adds
a lot of overhead that is 99% unnecessary.

Thanks,
Eric
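
One workaround that keeps everything string-based, offered only as a hedged
sketch: compare the composite column by its text representation instead of
casting the literal. The match is purely textual (the literal must be in the
canonical output form and an index on item cannot be used), but it avoids
looking up the column's type name:

select * from on_hand where item::text = '(fuzzy dice,42,1.99)';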


[GENERAL] express composite type literal as text

2015-02-22 Thread Eric Hanson
Hi,

I'm trying to use a composite type in a WHERE clause, as described here:

http://www.postgresql.org/docs/9.4/static/rowtypes.html

Just pasting in the examples I get:

CREATE TYPE complex AS (
r   double precision,
i   double precision
);

CREATE TYPE inventory_item AS (
name        text,
supplier_id integer,
price   numeric
);

CREATE TABLE on_hand (
item  inventory_item,
count integer
);

INSERT INTO on_hand VALUES (ROW('fuzzy dice', 42, 1.99), 1000);


Now I want to query for that row, specifying the item in the WHERE clause.
I can't use the ROW() notation, because all values need to be represented
as text over a REST api.  But I can't seem to get the text-based syntax to
work:

select * from on_hand where item='(fuzzy dice,42,1.99)';

yields

ERROR:  input of anonymous composite types is not implemented

I've tried various forms of quote escaping and dollar quoting as the docs
suggest, but they all produce that same error:

select * from on_hand where item='(\\\fuzzy dice\\\,42,1.99)';
select * from on_hand where item=$$(fuzzy dice,42,1.99)$$;

How do I express a composite type literal as text?

Thanks,
Eric


[GENERAL] express composite type literal as text

2015-02-22 Thread Eric Hanson
Hi,

I'm trying to use a composite type in a WHERE clause, as described here:

http://www.postgresql.org/docs/9.4/static/rowtypes.html

Just pasting in the examples I get:

CREATE TYPE complex AS (
r   double precision,
i   double precision
);

CREATE TYPE inventory_item AS (
name        text,
supplier_id integer,
price   numeric
);

CREATE TABLE on_hand (
item  inventory_item,
count integer
);

INSERT INTO on_hand VALUES (ROW('fuzzy dice', 42, 1.99), 1000);



Now I want to query for that row, specifying the item in the WHERE clause.
I can't use the ROW() notation, because all values need to be passed in as
text.  But I can't seem to get the text-based syntax to work:

select * from on_hand where item='(fuzzy dice,42,1.99)';

yields

ERROR:  input of anonymous composite types is not implemented

I've tried various forms of quote escaping and dollar quoting as the docs
suggest, but they all produce that same error:

select * from on_hand where item='(fuzzy dice,42,1.99)';
select * from on_hand where item='(\\fuzzy dice\\,42,1.99)';
select * from on_hand where item=$$(fuzzy dice,42,1.99)$$;

How do I express a composite type literal as text?  Any workarounds?  The
text-based representation of values is a hard requirement in my project...

Thanks,
Eric


Re: Fwd: Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-12 Thread Eric Svenson
Hi Adrian,

so finally I have a workaround which is OK for me. When I separate the
tables and the data (using the -a and -s switches of pg_dump) into two SQL
backup files, everything works OK on the problem VM.

I will try to investigate further in the coming weeks; I'm on holiday next week.

Regards and thanks for your support,
Eric

2014-12-10 15:27 GMT+01:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 12/10/2014 01:32 AM, Eric Svenson wrote:

 So, one more success...

 I have taken a part of the backup SQL file which fills the table

 COPY dev_my_settings (.) from stdin;
 12345  text   text   0   123.345345

 This file ALONE works! (without changing ANYTHING!)


 Hmm, almost like the encoding/locale is changing in the complete file.


 So if I run the first (huge) SQL file and then the second, which fills
 the dev_my_settings table, everything is ok.


 FYI if you do a pg_dump using the custom format(-Fc) you have more control
 over the restore. You can run pg_restore -l against the dump file to get a
 Table of Contents(TOC), which you can edit by commenting out items you do
 not want to restore and then feed back to pg_restore via -L. You also have
 the option to 'restore' all or part of the custom file to a text file using
 the -f option. The details can be found here:

 http://www.postgresql.org/docs/9.3/interactive/app-pgrestore.html
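
A minimal sketch of that workflow (file names here are hypothetical):

pg_dump -Fc -f my_database.dump my_Database
pg_restore -l my_database.dump > toc.list       # write the TOC; comment out unwanted items with ';'
pg_restore -L toc.list -d my_Database my_database.dump
pg_restore -f my_database.sql my_database.dump  # or turn all or part of the dump back into plain SQL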


 2014-12-10 10:23 GMT+01:00 Eric Svenson esvenso...@googlemail.com



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: Fwd: Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-10 Thread Eric Svenson
So, one more success...

I have taken a part of the backup SQL file which fills the table

COPY dev_my_settings (.) from stdin;
12345  text   text   0   123.345345

This file ALONE works! (without changing ANYTHING!)

So if I run the first (huge) SQL file and then the second, which fills the
dev_my_settings table, everything is ok.

2014-12-10 10:23 GMT+01:00 Eric Svenson esvenso...@googlemail.com:

  The restore left you with two empty tables. What happens if you log into
 Postgres  via psql and then INSERT one set of values containing floats
 into say, dev_my_settings?

 SUCCESS! This works OK!

 INSERT INTO dev_my_settings VALUES (123, 'test', 'test', 'test', 123, 123.345);

 Value 123.345 can be read from pg_admin.

 SHOW ALL shows English_United States.1252 for all lc_ settings (but I have
 tried it with German and C locale with same results)

 Regards, Eric

 2014-12-08 22:57 GMT+01:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 12/08/2014 06:53 AM, Eric Svenson wrote:

 Hi Adrian,

 I try to get access to the non-VM machine, at the moment access is not
 possible for me unfortunately.

 You are right, there are more tables in the database which are restored
 correctly but these tables do NOT contain float values. These two tables
 are the only tables in the database which contain floats.

 The errors occur with the first float in the table, the restore process
 seems to terminate with that table and seems to continue with the next
 table. The result are completely empty tables for dev_my_settings and
 file_item.

 There are float values in the table which can be viewed with pg_admin.

 The table definitions for dev_my_settings and file_item contain lots of
 BIGINTS, smallints and integers, and several double precision values.
 All other tables do not contain any double precision values.


 Alright a chance to think some more.

 So:

 The restore left you with two empty tables. What happens if you log into
 Postgres via psql and then INSERT one set of values containing floats into
 say, dev_my_settings?

 While you are in psql, what does SHOW ALL display for the lc_* settings?

 On the Windows server where the Postgres server is running what does SET
 show from the command line?



 Regards,
 Eric




 --
 Adrian Klaver
 adrian.kla...@aklaver.com





Re: Fwd: Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-10 Thread Eric Svenson
 The restore left you with two empty tables. What happens if you log into
Postgres  via psql and then INSERT one set of values containing floats
into say, dev_my_settings?

SUCCESS! This works OK!

INSERT INTO dev_my_settings VALUES (123, 'test', 'test', 'test', 123, 123.345);

Value 123.345 can be read from pg_admin.

SHOW ALL shows English_United States.1252 for all lc_ settings (but I have
tried it with German and C locale with same results)

Regards, Eric

2014-12-08 22:57 GMT+01:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 12/08/2014 06:53 AM, Eric Svenson wrote:

 Hi Adrian,

 I try to get access to the non-VM machine, at the moment access is not
 possible for me unfortunately.

 You are right, there are more tables in the database which are restored
 correctly but these tables do NOT contain float values. These two tables
 are the only tables in the database which contain floats.

 The errors occur with the first float in the table, the restore process
 seems to terminate with that table and seems to continue with the next
 table. The result are completely empty tables for dev_my_settings and
 file_item.

 There are float values in the table which can be viewed with pg_admin.

 The table definitions for dev_my_settings and file_item contain lots of
 BIGINTS, smallints and integers, and several double precision values.
 All other tables do not contain any double precision values.


 Alright a chance to think some more.

 So:

 The restore left you with two empty tables. What happens if you log into
 Postgres via psql and then INSERT one set of values containing floats into
 say, dev_my_settings?

 While you are in psql, what does SHOW ALL display for the lc_* settings?

 On the Windows server where the Postgres server is running what does SET
 show from the command line?



 Regards,
 Eric




 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: Fwd: Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-09 Thread Eric Svenson
Hi Adrian,

I am trying to get access to the non-VM machine; at the moment, access is
unfortunately not possible for me.

You are right, there are more tables in the database which are restored
correctly but these tables do NOT contain float values. These two tables
are the only tables in the database which contain floats.

The errors occur with the first float in the table; the restore process
seems to stop loading that table and continues with the next
table. The result is completely empty tables for dev_my_settings and
file_item.

There are float values in the table which can be viewed with pg_admin.

The table definitions for dev_my_settings and file_item contain lots of
BIGINTS, smallints and integers, and several double precision values. All
other tables do not contain any double precision values.

Regards,
Eric


2014-12-08 15:22 GMT+01:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 12/08/2014 12:21 AM, Eric Svenson wrote:

 Hi Adrian,

 here are the results of today:

   How were the Postgres instances installed?
 From a package?

 Yes. It is Version 9.2 for Windows, comment of the package is
 The PostgreSQL RDBMS, version 9.2, packaged by EnterpriseDB


Compiled and if so with what compiler and what settings?
 No.


   Use --inserts with pg_dump to get INSERT statements instead of a COPY
 and then feed to psql.
   This will slow the process down, so I would try with a small sample
 set.

 Much slower, but still the same error:

 (part of the output)

 INSERT 50415934 1
 INSERT 50415935 1
 INSERT 50415936 1
 INSERT 50415937 1
 INSERT 50415938 1
 INSERT 50415939 1
 ERROR:  invalid input syntax for type double precision:
 0.10001

   Do pg_dump -Fc and then use pg_restore.

 pg_restore: [archiver (db)] Error while PROCESSING TOC:
 pg_restore: [archiver (db)] Error from TOC entry 2558; 0 5161040 TABLE
 DATA dev_my_settings my_Database
 pg_restore: [archiver (db)] COPY failed for table dev_my_settings:
 ERROR:  invalid input syntax
 for type double precision: 0.10001
 CONTEXT:  COPY dev_meas_settings, line 718, column dms_dble_value:
 0.10001
 pg_restore: [archiver (db)] Error from TOC entry 2552; 0 5160884 TABLE
 DATA file_item my_Database
 pg_restore: [archiver (db)] COPY failed for table file_item: ERROR:
 invalid input syntax for type
   double precision: 48.2000829
 CONTEXT:  COPY file_item, line 54, column fi_latitude:
 48.2000829
 WARNING: errors ignored on restore: 2


 Well at least it is consistent:) Postgres is deployed to a lot of Windows
 machines, so if this was a generic Windows problem I would expect more
 reports on this. There is something about this setup that is causing the
 problem and we are missing.

 In a previous post you made mention of a possible instance where this
 cropped up on a non-VM machine. Did you get a chance to track that down?

 Also the output from pg_restore shows only two errors on restore which I
 presume are the two COPY errors with the input syntax. So are there other
 tables in the database, with float values, that do restore correctly?

 Also in the errors above, in the first case COPY does not error until line
 718 and in the second case line 54. So are there float values in the data
 for those columns that are valid?

 Also what are the table definitions for dev_my_settings and file_item?


 Regards,
 Eric



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: Fwd: Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-09 Thread Eric Svenson
Hi Adrian,

here are the results of today:

 How were the Postgres instances installed?
   From a package?

Yes. It is Version 9.2 for Windows, comment of the package is
The PostgreSQL RDBMS, version 9.2, packaged by EnterpriseDB


  Compiled and if so with what compiler and what settings?
No.


 Use --inserts with pg_dump to get INSERT statements instead of a COPY and
then feed to psql.
 This will slow the process down, so I would try with a small sample set.

Much slower, but still the same error:

(part of the output)

INSERT 50415934 1
INSERT 50415935 1
INSERT 50415936 1
INSERT 50415937 1
INSERT 50415938 1
INSERT 50415939 1
ERROR:  invalid input syntax for type double precision:
0.10001

 Do pg_dump -Fc and then use pg_restore.

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2558; 0 5161040 TABLE DATA
dev_my_settings my_Database
pg_restore: [archiver (db)] COPY failed for table dev_my_settings:
ERROR:  invalid input syntax
for type double precision: 0.10001
CONTEXT:  COPY dev_meas_settings, line 718, column dms_dble_value:
0.10001
pg_restore: [archiver (db)] Error from TOC entry 2552; 0 5160884 TABLE DATA
file_item my_Database
pg_restore: [archiver (db)] COPY failed for table file_item: ERROR:
invalid input syntax for type
 double precision: 48.2000829
CONTEXT:  COPY file_item, line 54, column fi_latitude: 48.2000829
WARNING: errors ignored on restore: 2

Regards,
Eric


2014-12-05 16:22 GMT+01:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 12/05/2014 01:13 AM, Eric Svenson wrote:



 Hi Adrian,

  Is the above how the message was actually presented or has a partial
  translation taken place? Just asking because it would seem to indicate
  further confusion about the locale.

 This is an exact copy of the screen contents, no translation by me has
 taken place. Indeed strange, ERROR (english) KONTEXT (german, one
 line below) ??

 To add further confusion: I have a report that the error also appeared
 on a non-VM native PC with Windows 7 Enterprise. Unfortunately, I dont
 have the exact error message, but I try to get it ASAP.


 Well nothing came back to me on VMware and locales, but that does not seem
 to be the issue if the above is correct.

 So:

 How were the Postgres instances installed?
   From a package?
   Compiled and if so with what compiler and what settings?

 What happens if you?:

 Use --inserts with pg_dump to get INSERT statements instead of a COPY and
 then feed to psql.
 This will slow the process down, so I would try with a small sample set.

 Do pg_dump -Fc and then use pg_restore.

 The above are more a way of seeing if the issue is on a particular path or
 is generic, than a solution.



 regards,
 Eric Svenson



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Fwd: Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-05 Thread Eric Svenson
Hi Adrian,

Is the above how the message was actually presented or has a partial
translation taken place? Just asking because it would seem to indicate
further confusion about the locale.

This is an exact copy of the screen contents; no translation by me has
taken place. Indeed strange: ERROR (English), KONTEXT (German, one line
below)??

To add further confusion: I have a report that the error also appeared on a
non-VM native PC with Windows 7 Enterprise. Unfortunately, I don't have the
exact error message, but I will try to get it ASAP.

regards,
Eric Svenson

2014-12-04 15:23 GMT+01:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 12/04/2014 02:01 AM, Eric Svenson wrote:

 So what if you load to the VM Postgres using the psql from the host?


 I tried that: On the host I started psql with
 psql -h 192.168.2.55 -U postgres -p 5432 my_Database < myFile.sql

 I get the same error

 ERROR: invalid input syntax for type double precision 0.2
 KONTEXT: COPY dev_my_settings, line xxx, column xxx: 0.2


 Is the above how the message was actually presented or has a partial
 translation taken place? Just asking because it would seem to indicate
 further confusion about the locale.


 ERROR: invalid input syntax for type double precision 15.776653623


 Hmm, I'm at a loss. It is not a strict Windows issue as you can restore on
 other Windows machines. The remaining suspect would be VMware. VMware and
 locales tickles a memory, one that I cannot drag up at the moment. One of
 those things where the more you think of it the further it goes away. We
 will see if working on other stuff causes it to sneak up on me:)



 Regards,
 Eric Svenson



 --
 Adrian Klaver
 adrian.kla...@aklaver.com



Re: Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-04 Thread Eric Svenson
 So what if you load to the VM Postgres using the psql from the host?

I tried that: On the host I started psql with
psql -h 192.168.2.55 -U postgres -p 5432 my_Database < myFile.sql

I get the same error

ERROR: invalid input syntax for type double precision 0.2
KONTEXT: COPY dev_my_settings, line xxx, column xxx: 0.2

ERROR: invalid input syntax for type double precision 15.776653623

Regards,
Eric Svenson

2014-12-04 11:00 GMT+01:00 Eric Svenson esvenso...@googlemail.com:

  So what if you load to the VM Postgres using the psql from the host?

 I tried that: On the host I started psql with
 psql -h 192.168.2.55 -U postgres -p 5432 my_Database < myFile.sql

 I get the same error

 ERROR: invalid input syntax for type double precision 0.2
 KONTEXT: COPY dev_my_settings, line xxx, column xxx: 0.2

 ERROR: invalid input syntax for type double precision 15.776653623

 Regards,
 Eric Svenson

 2014-12-03 16:24 GMT+01:00 Adrian Klaver adrian.kla...@aklaver.com:

 On 12/03/2014 12:23 AM, Eric Svenson wrote:


 Hi Adrian,

 thank you for your support. Here is the information you requested


 I have already done that and found something strange:

 On the PC where the backup was done with pg_dump, all locale
 settings of
 Postgres were English/United States. (LC_COLLATE, LC_CTYPE,
 LC_MONETARY,
 LC_NUMERIC in postgresql.conf)


   OS and OS version?

 Windows Server 2012R2

   Postgres version?

 Version 9.2.1 (same version used on all machines)

   What was the pg_dump command used?

 pg_dump -p 6789 -EUTF8 -f myFile.sql my_Database


How was it loaded via psql?

 psql -p 6789 -U postgres my_Database < myFile.sql

   Was the psql on the same machine and from the same version of
 Postgres?

 Same version, but on different machines.


 On the second PC (Virtual Machine) I had the SAME settings in
 postgresql.conf (German_Germany)


   OS and OS version?

 Windows 7 Ultimate SP 1

   Postgres version?

 Version 9.2.1 (same version used on all machines)

   How was it loaded via psql?

 psql -p 6789 -U postgres my_Database < myFile.sql

   Did you use the psql on the VM or did you use the psql on the host?

 psql on the VM


 So what if you load to the VM Postgres using the psql from the host?


 Again, thank you for the support.


 As Tom said LC_NUMERIC is set to C in main.c:

 /*
  * We keep these set to C always, except transiently in
 pg_locale.c; see
  * that file for explanations.
  */

 The transiently part intrigued me so I went to pg_locale.c and there was
 a big:

 * !!! NOW HEAR THIS !!!

 with regards to resetting locales. Honestly the explanation is beyond my
 expertise. I offer it only as a starting point for those that can
 understand it.



 Regards,
 Eric Svenson



 --
 Adrian Klaver
 adrian.kla...@aklaver.com





Fwd: [GENERAL] Problem with pg_dump and decimal mark

2014-12-03 Thread Eric Svenson
Hi Adrian,

thank you for your support. Here is the information you requested


 I have already done that and found something strange:

 On the PC where the backup was done with pg_dump, all locale settings of
 Postgres were English/United States. (LC_COLLATE, LC_CTYPE, LC_MONETARY,
 LC_NUMERIC in postgresql.conf)


 OS and OS version?

Windows Server 2012R2

 Postgres version?

Version 9.2.1 (same version used on all machines)

 What was the pg_dump command used?

pg_dump -p 6789 -EUTF8 -f myFile.sql my_Database


 On the first PC on which I tried to load the backup file with psql, all
 locale settings of Postgres were German_Germany. Everything is ok, the
 SQL file with '.' as decimal point was accepted without a problem


 OS and OS version?

Windows 7 Enterprise SP 1

 Postgres version?

Version 9.2.1 (same version used on all machines)

 How was it loaded via psql?

psql -p 6789 -U postgres my_Database < myFile.sql

 Was the psql on the same machine and from the same version of Postgres?

Same version, but on different machines.


 On the second PC (Virtual Machine) I had the SAME settings in
 postgresql.conf (German_Germany)


 OS and OS version?

Windows 7 Ultimate SP 1

 Postgres version?

Version 9.2.1 (same version used on all machines)

 How was it loaded via psql?

psql -p 6789 -U postgres my_Database < myFile.sql

 Did you use the psql on the VM or did you use the psql on the host?

psql on the VM

 Was the Postgres/psql on the host the same as the VM?

Same Version (9.2.1)

 What are you using for virtualization?

VM Ware Player 6.0.2

 What is host OS?

Windows 7 Enterprise SP 1 (see above)

Again, thank you for the support.

Regards,
Eric Svenson


Re: [GENERAL] Problem with pg_dump and decimal mark

2014-12-02 Thread Eric Svenson
Seems you have a locale mismatch issue. The dump is coming from a locale
 where a '.' is the decimal mark and is being restored to a locale where
 ',' is the mark. Look at what the locales are on the machines that work and
 the one that does not.



I have already done that and found something strange:

On the PC where the backup was done with pg_dump, all locale settings of
Postgres were English/United States. (LC_COLLATE, LC_CTYPE, LC_MONETARY,
LC_NUMERIC in postgresql.conf)

On the first PC on which I tried to load the backup file with psql, all
locale settings of Postgres were German_Germany. Everything is OK; the SQL
file with '.' as decimal point was accepted without a problem

On the second PC (Virtual Machine) I had the SAME settings in
postgresql.conf (German_Germany)

- no success

I tried to change all the settings to English/United States, restart
postgres

- still no success

Changed all Windows settings to English / United States

- still no success.

So what I am searching for (at the moment without success) is the 'switch'
that decides which decimal separator psql expects.


 That's what it sounds like all right, but how could that be?  The behavior
 of float8in/float8out is not supposed to be locale-dependent.

 float8in does depend on strtod(), whose behavior is locale-dependent
 according to POSIX, but we keep LC_NUMERIC set to C to force it to
 only believe that . is decimal point.


 Not sure if this makes a difference but if I am reading the original post
 correctly the OP was trying a plain text restore via psql.


This is correct.


regards and thanks for your support,
Eric Svenson
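
A quick, hedged sanity check of the distinction Tom describes, run in psql
against the affected server:

SHOW lc_numeric;                     -- the configured locale, e.g. German_Germany or English_United States.1252
SELECT '0.10001'::double precision;  -- should succeed regardless, since the backend pins LC_NUMERIC to C internally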


[GENERAL] Problem with pg_dump and decimal mark

2014-11-28 Thread Eric Svenson
Hello,

I have done a backup of a postgres database on a virtual machine (Windows
8.1) using pg_dump.

On another (non-virtual) machine the restore (with psql) worked without
problems.

On the third virtual machine, however, the restore fails.

(ERROR: invalid input syntax for type double precision: 0.1001)

When I manually change the value in the SQL file to 0,1001 and
try again, the restore resumes until the next double value.

How is this possible? Does psql really expect comma-separated decimal
values in the sql file? How can I change this behaviour?

Regards,
Eric Svenson


Re: [GENERAL] postgresql for small business

2014-11-19 Thread Eric Pierce


On 11/19/14, 6:54 AM, Ernesto Quiñones ernes...@gmail.com wrote:

Hi friends

I am looking for a postgresql solution in an appliance for small
business

know somebody who is selling this kind of solutions?

thanks

--

If you are OK with a cloud based solution, Amazon RDS might be a decent
fit for you.  We have a couple of small DBs deployed there and it works
well.





Re: [GENERAL] PANIC: could not create file pg_xlog/xlogtemp.7884: No space left on device

2014-09-30 Thread Eric Veldhuyzen
Roopeshakumar Narayansa Shalgar (rshalgar) wrote:
 Hi,
 
  
 
 I am using version 9.3.1 and see the “no space device error” even though there
 is enough space (99% free) on my disk.

Just to be sure, check the output of both 'df -h' (for disk blocks) and 'df -hi'
(for inodes). You might have run out of inodes...

Eric





Re: [GENERAL] can't restore database created with pg_dump

2014-09-21 Thread Eric Smith
Thank you for the help!

Regards,
Eric


On Sep 20, 2014, at 6:55 AM, Adrian Klaver adrian.kla...@aklaver.com wrote:

 On 09/19/2014 07:51 PM, Eric Smith wrote:
 All,
 
 I created a backup using pg_dump, postgres 8.3.  I'm trying to restore that 
 into a new postgres 9.3.5 installation.  I just created the default text 
 format output with pg_dump, and am trying to restore using psql < dumpfile.
 
 First, you should use the pg_dump command from 9.3.5 to dump the Postgres 8.3 
 database, it will 'know' about new features. This leads to the comments below.
 
 
 I have images saved as bytea, and I get the following error when trying to 
 restore:
 
 ERROR: invalid input syntax for type bytea.
 CONTEXT:  COPY images, line 8, column imageData:  
 MM\000*\003':\242\200?\300\0208$\026\015\007\204BaP\270d6\035\017\210Da\317\307\350\030\000\001\177\...
 
 
 On some of my installations the restore process works, and on some it fails. 
  All installations are moving from 8.3 to 9.3.5.  Any ideas?
 
 The default output format for bytea changed in 9.0:
 
 http://www.postgresql.org/docs/9.3/interactive/datatype-binary.html
 
 So did the handling of escape strings:
 
 http://www.postgresql.org/docs/9.3/interactive/runtime-config-compatible.html#RUNTIME-CONFIG-COMPATIBLE-VERSION
 
 standard_conforming_strings (boolean)
 
This controls whether ordinary string literals ('...') treat backslashes 
 literally, as specified in the SQL standard. Beginning in PostgreSQL 9.1, the 
 default is on (prior releases defaulted to off). Applications can check this 
 parameter to determine how string literals will be processed. The presence of 
 this parameter can also be taken as an indication that the escape string 
 syntax (E'...') is supported. Escape string syntax (Section 4.1.2.2) should 
 be used if an application desires backslashes to be treated as escape 
 characters.
 
 
 My guess is that the installations differ on the escape string handling. 
 Again, I would think the best way to handle this is to use the 9.3 version of 
 pg_dump.
 
 
 Thanks,
 Eric
 
 
 
 
 
 -- 
 Adrian Klaver
 adrian.kla...@aklaver.com
 
 


[GENERAL] can't restore database created with pg_dump

2014-09-20 Thread Eric Smith
All,

I created a backup using pg_dump, postgres 8.3.  I'm trying to restore that 
into a new postgres 9.3.5 installation.  I just created the default text format 
output with pg_dump, and am trying to restore using psql < dumpfile.

I have images saved as bytea, and I get the following error when trying to 
restore:

ERROR: invalid input syntax for type bytea.
CONTEXT:  COPY images, line 8, column imageData:  
MM\000*\003':\242\200?\300\0208$\026\015\007\204BaP\270d6\035\017\210Da\317\307\350\030\000\001\177\...


On some of my installations the restore process works, and on some it fails.  
All installations are moving from 8.3 to 9.3.5.  Any ideas?

Thanks,
Eric
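
(As Adrian notes in the reply above, the usual approach is to take the dump
with the newer 9.3.5 pg_dump pointed at the old 8.3 server; a hedged sketch,
with host, database name and install path purely hypothetical:)

/usr/pgsql-9.3/bin/pg_dump -h oldhost -p 5432 mydb > mydb.sql
psql -d mydb -f mydb.sql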





Re: [GENERAL] GiST index question

2014-09-04 Thread Eric Fleming
Thank you both, I will look into alternative data types. I don’t think ltree 
will work for my purposes but I am going to try out some others that might; 
like cube.
—
Eric Fleming
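
(For reference, a hedged sketch of the ltree route Giuseppe suggests below; it
assumes the contrib ltree extension and that the paths can be re-expressed as
dotted label paths, which is a different model than the geometric path type:)

CREATE EXTENSION ltree;

CREATE TABLE test (
    id SERIAL PRIMARY KEY,
    first_path ltree NOT NULL,
    second_path ltree NOT NULL
);

CREATE INDEX idx_test_first_path ON test USING gist (first_path);
CREATE INDEX idx_test_second_path ON test USING gist (second_path);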

On Thu, Sep 4, 2014 at 3:42 AM, Giuseppe Broccolo
giuseppe.brocc...@2ndquadrant.it wrote:

 Hi Eric,
 As Michael said, path data type does not support for gist operators.
 Anyway, you could redefine data type using 'ltree' instead of 'path'. Take
 a look on the following link:
 http://www.postgresql.org/docs/9.1/static/ltree.html
 Try to understand if this could be fine for you.
 Cheers,
 Giuseppe.
 2014-09-04 6:31 GMT+02:00 Michael Paquier michael.paqu...@gmail.com:
 On Thu, Sep 4, 2014 at 8:35 AM, Eric Fleming eflem...@gmail.com wrote:
  I have a table that I have defined as:
 
  CREATE TABLE test (
  id SERIAL PRIMARY KEY,
  first_path path NOT NULL,
  second_path path NOT NULL
  );
 
  I am attempting to create a GiST index on the two path columns using:
 
  CREATE INDEX  idx_test_first_path ON test USING gist(first_path);
  CREATE INDEX  idx_test_second_path ON test USING gist(second_path);
 
  I am getting this error:
 
  Error : ERROR:  data type path has no default operator class for access
  method gist
 
  Is it that I am not allowed to create an index on the path data type or
 do I
  need to enable an extension for this to work? Thanks in advance for your
  help.
 In-core contrib modules (and core) do not have yet support for gist
 operator classes for the datatype path as far as I recall.
 Regards,
 --
 Michael



 -- 
 Giuseppe Broccolo - 2ndQuadrant Italy
 PostgreSQL Training, Services and Support
 giuseppe.brocc...@2ndquadrant.it | www.2ndQuadrant.it

[GENERAL] GiST index question

2014-09-03 Thread Eric Fleming
I have a table that I have defined as:


CREATE TABLE test (
id SERIAL PRIMARY KEY,
first_path path NOT NULL,
second_path path NOT NULL
);


I am attempting to create a GiST index on the two “path” columns using:


CREATE INDEX  idx_test_first_path ON test USING gist(first_path);
CREATE INDEX  idx_test_second_path ON test USING gist(second_path);


I am getting this error:


Error : ERROR:  data type path has no default operator class for access method 
gist


Is it that I am not allowed to create an index on the path data type or do I 
need to enable an extension for this to work? Thanks in advance for your help.
—
Eric Fleming

[GENERAL] Memory leak with CREATE TEMP TABLE ON COMMIT DROP?

2014-06-12 Thread Eric Ridge
# select version();
                                                              version
-----------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.3.4 on x86_64-apple-darwin13.2.0, compiled by Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn), 64-bit
(1 row)

As best I can guess, Postgres has some kind of memory leak around (at least) 
temporary tables flagged to drop on commit.  It's fairly easy to reproduce:

Terminal A
--
$ createdb leak
$ for i in $(seq 1 100) ; do echo "begin; create temp table foo() on commit drop; commit;"; done | psql leak > /dev/null

Terminal B
--
$ while(true); do ps auwx | grep $PID_OF_POSTGRES_PROCESS_FROM_TERMINAL_A; 
sleep 1 ; done

And watch the RSS size continue to climb, fairly quickly.  This happens on both 
OS X and Linux (both x86_64).

We ran into this thanks to an update trigger that created a temporary table 
with on commit drop where we were literally updating millions of rows as atomic 
transactions, across about 100 concurrent connections, firing the trigger for 
each atomic update.  The server quickly ran out of memory.

It took some time to find what appears to be the actual problem, but I think 
this is it.  We've since rewritten the trigger to avoid using a temporary table 
(probably a good thing anyways) and all is well, but I was very shocked to see 
Postgres behaving badly here.

Any thoughts?  And thanks for your time!

eric






[GENERAL] problems with permissions

2014-04-28 Thread Eric Smith
All,

Having a problem dumping a database using pg_dump. This is Mac OS 10.9.2 and 
postgres 8.3.  I'm using md5 for authentication, and have the appropriate 
.pgpass file with u=rw permissions in the user's home directory.  I can access 
my database, called radiovision, using psql -U radiovision.  I think that means 
the authentication is working properly (i.e. postgres does not ask for a 
password for radiovision or give me any authentication errors).  However, when 
I try to create a backup using pg_dump -C radiovision, I am prompted for a 
password to which I respond with radiovision's password.  I then get the error:

connection to database radiovision failed:  FATAL:  password authentication 
failed for user DrRudner.


Why is postgres asking for a password for DrRudner?  This is the mac username 
of the user currently logged in, and also the user that installed postgres 
initially.

Any help is much appreciated!!

Regards,
Eric
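
(A hedged aside on the likely cause: with no -U option, pg_dump connects as the
operating-system login name, DrRudner here, so both the password prompt and the
.pgpass lookup are for that role rather than radiovision. If that is what is
happening, naming the user explicitly should behave like the psql test; the
output file name below is just an example:)

pg_dump -U radiovision -C radiovision > radiovision_backup.sql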





[GENERAL] Need help with upsert

2013-12-04 Thread Eric Lamer
Hi,

 

   I need some help with upsert.

 

   Some info on what I want to do:

 

   Each day I create a table for my firewall logs.   I have one entry for
each distinct combination of all the fields except sum, which is the number of
occurrences I have of each log entry that matches all the fields. My table has the
following fields:
firewall,action,src_zone,src_ip,dst_zone,dst_ip,proto,port,hex1,hex2,sum

   

  Each day I want to copy the last 7 days into one table so I have one table
with the last 7 days of logs.

 

  So I want to copy the data from 7 tables into 1.  If the row does not
exist, I just insert it, and if the row already exists, I just update the sum
(existing sum + new sum).

  

  Public.test is the table I use for the last 7 days logs.

  daily.daily_20131202 is table for 1 day.

  I will run this command 7 times with different daily table.

 

WITH upsert as
(update public.test T set
   firewall=S.firewall, action=S.action, src_zone=S.src_zone, src_ip=S.src_ip,
   dst_zone=S.dst_zone, dst_ip=S.dst_ip, proto=S.proto, port=S.port,
   hex1=S.hex1, hex2=S.hex2, sum=T.sum+S.sum
 from daily.daily_20131202 S where
   (T.firewall=S.firewall and T.action=S.action and T.src_zone=S.src_zone and
    T.src_ip=S.src_ip and T.dst_zone=S.dst_zone and T.dst_ip=S.dst_ip and
    T.proto=S.proto and T.port=S.port and T.hex1=S.hex1 and T.hex2=S.hex2)
 RETURNING * )
insert into public.test select * from daily.daily_20131202 a WHERE NOT
EXISTS (SELECT * FROM upsert b WHERE a.firewall=b.firewall and
a.action=b.action and a.src_zone=b.src_zone and a.src_ip=b.src_ip and
a.dst_zone=b.dst_zone and a.dst_ip=b.dst_ip and a.proto=b.proto and
a.port=b.port and a.hex1=b.hex1 and a.hex2=b.hex2);

 

When I run the command I get an error:

ERROR:  column reference "firewall" is ambiguous

LINE 3: ... a WHERE NOT EXISTS (SELECT * FROM upsert b WHERE b.firewall...

 

Any idea what I am doing wrong?

 

Also, is there an easier way to do that?

 

Thanks for the help.





Re: [GENERAL] Need help with upsert

2013-12-04 Thread Eric Lamer
LINE 3: ... a WHERE NOT EXISTS (SELECT * FROM upsert b WHERE b.firewall...

 

As you can see it is b.firewall where b is alias to table upsert.  That's
why I don't understand the error.
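
(A hedged reading of the error: in an UPDATE ... FROM, RETURNING * expands to
the columns of both public.test and daily.daily_20131202, so the upsert CTE
exposes two columns named firewall, and b.firewall could refer to either one.
Qualifying the returned columns should remove the ambiguity, roughly:)

WITH upsert as
(update public.test T set ... from daily.daily_20131202 S where ...
 RETURNING T.*)
insert into public.test
select * from daily.daily_20131202 a
WHERE NOT EXISTS (SELECT 1 FROM upsert b WHERE a.firewall=b.firewall and ...);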

 

 

 

From: Serge Fonville [mailto:serge.fonvi...@gmail.com] 
Sent: December 4, 2013 12:53 PM
To: Eric Lamer
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Need help with upsert

 

Hi,

 

 ERROR:  column reference firewall is ambiguous

 LINE 3: ... a WHERE NOT EXISTS (SELECT * FROM upsert b WHERE b.firewall...

set firewal = misses a table

HTH

 

Kind regards/met vriendelijke groet,

 

Serge Fonville

 

http://www.sergefonville.nl

 

2013/12/4 Eric Lamer e...@phoenixsecure.com mailto:e...@phoenixsecure.com


Hi,

 

   I need some help with upsert.

 

   Some info on what I want to do:

 

   Each day I create a table for my firewall logs.   I have one entry for
one occurrence of all the field except sum, which is the number of
occurrence I have of each log that match all the field. My table has the
following field:
firewall,action,src_zone,src_ip,dst_zone,dst_ip,proto,port,hex1,hex2,sum

   

  Each day I want to copy the last 7 days into one table so I have one table
with the last 7 days of logs.

 

  So I want to copy the data from 7 tables into 1.  If the row does not
exist I just insert and if the row already exist I just update the sum
(existing sum + new sum).

  

  Public.test is the table I use for the last 7 days logs.

  daily.daily_20131202 is table for 1 day.

  I will run this command 7 times with different daily table.

 

WITH upsert as 

(update public.test T set
firewall=S.firewall,action=S.action,src_zone=S.src_zone,src_ip=S.src_ip,dst_
zone=S.dst_zone,dst_ip=S.dst_ip,proto=S.proto,port=S.port,hex1=S.hex1,hex2=S
.hex2,sum=T.sum+S.sum from daily.daily_20131202 S where
(T.firewall=S.firewall and T.action=S.action and T.src_zone=S.src_zone and
T.src_ip=S.src_ip and T.dst_zone=S.dst_zone and T.dst_ip=S.dst_ip and
T.proto=S.proto and T.port=S.port and T.hex1=S.hex1 and T.hex2=S.hex2)
RETURNING * ) 

insert into public.test select * from daily.daily_20131202 a WHERE NOT
EXISTS (SELECT * FROM upsert b WHERE a.firewall=b.firewall and
a.action=b.action and a.src_zone=b.src_zone and a.src_ip=b.src_ip and
a.dst_zone=b.dst_zone and a.dst_ip=b.dst_ip and a.proto=b.proto and
a.port=b.port and a.hex1=b.hex1 and a.hex2=b.hex2);

 

When I run the command I get an  error 

ERROR:  column reference firewall is ambiguous

LINE 3: ... a WHERE NOT EXISTS (SELECT * FROM upsert b WHERE b.firewall...

 

Any idea what I am doing wrong?

 

Also, is there an easier way to do that?

 

Thanks for the help.

 





[GENERAL] v9.3.0: bug with pgdump -s?

2013-09-15 Thread Eric B. Ridge

(Using the new PostgreSQL v9.3.0)

I did a schema-only dump of the 'pg_catalog' schema in the hopes to 
study how the built-in OPERATOR CLASSes are defined.


Doing so output a few warnings:

$ [pg930] pg_dump -s -n pg_catalog template1 > /tmp/pg_catalog.sql
pg_dump: WARNING: typtype of data type any appears to be invalid
pg_dump: WARNING: typtype of data type anyarray appears to be invalid
pg_dump: WARNING: typtype of data type anyelement appears to be invalid
pg_dump: WARNING: typtype of data type anyenum appears to be invalid
pg_dump: WARNING: typtype of data type anynonarray appears to be invalid
pg_dump: WARNING: typtype of data type anyrange appears to be invalid
pg_dump: WARNING: typtype of data type cstring appears to be invalid
pg_dump: WARNING: typtype of data type event_trigger appears to be invalid
pg_dump: WARNING: typtype of data type fdw_handler appears to be invalid
pg_dump: WARNING: typtype of data type internal appears to be invalid
pg_dump: WARNING: typtype of data type language_handler appears to be invalid
pg_dump: WARNING: typtype of data type opaque appears to be invalid
pg_dump: WARNING: typtype of data type record appears to be invalid
pg_dump: WARNING: typtype of data type trigger appears to be invalid
pg_dump: WARNING: typtype of data type void appears to be invalid

Additionally, the operator classes seem to be incomplete in pg_dump's 
output:


CREATE OPERATOR CLASS box_ops
DEFAULT FOR TYPE box USING gist AS
;

...

CREATE OPERATOR CLASS text_ops
DEFAULT FOR TYPE text USING btree AS
;

I'd expect them to be fully spec'd with OPERATOR, FUNCTION, and STORAGE 
arguments.


Am I expecting too much against 'pg_catalog' or is something bugged?

eric








Re: [GENERAL] v9.3.0: bug with pgdump -s?

2013-09-15 Thread Eric B. Ridge

On 09/15/2013 02:11 PM, Andres Freund wrote:
What did you actually try to find out? Greetings, Andres Freund 


I was just trying to mentally sync up 
http://www.postgresql.org/docs/9.3/static/xindex.html with (more) 
concrete examples.  Investigating pg_catalog seemed like the obvious 
choice.  Too bad.


eric





[GENERAL] Need help with Inet type

2013-09-10 Thread Eric Lamer
Hi,

   I would like to know if there is a way to do what I need with 
Postgres.

   I have a table with logs and a table with a list of subnets.

   Right now I do the following query in my table:

   Select * From logs Where src_ip << '10.0.0.0/24' OR src_ip << 
'10.1.0.0/24' OR src_ip << '172.168.1.0/28';

   I would like to simplify that query and use a table instead as the 
source of the subnet I want to check.

   I have a table that contain a list of subnet and zone name:

   ZONE_a   10.0.0.0/24
   ZONE_a10.1.0.0/24
   ZONE_a172.16.1.0/28
   ZONE_b10.2.0.0/24
   ZONE_b10.3.0.0/24

  I can do something like:

   Select * From logs Where src_ip IN (Select ip from ip_table where zone 
= 'ZONE_a');

   Of course that does not work since it does not work with Inet type and 
I cannot use << because I have more than 1 row returned from the second 
select.

   Is there a way to achieve that?

Thanks.


___
Eric Lamer
IT Security Specialist
INTACT Financial Corporation
(450) 778-9580 ext. 3744
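
(A hedged sketch of one way to express this, assuming the subnet column in
ip_table is of type inet or cidr so the containment operator applies:)

Select l.*
From logs l
Where EXISTS (Select 1 From ip_table z
              Where z.zone = 'ZONE_a' And l.src_ip << z.ip);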

Re: [GENERAL] create database from template requires the source database to be unused

2012-06-28 Thread Haszlakiewicz, Eric
 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 It'd be really interesting to relax that limitation to ... other
 non-read-only transaction ...  and have a database that's being cloned
 block or reject
 DML, UPDATE, etc. There are some issues with that though:
 
 (a) Transactions are read/write by default. Most apps don't bother to
 SET TRANSACTION READ ONLY or BEGIN READ ONLY TRANSACTION . Most
 non-read-only transactions will make no changes, but the database can't
 know that until they complete.
 
 (b) AFAIK even truly read-only transactions can set hint bits and
 certain other system level database metadata.
 
 (c) Because of (a) it'd be necessary to block your CREATE DATABASE ...
 TEMPLATE ... until all transactions finished and sessions were idle, or
 to abort all transactions and roll them back.

I've read that postgres uses MVCC for transactions, and that it creates 
snapshots of the database for each transaction.  Couldn't the create
database command just use that snapshot?

eric



Re: [GENERAL] retrieving function raise messages in ecpg embedded sql code

2012-06-25 Thread Haszlakiewicz, Eric
 -Original Message-
 From: Jeff Davis [mailto:pg...@j-davis.com]
 
 On Mon, 2012-06-18 at 21:35 +, Haszlakiewicz, Eric wrote:
  I'm trying to get some additional information back from a trigger to
  my embedded SQL program, to essentially emulate Informix's way of
 generating serial values.
  I can get the serial to be generated, but I'm trying to figure out
 how
  to get the generated value back to my program with minimal changes to
 the SQL.
 
 Have you already looked at INSERT...RETURNING?
 
 http://www.postgresql.org/docs/9.2/static/sql-insert.html

Yes, I've started modifying things to use that, but I'd like to avoid doing
that so I can share the same code between postgresql and informix builds.

  I can't figure out how to retrieve the message raised by the trigger.
  I know it's available in some cases, because I see the message when I
  insert a row through psql, but even things like this:
 printf("%s\n", PQerrorMessage(ECPGget_PGconn("mydb")));
 
  return nothing useful.  Is there a way to get this information?
 
 Yes, these messages are delivered via notice processing (not to be
 confused with LISTEN/NOTIFY):
 
 http://www.postgresql.org/docs/9.2/static/libpq-notice-processing.html

That looks like exactly what I'm looking for, I'll try it out.  Thanks!

eric
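
(For the archives, a hedged sketch of what that looks like from an ecpg
program; it assumes the connection was opened as mydb and that libpq-fe.h and
ecpglib.h are available. The receiver is installed once, right after CONNECT:)

#include <stdio.h>
#include <libpq-fe.h>
#include <ecpglib.h>

/* called for each notice/RAISE; prints the primary message text */
static void notice_rx(void *arg, const PGresult *res)
{
    const char *msg = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);
    if (msg != NULL)
        printf("server says: %s\n", msg);
}

/* ... after: exec sql connect to :mydb; */
PQsetNoticeReceiver(ECPGget_PGconn("mydb"), notice_rx, NULL);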



[GENERAL] retrieving function raise messages in ecpg embedded sql code

2012-06-18 Thread Haszlakiewicz, Eric
I'm trying to get some additional information back from a trigger to my 
embedded SQL
program, to essentially emulate Informix's way of generating serial values.
I can get the serial to be generated, but I'm trying to figure out how to get 
the
generated value back to my program with minimal changes to the SQL.

I have a trigger that looks a bit like this:

create table mytable (mycol integer, mycol2 integer);

create or replace function functionfoo() returns trigger as $QUOTED$
BEGIN
new.mycol = nextval(TG_TABLE_NAME || '_mycol_seq');
raise INFO using MESSAGE = 'A Message';
return new;
END;
$QUOTED$ LANGUAGE 'plpgsql';

create trigger mytable_insert_trig before insert on mytable for each row when 
(new.mycol = 0) execute procedure functionfoo();


My ecpg program looks a bit like this:

exec sql begin declare section;
long mycol1;
long mycol2;
const char *mydb;
exec sql end declare section;
mycol1 = 0;
mycol2 = 1;
mydb = "mydb";
exec sql connect to :mydb;
exec sql prepare row_insert from "insert into mytable values (?, ?)";
EXEC SQL EXECUTE row_insert using :mycol1, :mycol2;


I can't figure out how to retrieve the message raised by the trigger.  I know 
it's 
available in some cases, because I see the message when I insert a row through 
psql,
but even things like this:
   printf("%s\n", PQerrorMessage(ECPGget_PGconn("mydb")));

return nothing useful.  Is there a way to get this information?

Thanks,
eric




[GENERAL] Can postgres be configure for GSS/Kerberos authentication without a keyfile?

2012-04-04 Thread Eric . Kamradt
Can postgres be configure for GSS/Kerberos authentication without a keyfile?

I have Kerberos working
I compiled postgres with --with-gssapi
I've setup the pg_hba.conf with gss and my IP address
I'm not sure what to put in the postgres.conf

I can do a kinit 
buckwheat.johnson@SOME_EXAMPLE.COM - 
and it correctly validates against an AD server
My klist is as follows
---
[root@mbr01 postgresql-9.1.3]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: 
buckwheat.johnson@SOME_EXAMPLE.COM

Valid starting ExpiresService principal
04/04/12 08:41:28  04/04/12 15:21:28  
krbtgt/SOME_EXAMPLE.COM@SOME_EXAMPLE.COM

Kerberos 4 ticket cache: /tmp/tkt0
klist: You have no tickets cached
-

I'm getting an error on the Linux CentOS server side
FATAL:  GSSAPI authentication failed for user buckwheat.johnson

The error I'm getting on my windoz client is
psql: SSPI continuation error: The specified target is unknown or unreachable
(80090303)

Do I have to set the krb_server_keyfile in the postgres.conf?  If not do I 
have to set any krb5 postgres.conf parameters?

--
Thank you for your time
Eric (alias buckwheat)



[GENERAL] [RFE] auto ORDER BY for SELECT

2012-01-23 Thread Douglas Eric

I'm not sure if this is the right list to discuss this, but I have a 
suggestion.

The ORDER BY clause, as defined in the SELECT documentation, says:
"If ORDER BY is not given, the rows are returned in whatever order the system 
finds fastest to produce."

This order is usually not wanted, as it is not predictable. I believe many 
people would expect the rows returned in this case to be ordered by the 
primary key of the table, or in the same order the rows were inserted.

I suggest changing this behavior. If one makes a SELECT statement without any 
ORDER BY, it would be clever to automatically sort by the first primary key 
found in the query, if any. The present behavior would still be used for 
queries without any primary key fields.

This would save a lot of repeated "ORDER BY <table primary key>" clauses that 
we have to add to every SELECT, even the most simple one ("SELECT * FROM 
table"), if we actually want the order of the rows to make any sense.
 

Re: [GENERAL] adding a column takes FOREVER!

2011-11-01 Thread Eric Smith
Thank you for the response... to be perfectly honest, I don't know enough to 
know what I'm not telling you.  Below is the string I use to create the table, 
so you can see the contents.  I don't think I have foreign key references or 
triggers of any kind.  Any ideas? (this is 8.3 running on Mac OS 10.7)

Thanks again,
Eric

[tableString setString:@""];
[tableString appendString:@"create table images (\"imageID\" varchar(11) primary key,"];
[tableString appendString:@"\"patientID\" varchar(11) null,"];
[tableString appendString:@"\"layoutID\" varchar(11) null,"];
for( iTooth = 0; iTooth < 33; iTooth++ ){
[tableString appendString:[NSString stringWithFormat:@"tooth_%d varchar(1) default 0,", iTooth]];
}
[tableString appendString:@"\"pixelsWide\" varchar(4) null,"];
[tableString appendString:@"\"pixelsHigh\" varchar(4) null,"];
[tableString appendString:@"\"bytesPerPixel\" varchar(1) null,"];
[tableString appendString:@"\"imageData\" bytea null,"];
[tableString appendString:@"\"filePath\" varchar(256) null,"];
[tableString appendString:@"orientation char(1) null,"];
[tableString appendString:@"sequence char(2) null,"];
[tableString appendString:@"\"genericInfo\" varchar(65536),"];
[tableString appendString:@"time time null,"];
[tableString appendString:@"\"saveState\" varchar(1) default \'0\',"];
[tableString appendString:@"date date null)"];

On Oct 21, 2011, at 8:24 PM, Tom Lane wrote:

 Eric Smith eric_h_sm...@mac.com writes:
 I'm adding a column in postgres 8.3 with the syntax:  alter table images add 
 column saveState varchar(1) default '0';  It takes a good solid 20 minutes 
 to add this column to a table with ~ 14,000 entries.  Why so long?  Is there 
 a way to speed that up?  The table has ~ 50 columns.
 
 As Craig explained, that does require updating every row ... but for
 only 14000 rows, it doesn't seem like it should take that long.
 A quick test with 8.3 on my oldest and slowest machine:
 
 regression=# create table foo as select generate_series(1,14000) as x;
 SELECT
 Time: 579.518 ms
 regression=# alter table foo add column saveState varchar(1) default '0';
 ALTER TABLE
 Time: 482.143 ms
 
 I'm thinking there is something you haven't told us about that creates a
 great deal of overhead for updates on this table.  Lots and lots o'
 indexes?  Lots and lots o' foreign key references?  Inefficient
 triggers?
 
 Or maybe it's just blocking behind somebody else's lock?
 
   regards, tom lane



Re: [GENERAL] adding a column takes FOREVER!

2011-11-01 Thread Eric Smith
Tom,

Well... there's a lot of data hiding in each of those rows... as much as 4MB in 
each.  I'll make allowances in my code so that adding a column without a 
default is a workable solution. 

Thank you,
Eric

On Nov 1, 2011, at 8:27 PM, Tom Lane wrote:

 Eric Smith eric_h_sm...@mac.com writes:
 Thank you for the response... to be perfectly honest, I don't know enough to 
 know what I'm not telling you.  Below is the string I use to create the 
 table, so you can see the contents.  I don't think I have foreign key 
 references or triggers of any kind.  Any ideas? (this is 8.3 running on Mac 
 OS 10.7)
 
 I'm wondering how much data is hiding behind this column:
 
  [tableString appendString:@\imageData\ bytea null,];
 
 Since you're installing a non-null column default value, the ALTER TABLE
 ADD COLUMN command has to rewrite the entire table.  If there are large
 images hiding in each of those only 14000 rows, there'd be a lot of
 data to copy over and so it could take awhile.
 
 (In contrast, ADD COLUMN without a default value is speedy because
 Postgres plays some tricks to avoid rewriting the table data.  That
 won't help you if you have to install non-null values in the new
 column, but it's good to know that there's a difference.)
 
   regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
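
A minimal sketch of the workaround Eric settles on above, using the table and
column names from this thread; on 8.3 the first statement avoids the table
rewrite, the UPDATE does the expensive part (and can be split into smaller
batches), and the final statement only affects rows inserted later:

ALTER TABLE images ADD COLUMN "saveState" varchar(1);           -- fast, no rewrite
UPDATE images SET "saveState" = '0' WHERE "saveState" IS NULL;  -- the slow part
ALTER TABLE images ALTER COLUMN "saveState" SET DEFAULT '0';    -- new rows only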


[GENERAL] adding a column takes FOREVER!

2011-10-21 Thread Eric Smith
All,

I'm adding a column in postgres 8.3 with the syntax:  alter table images add 
column saveState varchar(1) default '0';  It takes a good solid 20 minutes to 
add this column to a table with ~ 14,000 entries.  Why so long?  Is there a way 
to speed that up?  The table has ~ 50 columns.

Thanks,
Eric


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [BUGS] [GENERAL] One-click installer, Windows 7 32-bit, and icacls.exe

2011-10-20 Thread Eric McKeeth
On Wed, Oct 5, 2011 at 1:24 AM, Dave Page dp...@pgadmin.org wrote:



 On Wednesday, October 5, 2011, Thomas Kellerer spam_ea...@gmx.net wrote:
  Dave Page, 04.10.2011 21:46:

 
  We updated our build system to use BitRock 7 today (for unrelated
  reasons) which has new features for ACL management. We're going to
  investigate replacing cacls/icacls with those features tomorrow and
  will create some test builds ASAP.
 
  If you can provide the test builds publicly, I will be happy to test them
 and see if that behaves differently on my system.

 Thanks, we will.


 --
 Dave Page
 Blog: http://pgsnake.blogspot.com
 Twitter: @pgsnake

 EnterpriseDB UK: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company


As someone who recently spent a couple of days fighting with icacls, I
thought I might offer some insight here. What I discovered through trial and
error and much googling is that icacls has some non-intuitive behaviors
which are not at all obvious from just reading the documentation. For
example, it behaves entirely differently if you run it against a directory
instead of a file (which may contain wildcards). The command icacls.exe
C:\mydir\ options (when we targeted a directory) applied options to
every file in C:\mydir\ and all subdirectories. icacls.exe C:\mydir\*
options (when we targeted a file) applied options to all files in C:\,
but did not apply options to files in subdirectories unless the /t switch
was provided. This behavior is not directly mentioned in the documentation,
but can be inferred from the first 2 examples, if you look at them
carefully. Also, in the syntax description, the /t switch is shown for the
icacls.exe FileName syntax, but not for the icacls.exe Directory syntax.
I never would have noticed these if I weren't looking specifically for an
explanation of the observed behavior. As far as how to use icacls to set
permissions on a directory (as opposed to the files in a directory) without
recursing to all subdirectories, I never did succeed in finding that out.


[GENERAL] Global Variables?

2011-10-11 Thread Eric Radman
When writing unit tests it's sometimes useful to stub functions such as
the current date and time

-- define mock functions
CREATE OR REPLACE FUNCTION _now() RETURNS timestamp with time zone AS $$
  BEGIN RETURN '2011-10-10 10:00'; END;
$$ LANGUAGE plpgsql;

-- define tables accounts
CREATE TABLE accounts (username varchar, expiration timestamp);

-- populate with sample data
COPY accounts FROM '/home/eradman/sample_accounts.txt';

-- define view expired_accounts
CREATE OR REPLACE VIEW expired_accounts AS SELECT * FROM accounts WHERE 
expiration < _now();

-- test views
SELECT assert(0, (SELECT count(*) FROM expired_accounts)::integer);

Is it possible to declare a global variable that can be referenced from
the user-defined function _now()? I'm looking for a means of abstraction
that allows me to avoid issuing CREATE OR REPLACE FUNCTION ... before
each assert()

current_time := '2012-01-01'::timestamp
SELECT assert(5, (SELECT count(*) FROM expired_accounts)::integer);

-- 
Eric Radman  |  http://eradman.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Global Variables?

2011-10-11 Thread Eric Radman
On Tue, Oct 11, 2011 at 04:26:47PM +0200, Alban Hertroys wrote:
 On 11 October 2011 16:06, Eric Radman ericsh...@eradman.com wrote:
  When writing unit tests it's sometimes useful to stub functions such
  as the current date and time
 
 You could create a table for such constants and read your
 current-time from that table.
 
 Additionally, I would put such stub functions in a separate schema and
 create a test role with that schema as the top of their search_path.
 
 That way, you could even override system function implementations (and
 other definitions) and only have them apply to the role you're using
 for unit testing.

 CREATE ROLE unit_tester;
 CREATE SCHEMA unit_tests AUTHORIZATION unit_tester;
 SET search_path TO unit_tests, my_schema, public;
 
 CREATE TABLE unit_test_parameters (
current_time timestamp without time zone NOT NULL DEFAULT now()
 );

Excellent advice; this model works wonderfully. pg_catalog is normally
implicit, but you're right, system functions can be overridden by
setting the search path.


Eric Radman  |  http://eradman.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
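
A minimal sketch of the search_path override discussed above (the schema name
and the stubbed timestamp are taken from the thread); note that pg_catalog has
to be listed explicitly after the test schema, otherwise it is searched first
and the built-in function still wins:

CREATE SCHEMA unit_tests;

-- stub now() so that "expired" views see a fixed point in time
CREATE OR REPLACE FUNCTION unit_tests.now() RETURNS timestamp with time zone AS $$
  SELECT '2011-10-10 10:00'::timestamp with time zone;
$$ LANGUAGE sql;

SET search_path TO unit_tests, public, pg_catalog;

SELECT now();  -- resolves to unit_tests.now() and returns the stubbed value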


[GENERAL] OS X 10.7, psql, and tab completion?

2011-09-22 Thread Eric Ridge
Hi!

What's the incantation one needs to recite before compiling Postgres
8.4.x on OS X 10.7 such that psql's tab completion will work?

I love my Mac, but Apple really dorked up libedit/readline and I just
can't figure out what I'm supposed to do.

Any hints will be greatly appreciated!

eric

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] error when compiling a c function

2011-07-27 Thread Eric Ridge
On Wed, Jul 27, 2011 at 2:12 PM, Sebastian Jaenicke
sjaen...@cebitec.uni-bielefeld.de wrote:
 On Wed, Jul 27, 2011 at 09:34:20AM -0700, Ioana Danes wrote:

 #ifdef PG_MODULE_MAGIC

 #ifndef

Just to avoid confusion...  #ifdef *is* correct.  See:

http://www.postgresql.org/docs/current/static/xfunc-c.html

(I can't comment on the OP's actual problem)

eric

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's eating my space ?

2011-05-19 Thread Eric McKeeth
On Thu, May 19, 2011 at 1:05 AM, Andreas Kretschmer 
akretsch...@spamfence.net wrote:

 Georgi Ivanov georgi.r.iva...@gmail.com wrote:

  Hi,
  I wander what is taking up my space on disk ...
 
  btv=# SELECT pg_size_pretty(pg_database_size('btv_good'));
   pg_size_pretty
  
   10 GB
  (1 row)
 
 
  btv=# SELECT nspname || '.' || relname AS relation,
  pg_size_pretty(pg_total_relation_size(C.oid)) AS total_size
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
  AND C.relkind <> 'i'
  AND nspname !~ '^pg_toast'
ORDER BY pg_total_relation_size(C.oid) DESC
LIMIT 15;
 relation   | total_size
  --+
   users.users  | 703 MB
   btv.material | 557 MB
   btv_admin.material   | 269 MB
   btv_admin.block  | 24 MB
   btv.block| 20 MB
   btv_admin.block_list | 9136 kB
   btv.block_list   | 9112 kB
   multimedia.rel_image_collection2size | 2984 kB
   multimedia.rel_image_collection2tag  | 1024 kB
   btv_admin.block_common   | 976 kB
   multimedia.image_collection  | 936 kB
   btv.block_common | 832 kB
   users_admin.invalidate_notify| 752 kB
   btv_admin.tv_program | 656 kB
   btv.rel_material2tag | 592 kB
  (15 rows)
 
  The sum of biggest tables is not even close to the total db size .
  Some index going wild ?

 Your sum doesn't contains indexes and toast-tables.


 Andreas
 --
 Really, I'm not out to destroy Microsoft. That will just be a completely
 unintentional side effect.  (Linus Torvalds)
 If I was god, I would recompile penguin with --enable-fly.   (unknown)
 Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


Since he used pg_total_relation_size(), according to the manual (
http://www.postgresql.org/docs/current/interactive/functions-admin.html)
indexes and toast should be included in the numbers reported for the tables.
Unfortunately, I don't have any insight as to why pg_database_size() is
returning a number roughly 5x larger than the sum of
pg_total_relation_size() here.

-Eric
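
A hedged sketch (not from the thread) of one way to put a number on that gap:
compare the sum of pg_total_relation_size() over ordinary tables against
pg_database_size(); restricting to relkind = 'r' avoids double counting, since
each table's indexes and TOAST data are already folded into its total:

SELECT pg_size_pretty(sum(pg_total_relation_size(c.oid))::bigint) AS all_tables,
       pg_size_pretty(pg_database_size(current_database()))       AS whole_database
  FROM pg_class c
 WHERE c.relkind = 'r';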


Re: [GENERAL] Massive delete from a live production DB

2011-05-12 Thread Eric Ndengang

Am 12.05.2011 16:38, schrieb Phoenix Kiula:

On Thu, May 12, 2011 at 10:33 PM, Eric Ndengang
eric.ndengang_fo...@affinitas.de  wrote:

Am 12.05.2011 16:23, schrieb Phoenix Kiula:

Hi

Been reading some old threads (pre 9.x version) and it seems that the
consensus is to avoid doing massive deletes from a table as it'll
create so much unrecoverable space/gaps that vacuum full would be
needed. Etc.

Instead, we might as well do a dump/restore. Faster, cleaner.

This is all well and good, but what about a situation where the
database is in production and cannot be brought down for this
operation or even a cluster?

Any ideas on what I could do without losing all the live updates? I
need to get rid of about 11% of a 150 million rows of database, with
each row being nearly 1 to 5 KB in size...

Thanks! Version is 9.0.4.


Hey,
try to use pg_reorg --  http://reorg.projects.postgresql.org
but the table must have a primary key.
regards




Thanks Eric.

I do have a primary key.

I am on version 9.0.4. Will pg_reorg work with this version too? The
example on that website  mentions 8.3.

Also, is it a fast process that does not consume too many resources?
This DB is behind a very high traffic website, so I cannot have a
CLUSTER alternative like pg_reorg making my DB very slow concurrently.

How does one install the patch easily on CentOS (Linux) 64 bit?

Thanks!

Hi,

/* I am on version 9.0.4. Will pg_reorg work with this version too? The
example on that website  mentions 8.3. */

I used to use pg_reorg on version 8.4.8, and according to the documentation 
it will also work with the 9.0 version.


/* How does one install the patch easily on CentOS (Linux) 64 bit? */

You can easily install it as a contrib module. Just read the installation 
guide or the man page.


/*

Also, is it a fast process that does not consume too many resources?
This DB is behind a very high traffic website, so I cannot have a
CLUSTER alternative like pg_reorg making my DB very slow concurrently.*/

Yes, it's a fast process that is neither time nor resource consuming. The 
reorganization of a table with about 60 million rows could take less than 8 
minutes without high CPU cost.

cheers

--
Eric Ndengang
Datenbankadministrator

Affinitas GmbH  |  Kohlfurter Straße 41/43  |  10999 Berlin  |  Germany
email: eric.ndengang_fo...@affinitas.de  | tel: +49.(0)30. 991 949 5 0  |  
www.edarling.de

Geschäftsführer: Lukas Brosseder, David Khalil, Kai Rieke, Christian Vollmann
Eingetragen beim Amtsgericht Berlin, HRB 115958

Real People:  www.edarling.de/echte-paare
Real Love:www.youtube.de/edarling
Real Science: www.edarling.org


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to handle bogus nulls from ActiveRecord

2011-05-12 Thread Eric Hu
David suggested using a guesstimate default date along with a boolean to
indicate when you're using guesstimates.  I think this is a solid approach,
but if the default expected_by idea doesn't work for you, a boolean would
still make this a lot easier on the Rails side.

It sounds like you're using a setup for Heroku, so I checked the PostgreSQL
8.3 manual -- a boolean is 1 byte
(http://www.postgresql.org/docs/8.3/interactive/datatype-boolean.html).
If this isn't for Heroku, other PostgreSQL versions probably implement
booleans the same way.  Your database size should go up by # records * 1
byte + indexing overhead.

Though I don't know how many records you're working with, this seems
relatively cheap given that it will make your code more readable (if
expected_date_estimated?).  It should also simplify any remaining code you
have to write, as you won't have to think about writing elaborate if or
case statements to determine if expected_by was explicitly set.
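
A minimal sketch of that flag (the table name conveyances and the column name
expected_by_estimated are hypothetical stand-ins, not from the thread);
existing rows simply get false, i.e. "expected_by was set explicitly":

ALTER TABLE conveyances
  ADD COLUMN expected_by_estimated boolean NOT NULL DEFAULT false;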

On Thu, May 12, 2011 at 1:06 PM, James B. Byrne byrn...@harte-lyne.cawrote:


 On Thu, May 12, 2011 15:51, David Johnston wrote:
 
  +Infinity was chosen as a default to avoid the complexities of
  dealing with NULL logic in SELECTS.  I suppose that the simplest
  solution is to go with a date of -12-31 and treat that value
  like infinity.
 
  The "just make it work" solution has many merits - I would
  also probably just use -12-31 as a close approximation
  for +infinity; which itself is just there because you are
  avoiding "estimate is unknown".
 
  Why bother updating the expected_by value once the conveyance
  is no longer pending?  Do you not really care if something
  arrived early?  Even if you do not currently it seems a waste
  to throw out the data when you can readily get the same result
  as-needed (CASE WHEN expected_by >= arrived_at THEN arrived_at
  ELSE expected_by END) without giving up the ability to calculate


 The main reason to update expected_by is that sometimes the
 conveyance arrives without the expected_by ever being set.  Leaving
 the expected_by value at infinity, or 1231, or NULL, complicates
 other parts of the system.  However, leaving untouched expected_by
 values that are less than the infinite value is doable and is a
 better approach.


 --
 ***  E-Mail is NOT a SECURE channel  ***
 James B. Byrnemailto:byrn...@harte-lyne.ca
 Harte  Lyne Limited  http://www.harte-lyne.ca
 9 Brockley Drive  vox: +1 905 561 1241
 Hamilton, Ontario fax: +1 905 561 0757
 Canada  L8E 3C3


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] Index bloat with USING GIN(varchar[]) index?

2011-05-10 Thread Eric Ridge
PostgreSQL 8.4.8 on i386-apple-darwin10.7.0, compiled by GCC
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664),
64-bit

I'm not sure exactly what's going on, but I've got a table defined like this:

CREATE TABLE foo (
  ...
  tags varchar(1000)[],
  ...
);
CREATE INDEX idxfoo_tags ON foo USING GIN (tags);

I query the tags column quite a bit like so:

   SELECT * FROM foo WHERE tags @> ARRAY['value']::varchar[];

Works great.  Super fast... usually.

foo has roughly 50k records, and each row has anywhere from 2 to 5
elements in tags.   tags gets rewritten pretty regularly across
large swaths of records (1k - ~10k at a time) with different-ish
values.  What I see happing is the above SQL going from a few
milliseconds to a few seconds.  Until I REINDEX it.  Then it performs
well again until lots of rewrites happen.

Before or after the REINDEX, the query plan is always the same (and
it's a good plan):

explain analyze SELECT * FROM foo WHERE tags @> ARRAY['CATTLE']::varchar[];
QUERY PLAN
--
 Bitmap Heap Scan on foo  (cost=8.59..147.76 rows=38 width=496)
(actual time=8.870..10.073 rows=1654 loops=1)
   Recheck Cond: (tags @> '{CATTLE}'::character varying[])
   ->  Bitmap Index Scan on idxfoo_tags87  (cost=0.00..8.58 rows=38
width=0) (actual time=8.806..8.806 rows=5034 loops=1)
 Index Cond: (tags @> '{CATTLE}'::character varying[])
 Total runtime: 10.258 ms
(5 rows)


I haven't had a chance to nail down a standalone test case to
reproduce this, but it smells like index bloat.

Are GIN indexes known for bloating, especially if they're on a
varchar[]?  Any suggestions for how to prove/disprove that it's index
bloat?

eric

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
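
One low-tech way to gather evidence for or against the bloat theory (a sketch,
using the index name from the post): record the index size, REINDEX, and
compare; a large drop after the rebuild points at bloat between rewrites.

SELECT pg_size_pretty(pg_relation_size('idxfoo_tags'));  -- size before
REINDEX INDEX idxfoo_tags;
SELECT pg_size_pretty(pg_relation_size('idxfoo_tags'));  -- size after rebuild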


Re: [GENERAL] converting databases form SQL_ASCII to UTF8

2011-04-22 Thread Eric McKeeth
On Fri, Apr 22, 2011 at 9:16 AM, Geoffrey Myers
g...@serioustechnology.comwrote:

 Vick Khera wrote:

 The database's enforcement of the encoding should be the last layer that
 does so.  Your applications should be enforcing strict utf-8 encoding from
 start to finish.  Once this is done, and the old data already in the DB is
 properly encoded as utf-8, then there should be no problems switching on the
 utf-8 encoding in postgres to get that final layer of verification.


 Totally agree.  Still, the question remains, why not leave it as SQL_ASCII?


Well, if your data is supposed to be UTF-8 encoded, then any of those
characters with invalid encoding in UTF-8 could reasonably be viewed as data
errors. Leaving the database in SQL-ASCII allows those errors to continue
accumulating, which will make a switch in the future even harder. If the
lack of being able to check encoding errors at the database level doesn't
bother you, and you're fine with risking bigger pain later in order to avoid
pain now, then I see no compelling reason to move away from SQL_ASCII.

-Eric


Re: [GENERAL] pg_dump generating unrestorable data (8.4)

2011-04-06 Thread Eric McKeeth
On Sun, Apr 3, 2011 at 12:49 PM, Glenn Maynard gl...@zewt.org wrote:

 After dumping a database (pg_dump -F c database  dump), trying to restore
 it (pg_restore dump) gives:

  pg_restore: [archiver (db)] Error from TOC entry 2463; 0 58451 TABLE DATA
 table user
  pg_restore: [archiver (db)] COPY failed: ERROR:  invalid byte sequence
 for encoding UTF8: 0xe3273a
  HINT:  This error can also happen if the byte sequence does not match the
 encoding expected by the server, which is controlled by client_encoding.
  CONTEXT:  COPY table, line 1

 The surface reason for this is clear enough: invalid UTF-8 data crept into
 some tsvector columns.  This is a much more serious problem, however: the
 backup tools for the database are, without warning, generating data that
 can't be restored.

 When in a data recovery situation, a backup that won't restore is
 catastrophic.  I can't restore the database to a state it was in at the time
 of the backup; I have to spend hours of downtime figuring out what to do to
 make something usable out of my backup; and then I have to hope I've
 corrected the backup correctly before bringing the server back online.  (If
 I was in an actual backup recovery situation--fortunately I'm not--I'd be
 more inclined to edit the Postgresql source to disable this check while
 restoring the backup than to risk trying to manually fix the backup data
 directly, which is very easy to get wrong.)


Two questions come to my mind on reading this. 1st, are you certain that the
database you're restoring to has the same encoding as the source database?
I'd have a hard time considering it an error if a dump from a database with
SQL_ASCII or some non-unicode encoding failed to restore to a UTF8 encoded
database in this manner, for example. And 2nd, does specifying the client
encoding when making the backup, with 'pg_dump -E UTF8', produce behavior
closer to what you would have expected?

-Eric


Re: [GENERAL] Inserting data from one database to another using stored functions

2011-01-07 Thread Eric McKeeth
On Thu, Jan 6, 2011 at 6:56 PM, Benjie Buluran 
benjie.bulu...@igentechnologies.com wrote:

  Hi pgSQL peeps!



 I’m stumped on this question for over 3 days now.



 I need to run a stored function in Database A (“sf DBa”) which calls a
 stored function in Database B (“sf DBb”).



 BEGIN

 PERFORM dblink_connect('dbname=testdb
 port=5432 user=postgres password=123456');

 PERFORM dblink_exec('SELECT
 sp_insert_detailtable('|| pActivityId ||', '|| pserialnumber ||')');

 PERFORM dblink_disconnect();

 END;

 END;

 $BODY$

   LANGUAGE plpgsql VOLATILE

   COST 100;



 Here’s “sf DBb”:

 CREATE OR REPLACE FUNCTION sp_insert_detailtable(pactivityid integer,
 pserialnumber character varying)

   RETURNS void AS

 $BODY$

 BEGIN



 INSERT INTO DETAILTABLE(LogID, LogDetailSeq)

 VALUES(pactivityid, pserialnumber);

 END;

 $BODY$

   LANGUAGE plpgsql VOLATILE

   COST 100;



 I’m using the DEBUG function in pgAdmin, and I keep getting the “*statement
 returning results not allowed*” error in *PERFORM dblink_exec('SELECT
 sp_insert_detailtable('|| pActivityId ||', '|| pserialnumber ||')');* in
 this line.



 Your help is highly appreciated!



 Thanks and Best Regards,

 Benjie


dblink_exec is only for commands which return no result. Try replacing that
line with the following and see if it helps:

PERFORM dblink('SELECT sp_insert_detailtable('|| pActivityId ||', '||
pserialnumber ||')');

-Eric


[GENERAL] Postgres DOD Certification Common Criteria Level

2010-12-14 Thread Eric McDonald

Greetings All:
Does anyone here have any insight into what EAL level Postgres is at for 
DOD/Military installations?  I see that there's an SE-Linux fortified version 
on the Wiki, but no certifications are listed in the contents.
Any direction to certifications, STIG, or otherwise would be greatly 
appreciated--
Thanks,
Eric McDonald  |  Sr Network Administrator  |  PDC

Re: [GENERAL] Any advice on debugging hanging postgresql-8.1.21 (many postmaster's)

2010-10-04 Thread Eric Comeau
Specifically look into setting:

log_min_duration_statement (integer)

in the postgresql.conf file.
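
A minimal postgresql.conf sketch (the threshold is only an example; on 8.1 the
value is in milliseconds, -1 disables it and 0 logs every statement):

log_min_duration_statement = 1000   # log statements that run longer than 1s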

-Original Message-
From: Robert Gravsjö [mailto:tekniksupp...@blogg.se] 
Sent: Monday, October 04, 2010 7:22 AM
To: Alexander Farber
Cc: pgsql-general@postgresql.org
Subject: Re: Any advice on debugging hanging postgresql-8.1.21 (many 
postmaster's)



Alexander Farber wrote on 2010-10-04 11.48:
 I wish I could see those hanging queries, what SQL do they try to execute:

To see the query you need to enable stats_command_string in your 
postgresql.conf. See: 
http://www.postgresql.org/docs/8.1/interactive/runtime-config-statistics.html

You might also benefit from chapter 24:
http://www.postgresql.org/docs/8.1/interactive/monitoring.html

Regards,
roppert


 pref=  select * from pg_stat_activity;
   datid | datname | procpid | usesysid | usename |current_query
 | query_start | backend_start | client_addr |
 client_port
 ---+-+-+--+-+--+-+---+-+-
   16384 | pref|2681 |16385 | pref|command string not
 enabled  | | 2010-10-04 10:22:53.051483+02 | |
  -1
 .

 I have usually just one Postgres connection from my game
 (a perl script running as daemon). And then I have several
 connections from phpBB. But when I restart httpd,
 the spinning postmaster's don't disappear:

 top - 11:48:11 up  1:28,  1 user,  load average: 9.85, 8.68, 6.25
 Tasks: 126 total,  12 running, 114 sleeping,   0 stopped,   0 zombie
 Cpu0  : 36.1%us,  0.5%sy,  0.0%ni, 63.0%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
 Cpu1  : 37.1%us,  0.8%sy,  0.0%ni, 61.6%id,  0.1%wa,  0.0%hi,  0.5%si,  0.0%st
 Cpu2  : 61.4%us,  1.3%sy,  0.0%ni, 35.7%id,  1.2%wa,  0.0%hi,  0.5%si,  0.0%st
 Cpu3  : 26.6%us,  0.6%sy,  0.0%ni, 72.1%id,  0.1%wa,  0.0%hi,  0.5%si,  0.0%st
 Mem:   4019028k total,  1428256k used,  2590772k free,22324k buffers
 Swap:  2104496k total,0k used,  2104496k free,  1199036k cached

PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
   3263 postgres  16   0  122m  14m   9m R 116.0  0.4  16:16.83 postmaster
   3208 postgres  16   0  122m  14m   9m R 112.0  0.4  35:16.08 postmaster
   3275 postgres  16   0  121m  13m   9m R 110.6  0.4  11:34.32 postmaster
   3315 postgres  16   0  121m  13m   9m R 100.5  0.4   7:16.93 postmaster
   3193 postgres  16   0  122m  14m   9m R 96.4  0.4  38:44.78 postmaster
   3233 postgres  16   0  122m  14m   9m R 68.8  0.4  29:28.90 postmaster
   3243 postgres  16   0  122m  14m   9m R 53.3  0.4  25:13.96 postmaster
   3256 postgres  16   0  122m  14m   9m R 34.4  0.4  19:23.93 postmaster


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Exclusion constraint issue

2010-09-28 Thread Eric McKeeth
On Fri, Sep 24, 2010 at 3:22 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Eric McKeeth eldi...@gmail.com writes:
  why would I get the following error, since the period() function is in
 fact
  declared as immutable?

  test=# ALTER TABLE test3 ADD exclude using
  gist(period(effect_date::timestamptz, expire_date::timestamptz) with &&
 );
  ERROR:  functions in index expression must be marked IMMUTABLE

 period() might be immutable, but those casts from date to timestamptz
 are not, because they depend on the TimeZone parameter.

regards, tom lane



Thanks for pointing out what I was overlooking. After a bit of further
investigation and testing it seems like the period type I found isn't going
to work without modification for my constraint, so I ended up with the
following to get the semantics I need:

alter table test3 add exclude using gist(
box(
point(
case when effect_date = '-Infinity'::date
then '-Infinity'::double precision
else date_part('epoch'::text, effect_date)
end,
1
),
point(
case when expire_date = 'Infinity'::date
then 'Infinity'::double precision
else date_part('epoch', expire_date) - 1
end,
1
)
)
with &&
);

This is ugly, but it does seem to enforce the constraint I need, of
non-overlapping dates where sharing an endpoint is not considered an
overlap. The case blocks are because the date_part bit always returns 0 for
infinite dates, which seemed a bit counter-intuitive. Any suggestions on how
I could improve on it?
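
For what it's worth, a hedged sketch of how the non-immutable casts alone could
be avoided (it does not address the endpoint semantics or the infinite dates
above): pin the conversion to a fixed time zone, since timezone(text, timestamp)
is immutable while the plain date-to-timestamptz cast is not:

ALTER TABLE test3 ADD EXCLUDE USING gist (
    period(timezone('UTC', effect_date::timestamp),
           timezone('UTC', expire_date::timestamp)) WITH &&
);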


Re: [GENERAL] Exclusion constraint issue

2010-09-28 Thread Eric McKeeth
On Tue, Sep 28, 2010 at 4:07 PM, Jeff Davis pg...@j-davis.com wrote:

 On Tue, 2010-09-28 at 12:18 -0600, Eric McKeeth wrote:

  This is ugly, but it does seem to enforce the constraint I need, of
  non-overlapping dates where sharing an endpoint is not considered an
  overlap.

 The period type supports different inclusivity/exclusivity combinations.
 So, the period:

   '[2009-01-02, 2009-01-03)'

 Does not overlap with:

   '[2009-01-03, 2009-01-04)'

 Because [ or ] means inclusive and ( or ) means exclusive.


My problem wasn't with getting the period type to represent overlaps with
the correct inclusivity/exclusivity, but in getting it to work with my
exclusion constraint. Can you show an example of how I could get that
working perhaps?



 For further discussion, you can join the temporal-gene...@pgfoundry.org
 mailing list (sign up at
 http://pgfoundry.org/mailman/listinfo/temporal-general ). If this still
 does not solve your use case, I'd like to see if it can be modified to
 do so.

 Regards,
 Jeff Davis


I've subscribed to the temporal-general list, so we can move this discussion
there if that's more appropriate.

Thanks,
Eric


[GENERAL] Exclusion constraint issue

2010-09-24 Thread Eric McKeeth
I'm getting an error message that doesn't make sense to me. Using PostgreSQL
9.0.0 on CentOS 5.5.


Given the following table and function definitions

CREATE TABLE test3
(
  test3_id serial NOT NULL,
  fk_id integer,
  data_3 text,
  effect_date date NOT NULL,
  expire_date date NOT NULL,
  CONSTRAINT test3_pkey PRIMARY KEY (test3_id)
)

CREATE OR REPLACE FUNCTION period(timestamp with time zone, timestamp with
time zone)
  RETURNS period AS
$BODY$
SELECT CASE WHEN $1 <= $2
THEN ($1, $2)::period
ELSE ($2, $1)::period END;
$BODY$
  LANGUAGE sql IMMUTABLE STRICT


and the period datatype with it's associated functions and operators
installed from http://pgfoundry.org/projects/timespan/

why would I get the following error, since the period() function is in fact
declared as immutable?

test=# ALTER TABLE test3 ADD exclude using
gist(period(effect_date::timestamptz, expire_date::timestamptz) with && );
ERROR:  functions in index expression must be marked IMMUTABLE

Thanks in advance for any assistance.
-Eric


[GENERAL] unexpected EOF on client connection

2010-09-20 Thread Eric Ndengang

 Hello everybody,
Our company has two cluster environments with Postgres 8.4.4 installed.
Both clusters have the same structure (tables, functions, ...).
In one of them I have recently been getting many entries in the log of this type:

2010-09-20 14:21:25 CEST [31010]: [1-1] user=edarling,db=edarlingdb 
LOG:  unexpected EOF on client connection
2010-09-20 14:21:34 CEST [31158]: [1-1] user=edarling,db=edarlingdb 
LOG:  unexpected EOF on client connection
2010-09-20 14:21:42 CEST [31409]: [1-1] user=edarling,db=edarlingdb 
LOG:  unexpected EOF on client connection
2010-09-20 14:21:51 CEST [31492]: [1-1] user=edarling,db=edarlingdb 
LOG:  unexpected EOF on client connection


What is really strange is that every application connects to the 
database using the users 'edarlingapp' or 'edarlingadmin'. The 
'edarling' user is only used for administrative purposes.


Any idea how I could solve this issue?
Thanks in advance





--
Eric Ndengang
Junior Datenbankentwickler

Affinitas GmbH  |  Kohlfurter Straße 41/43  |  10999 Berlin  |  Germany
email: eric.ndengang_fo...@affinitas.de  | tel: +49.(0)30. 991 949 5 0  |  
www.edarling.de

Geschäftsführer: Lukas Brosseder, David Khalil, Kai Rieke, Christian Vollmann
Eingetragen beim Amtsgericht Berlin, HRB 115958



[GENERAL] pgcrypto pgp_pub_decrypt() fails with secret key password

2010-09-12 Thread Eric Lukather
Hi,

I have pgcrypto working fine with gpg keys that do *not* have a passphrase. 
But, 
if I try the exact same gpg -a --export commands and application code with gpg 
keys that are generated *with* a passphrase, then I get the following 
pgp_pub_decrypt() error within psql:

\set pubkey  `sed -e s/'/''/g -e 's/\\n/\\r\\n/g' -e 's/\\//g'  
test-public.key` 
\set prvkey  `sed -e s/'/''/g -e 's/\\n/\\r\\n/g' -e 's/\\//g'  
test-secret.key` 
SELECT pgp_pub_decrypt( dearmor( (SELECT armor((SELECT 
pgp_pub_encrypt('mypass', 
dearmor(:pubkey) ), dearmor(:prvkey), 'test' );
ERROR:  Corrupt data

Here are my versions:

PostgreSQL version: 8.4.4
GPG version: 1.4.10
(both were built with OpenSSL support)

I've verified my SQL code via psql and I feel pretty confident it's OK, since I 
have it narrowed down to where the only difference is whether the gpg exported 
keys were originally assigned with a passphrase, or not assigned a passphrase 
(just press enter when prompted), when running gpg --gen-key.

Any troubleshooting help or suggestions would be greatly appreciated!

Thanks, -Eric



  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Feature proposal

2010-08-25 Thread Eric Comeau
On Wed, 2010-08-25 at 17:06 +0200, Denis BUCHER wrote:
 On 25.08.2010 09:15, wstrzalka wrote:
  I'm currently playing with very large data import using COPY from
  file.
 
  As this can be extremely long operation (hours in my case) the nice
  feature would be some option to show operation progress - how many
  rows were already imported.
 
  Or maybe there is some way to do it? As long as postgres has no read-
  uncommitted I think I can estimate it only by destination table size ??
 
 By the way, did you try to optimize your postgresql server ?
 
 In my case I was able to reduce a big data update from :
 1 hour 15 minutes
 to :
 5 minutes
 
 Without even changing any line of data or code in sql !
 
 Incredible, isn't it ?
 

Curious- what postgresql.conf settings did you change to improve it?



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to refer to computed columns from other computed columns?

2010-08-16 Thread Eric Ndengang

Am 16.08.2010 14:45, schrieb Matthew Wilson:

I'm converting some procedural code to SQL as an experiment.  Here's the
pseudocode:

 c = a - b
 if c < 0 then d = 'no'
 else d = 'yes'

In SQL, I've got this:

 select a, b, a - b as c,
 case when a - b < 0 then 'no'
 else 'yes'
 end as d

 from foo;

This is a trivial example, but you can see how I calculate a - b two
separate times.

In reality, I have much nastier calculations and they happen more than
just twice.

I'm looking for an elegant solution for this puzzle.  I don't want to
repeat that a - b part over and over because I likely will need to
change how c gets defined and I don't want to have to change more than
one place in the code.

All I can come up with so far is to use a view and then another view on
top of that one:

 create view v1 as
 select a, b, a - b as c
 from foo;

 create view v2 as
 select a, b, c,
 case when c < 0 then 'no'
 else 'yes'
 end as d
 from v1;

This is better than the first solution because c is only defined in a
single place.  Is this the best possible solution?

Thanks for the help.

Matt


   
You can also use the 'WITH queries' feature to solve this problem, like 
this:


with table_1 as (select a,b, a-b as c from foo)
Select a,b, c,
case when c < 0 then 'no'
 else 'yes' end as d
 from table_1;
I hope it will help you.

--
Eric Ndengang
Junior Datenbankentwickler

Affinitas GmbH  |  Kohlfurter Straße 41/43  |  10999 Berlin  |  Germany
email: eric.ndengang_fo...@affinitas.de  | tel: +49.(0)30. 991 949 5 0  |  
www.edarling.de

Geschäftsführer: Lukas Brosseder, David Khalil, Kai Rieke, Christian Vollmann
Eingetragen beim Amtsgericht Berlin, HRB 115958


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Read Committed Transaction Isolation and SELECT ... UNION ... SELECT

2010-08-10 Thread Eric Ridge
I think I've been studying the documentation too long and have thought
myself into a circle.

http://www.postgresql.org/docs/8.4/static/transaction-iso.html says:

Also note that two successive SELECT commands can see different data,
even though they are within a single transaction, if other
transactions commit changes during execution of the first SELECT.

I get that what that means in normal cases, but what about a single
query comprised of one or more unions:

  SELECT ... FROM foo WHERE ...
UNION
  SELECT  ... FROM foo WHERE ...

Since the above is one query issued by the client, are the two SELECT
statements still operating within the same snapshot?  Is the above
considered to be one command?

I think the answer to those questions is Yes, but I'd appreciate
some clarification.

Thanks in advance!

eric

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Read Committed Transaction Isolation and SELECT ... UNION ... SELECT

2010-08-10 Thread Eric Ridge
On Tue, Aug 10, 2010 at 6:46 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 That's just one SELECT command.  Sub-SELECTs inside a query don't
 count as separate commands for this purpose; the use of SELECT in
 that way is just an artifact of the SQL grammar.

Thanks.  That's what I figured, but wanted to make sure.

eric

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Recovering Data from a crashed database

2010-04-29 Thread Eric Langheinrich
I'm looking for options to recover data from a crashed postgres database
server. We recently had a solid state storage device blow up taking the
database server with it.

The database is version 8.3, the pg_clog, pg_xlog and subdirectories of
pg_tblspc were wiped out with the crashed storage device. We do have the
files under /data/base.

pgfsck looked like the right tool for the job, but seems to be outdated and
lacking support for 8.3

I'm open to all options including outsourcing the data recovery. Any help is
appreciated.

Thank you,

Eric



[GENERAL] Cannot access various Postgres sites

2010-04-17 Thread Eric Ridge
I'm only subscribed to -general and -hackers, so if this message should go
to a different list, please feel free to forward it along, but I've been
unable to get to a couple of the Postgres websites for quite awhile.

Back on March 20 Bruce Momjian posted a link in -hackers to the 9.0 release
notes (http://developer.postgresql.org/pgdocs/postgres/release-9-0.html).
 It blew my mind that so many were able to discuss that page because
developer.postgresql.org didn't respond then, and it doesn't respond now.
 :(  I'd still like to read that page.

Today I wanted to peek inside the JDBC driver sources before I join their
mailing list and ask a bunch of dumb questions, but
jdbc.postgresql.org isn't responding either.

I've also seen intermittent connectivity issues with planet.postgresql.org,
but it seems to be working today.

Is it just my network or is something up with these hosts?

eric


Re: [GENERAL] Cannot access various Postgres sites

2010-04-17 Thread Eric Ridge
It's not DNS.  I can resolve the hostnames just fine.  In fact, those two
both resolve to the same IP:  200.46.204.71

They just don't respond.

$ telnet 200.46.204.71 80
Trying 200.46.204.71...

just hangs indefinitely.  :(

eric



I could read the release notes for 9.0 by then and still can now.  This is
from two different countries (Brazil first and now US), so this should be an
indication that you might be having network issues.

Can you please try changing your DNS servers from whatever you have now to
Google's?

Make 8.8.4.4 primary and 8.8.8.8 secondary and try again.

You can also try using OpenDNS servers.

--
Jorge Godoy jgo...@gmail.com


On Sat, Apr 17, 2010 at 11:57, Eric Ridge eeb...@gmail.com wrote:

 I'm only subscribed to -general and -hackers, so if this message should go
 to a different list, please feel free to forward it along, but I've been
 unable to get to a couple of the Postgres websites for quite awhile.

 Back on March 20 Bruce Momjian posted a link in -hackers to the 9.0 release
 notes (http://developer.postgresql.org/pgdocs/postgres/release-9-0.html).
  It blew my mind that so many were able to discuss that page because
 developer.postgresql.org didn't respond then, and it doesn't respond now.
  :(  I'd still like to read that page.

 Today I wanted to peek inside the JDBC driver sources before I join their
 mailing list and ask a bunch of dumb questions, but jdbc.postgresql.org isn't
 responding either.

 I've also seen intermittent connectivity issues with planet.postgresql.org,
 but it seems to be working today.

 Is it just my network or is something up with these hosts?

 eric




Re: [GENERAL] Cannot access various Postgres sites

2010-04-17 Thread Eric Ridge
On Sat, Apr 17, 2010 at 1:42 PM, Adrian Klaver adrian.kla...@gmail.comwrote:

I can get to both sites and telnet also. Must be something on your end :(


Yup, it was.  :(  I appreciate the confirmation that it was me, thanks!

eric


[GENERAL] GROUP BY column alias?

2010-02-18 Thread Eric B. Ridge
Maybe I'm getting too old to SQL anymore, but I ran across something yesterday 
in a machine generated query that took me over an hour to figure out.  

Here's a little testcase.  Maybe somebody can explain why the last Not 
Expected case does what it does.

select version();
PostgreSQL 8.4.1 on i386-apple-darwin10.0.0, compiled by GCC 
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5646), 64-bit

create table foo(day timestamp);
insert into foo values (now());
insert into foo values (now());

Expected:
select day, count(*) from foo group by day;
day | count 
+---
 2010-02-18 15:41:37.335357 | 1
 2010-02-18 15:41:39.471746 | 1
(2 rows)

Expected:
select day::date, count(*) from foo group by day;
day | count 
+---
 2010-02-18 | 1
 2010-02-18 | 1
(2 rows)

Expected:
select day::date, count(*) from foo group by day::date;
day | count 
+---
 2010-02-18 | 2
(1 row)

Expected:
select day::date as bar, count(*) from foo group by bar;
bar | count 
+---
 2010-02-18 | 2
(1 row)

Not Expected:
select day::date as day, count(*) from foo group by day;
day | count 
+---
 2010-02-18 | 1
 2010-02-18 | 1
(2 rows)

Note in the last case, the day column is aliased as day, but the group by is 
using the physical day column, not the alias.  That boggles my mind, 
especially when you consider the case above it, where day is aliased as bar 
and grouping by bar works as expected.

eric
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] GROUP BY column alias?

2010-02-18 Thread Eric B. Ridge
On Feb 18, 2010, at 4:31 PM, Scott Bailey wrote:
 I'm not sure why you would be surprised by that behavior. You are grouping by 
 a timestamp, so any microsecond difference will be a new group.

I get that. ;)  Hence the ::date.  This is what doesn't make sense:

Expected: select day::date as bar, count(*) from foo group by bar;
Not Expected: select day::date as day, count(*) from foo group by day;

If I alias it to something other than the actual column name, it does what I 
expect.  But if I alias it to the column name, it doesn't.

I would have thought that the precedence rules would resolve the alias first, 
then the column name, but that doesn't seem to be the case.

 If you want to make that work try:
 SELECT day::date, --no need to alias as same name

The no need to alias as same name isn't true in my case because the queries 
I'm dealing with are machine generated, and that's what the generator does, in 
all cases.

eric
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] GROUP BY column alias?

2010-02-18 Thread Eric B. Ridge
On Feb 18, 2010, at 5:52 PM, Scott Bailey wrote:

 SQL name resolution rules are that column names have higher precedence than 
 aliases and variables. So it will always bind to the column not the alias.

That explains it.  Thanks.  Breaks the rule of least surprise, but it is SQL.

eric
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
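
A hedged sketch (not from the thread) of one way a query generator can keep the
"day" alias and still group on the cast value: compute the expression in a
subquery so that the outer GROUP BY binds to a real column of that name.

SELECT day, count(*)
  FROM (SELECT day::date AS day FROM foo) AS f
 GROUP BY day;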


Re: [GENERAL] Postgres locked up

2009-12-11 Thread Eric B. Ridge
On Dec 10, 2009, at 6:58 PM, Tom Lane wrote:

 It seems likely that the root cause is having somehow lost a wakeup signal 
 somewhere

What would cause that?

eric


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

