Re: [GENERAL] unnest-like pg_stats.most_common_values and pg_stats.most_common_freqs

2017-11-17 Thread Justin Pryzby
On Fri, Nov 17, 2017 at 09:32:23PM +0100, Luca Ferrari wrote:
> Hi all,
> maybe this is trivial, but I need a hint on a way to see a table form
> of the MCVs and MCFs out of pg_stats with a query. Is it possible to
> get a set of rows each with a most common value on one column and the
> corresponding column on the other? (assuming I can cast the array
> of MCVs to the right type array)

I think you want something like this ?

postgres=# SELECT schemaname, tablename, attname, 
unnest(histogram_bounds::text::text[]), histogram_bounds FROM pg_stats LIMIT 9;
 pg_catalog | pg_pltemplate | tmplname| plperl  | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plperlu | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plpgsql | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plpython2u  | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plpython3u  | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| plpythonu   | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| pltcl   | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplname| pltclu  | 
{plperl,plperlu,plpgsql,plpython2u,plpython3u,plpythonu,pltcl,pltclu}
 pg_catalog | pg_pltemplate | tmplhandler | plperl_call_handler | 
{plperl_call_handler,plperlu_call_handler,plpgsql_call_handler,plpython2_call_handler,plpython3_call_handler,plpython_call_handler,pltcl_cal
l_handler,pltclu_call_handler}
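
For the MCVs/MCFs specifically, maybe something like this (untested; it relies on
most_common_vals and most_common_freqs having the same length, so the two unnest()
calls pair up row by row; table/column names are just examples):

SELECT schemaname, tablename, attname,
       unnest(most_common_vals::text::text[]) AS mcv,
       unnest(most_common_freqs) AS mcf
FROM pg_stats
WHERE tablename='mytable' AND attname='mycol';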

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] EXPLAIN command just hangs...

2017-11-03 Thread Justin Pryzby
On Fri, Nov 03, 2017 at 09:12:02PM +, Rhhh Lin wrote:
> I checked for dead tuples against that particular table initially as I have 
> seen performance problems before in a related 'busy' environment which needed 
> its frequency of vacuuming to be increased. So I have a query to check for 
> table with dead tuples and this table is not showing any.
>
> I also came across the suggestion that bloat might be an issue on the 
> database and how to identify and address it and it does not appear to be 
> evident here also, so thats my thinking as to why these are not factors.

It'd be helpful if you'd paste the commands+output as you run them: \dt+, \di+,
ps, vacuum, dead tuples, etc.

> I have vacuumed. I have not reindexed as it is a prod environment and I see 
> that... "REINDEX locks out writes but not reads of the index's parent 
> table.", so I may have to arrange this to avoid any interruptions (Although 
> currently, accessing this table seems completely problematic anyway!).

Perhaps you could look into pg_repack?  Note that by default it will kill
long-running transactions if needed in order to (briefly) obtain a
super-exclusive lock.

> The table is 691MB and the composite index(PK) is 723 MB.

It'd be useful to see the pg_stat_user_tables.* and pg_class.reltuples and
relpages for that table.  Also output from VACUUM VERBOSE or autovacuum logs,
if you have them (but note that vacuum does different work every time it's
re-run).
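
Maybe something like (untested; substitute your table name):

SELECT relname, reltuples, relpages FROM pg_class WHERE relname='yourtable';
SELECT * FROM pg_stat_user_tables WHERE relname='yourtable';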

> My thinking now is I may need to export this data out to a staging area 
> whereby I can attempt to "play" with it without any repercussions...

I imagine that maybe this is related to the pattern of activity on that table
(specifically around the extremes values of its indexed columns).  So it'll be
hard to reproduce, and dumping and reloading the table (or just reindexing it
without reloading it at all) will probably temporarily improve or resolve the
issue.

You could try *adding* a new index on the timestamp column alone (CREATE INDEX
CONCURRENTLY).  Re-creating the index might conceivably be the solution in
the end, and it's what pg_repack does behind the scenes.
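
Something like (untested; table and column names are only placeholders):

CREATE INDEX CONCURRENTLY mytable_ts_idx ON mytable (ts_column);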

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] checkpoint and recovering process use too much memory

2017-11-02 Thread Justin Pryzby
On Fri, Nov 03, 2017 at 01:43:32AM +, tao tony wrote:
> I had an asynchronous streaming replication HA cluster. Each node had 64G
> memory. PG is 9.6.2 and deployed on CentOS 6.
> 
> Last month the database was killed by OS kernel for OOM,the checkpoint 
> process was killed.

If you still have logs, was it killed during a large query?  Perhaps one using
a hash aggregate?

> I noticed checkpoint process occupied memory for more than 20GB,and it was 
> growing everyday.In the hot-standby node,the recovering process occupied 
> memory as big as checkpoint process.

"resident" RAM of a postgres subprocess is often just be the fraction of
shared_buffers it's read/written.  checkpointer must necessarily read all dirty
pages from s-b and write out to disk (by way of page cache), so that's why its
RSS is nearly 32GB.  And the recovery process is continuously writing into s-b.

> Now In the standby node,checkpoint and recovering process  used more then 
> 50GB memory as below,and I worried someday the cluster would be killed by OS 
> again.
> 
>PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
> 167158 postgres  20   0 34.9g  25g  25g S  0.0 40.4  46:36.86 postgres: 
> startup process   recovering 00040855004B
> 167162 postgres  20   0 34.9g  25g  25g S  0.0 40.2  17:58.38 postgres: 
> checkpointer process
> 
> shared_buffers = 32GB

Also, what is work_mem ?

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] EXPLAIN command just hangs...

2017-11-02 Thread Justin Pryzby
On Thu, Nov 02, 2017 at 09:13:05PM +, Rhhh Lin wrote:
> Yes, it may be an issue with the index, but I'd like to have some evidence
> towards that before dropping and recreating (It does not appear that bloat is
> a problem here or dead tuples either).

Why do you say those aren't an issue?  Just curious.

Have you vacuumed or reindexed (or run pg_repack) ?

How large are the table and index? \dt+ and \di+

> The reason I am very suspect of the timestamp column makeup is that if I
> remove that predicate from the EXPLAIN command and the actual query, both
> complete within seconds without issue. So I do know where the issue is (I
> just dont know what the issue is!).

It could be that you're hitting selfuncs.c:get_actual_variable_range() and the
extremes of the index point to many dead tuples (as I see Tom suggests).

You could strace the backend and see if it's reading (or writing??)
consecutively (hopefully with ample OS readahead) or randomly (without).

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] EXPLAIN command just hangs...

2017-11-02 Thread Justin Pryzby
On Thu, Nov 02, 2017 at 08:51:23PM +, Rhhh Lin wrote:
> However, this query will run for days without completing. I suspect it has to 
> do with the timestamp predicate and lack of using an appropriate index access 
> path. This is what I need to verify/establish.

Perhaps the timestamp index is badly fragmented, and perhaps it would help to
reindex/cluster/pg_repack..

> So I try and perform a simple 'EXPLAIN ' in order to check what the 
> planner has for the execution of this query.
> And after approx. six hours waiting, nothing has returned. It is still 
> executing, but has not given me back my prompt (I can see the session is 
> still active).My understanding is that the simple EXPLAIN version does not 
> actually execute the query, so I do not understand why this is also 
> performing poorly/hanging/stuck? Any ideas?

Is explain "wait"ing ?  If you do "ps -fu postgres |grep EXPLAIN" does it say
"EXPLAIN waiting" ?

Or, if you "ps uww ThePID" does it show lots of CPU(or RAM) ?

If you do "SELECT * FROM pg_stat_activity WHERE pid=??" (from "ps" or from
SELECT pg_backend_pid() before starting "explain") does it show "active" state
or waiting ?

If it's waiting, you can see what it's waiting ON by looking at pg_locks..
Maybe like: SELECT c.query, * FROM pg_locks a JOIN pg_locks b USING(relation)
JOIN pg_stat_activity c ON b.pid=c.pid WHERE a.pid=?? (from ps)

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Possible bug: could not open relation with OID [numbers] SQL State: XX000

2017-11-01 Thread Justin Pryzby
On Wed, Nov 01, 2017 at 04:11:07PM -0400, Adam Brusselback wrote:
> I have something going on, and i'm not sure what is causing it.  I
> recently upgraded our development environment to PG10, and the error
> in the subject appeared with one of my analytical functions.

What relation is that ?  I guess it's harder to know since it's within a
function, but could you add NOTICE for all the relations you're outputting ?

Something like
ts=# SELECT 'alarms'::regclass::oid;
oid | 19575
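
..or, inside the function (if it's plpgsql), something like this sketch, reusing
the same hypothetical table name:

RAISE NOTICE 'querying % (oid %)', 'alarms'::regclass, 'alarms'::regclass::oid;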

Also, if you have log_statement=all (and maybe log_destination=stderr,csvlog),
can you send the log fragment for the line with error_severity='ERROR' ?
https://www.postgresql.org/docs/current/static/runtime-config-logging.html#runtime-config-logging-csvlog

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] explain analyze output: 0 rows, 1M loops

2017-11-01 Thread Justin Pryzby
On Wed, Nov 01, 2017 at 12:19:21PM -0700, David G. Johnston wrote:
> On Wed, Nov 1, 2017 at 11:59 AM, Scott Marlowe <scott.marl...@gmail.com>
> wrote:
> 
> > So some of my output from an explain analyze here has a line that says
> > this:
> >
> > Index Scan using warranty_order_item_warranty_order_id_idx on
> > warranty_order_item woi_1 (cost=0.57..277.53 rows=6 width=137) (actual
> > time=0.110..0.111 rows=0 loops=1,010,844)
> >
> 
> Not my strong suit but, I'm pretty sure that reads: "The index was queried
> 1M+ times and none of those inquiries resulted in a record being found".
> IIUC I'd be wondering why some form of hash join wasn't used...

Except that:

https://www.postgresql.org/docs/current/static/using-explain.html
"... the loops value reports the total number of executions of the node, and
the actual time and ROWS VALUES SHOWN ARE AVERAGES PER-EXECUTION."
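
Applied to the plan above: the 0.111 ms is per execution, so that index scan
accounts for roughly 0.111 ms * 1,010,844 loops =~ 112 seconds in total, and
rows=0 is likewise the per-loop average (rounded), not the total.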

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] UPDATE syntax change (column-list UPDATE syntax fails with single column)

2017-10-30 Thread Justin Pryzby
On Mon, Oct 30, 2017 at 12:29:03PM -0400, Adam Brusselback wrote:
> I have some queries that were working in 9.6 which suddenly broke when
> moving to 10.
> 
> Digging in, the error i'm getting is: ERROR:  source for a
> multiple-column UPDATE item must be a sub-SELECT or ROW() expression

> So there was a change made, and you now cannot use the multi-column
> syntax if you're only updating a single column.  Was this intentional?

I found the same while testing during beta:
https://www.postgresql.org/message-id/flat/20170719174507.GA19616%40telsasoft.com#20170719174507.ga19...@telsasoft.com
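
If memory serves, the breaking case and its workarounds look roughly like this
(untested sketch, hypothetical table t and column c):

-- accepted by 9.6, fails on v10 with the above error:
UPDATE t SET (c) = (1);
-- workarounds on v10:
UPDATE t SET c = 1;
UPDATE t SET (c) = ROW(1);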

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why does a SELECT query cause "dirtied" buffers?

2017-10-27 Thread Justin Pryzby
On Fri, Oct 27, 2017 at 09:24:40PM +0200, Thomas Kellerer wrote:
> Under which situation does a SELECT query change a block?

https://wiki.postgresql.org/wiki/Hint_Bits
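
(In short: a plain SELECT can set hint bits on the tuples it visits, recording
whether their inserting/deleting transactions committed or aborted, and that
dirties the page even though the query itself doesn't modify any data.)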


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] parray_gin and \d errors in PG10

2017-10-22 Thread Justin Pryzby
On Sun, Oct 22, 2017 at 02:36:12PM -0400, Tom Lane wrote:
> Justin Pryzby <pry...@telsasoft.com> writes:
> > After installing parray_gin extension and pg_upgrading another instance,
> > \d is failing like so:
> 
> > [pryzbyj@database ~]$ psql ts -c '\d pg_class'
> > ERROR:  operator is not unique: "char"[] @> unknown
> > LINE 6:   (stxkind @> '{d}') AS ndist_enabled,

> match the anyarray operator.  Possibly we could use
> 
>   (stxkind @> '{d}'::pg_catalog."char"[])
> 
> That works for me without parray_gin installed, but I wonder whether
> it fails due to ambiguity if you do have parray_gin installed.  In
> principle this'd still match the text[] @> text[] operator, and I'm
> not sure whether we have an ambiguity resolution rule that would
> prefer one over the other.

ts=# SELECT oid, stxrelid::pg_catalog.regclass, 
stxnamespace::pg_catalog.regnamespace AS nsp, stxname,
  (SELECT pg_catalog.string_agg(pg_catalog.quote_ident(attname),', ')
   FROM pg_catalog.unnest(stxkeys) s(attnum)
   JOIN pg_catalog.pg_attribute a ON (stxrelid = a.attrelid AND
a.attnum = s.attnum AND NOT attisdropped)) AS columns,
  (stxkind @> '{d}'::pg_catalog."char"[]) AS ndist_enabled,
  (stxkind @> '{f}'::pg_catalog."char"[]) AS deps_enabled
FROM pg_catalog.pg_statistic_ext stat WHERE stxrelid = '1259'
ORDER BY 1;
ERROR:  operator is not unique: "char"[] @> "char"[]
LINE 6:   (stxkind @> '{d}'::pg_catalog."char"[]) AS ndist_enabled,
   ^
HINT:  Could not choose a best candidate operator. You might need to add 
explicit type casts.

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] parray_gin and \d errors in PG10

2017-10-22 Thread Justin Pryzby
After installing parray_gin extension and pg_upgrading another instance,
\d is failing like so:

[pryzbyj@database ~]$ psql ts -c '\d pg_class'
ERROR:  operator is not unique: "char"[] @> unknown
LINE 6:   (stxkind @> '{d}') AS ndist_enabled,
   ^
HINT:  Could not choose a best candidate operator. You might need to add 
explicit type casts.

[pryzbyj@database ~]$ psql ts -c '\d pg_class' -E
[...]
* QUERY **
SELECT oid, stxrelid::pg_catalog.regclass, 
stxnamespace::pg_catalog.regnamespace AS nsp, stxname,
  (SELECT pg_catalog.string_agg(pg_catalog.quote_ident(attname),', ')
   FROM pg_catalog.unnest(stxkeys) s(attnum)
   JOIN pg_catalog.pg_attribute a ON (stxrelid = a.attrelid AND
a.attnum = s.attnum AND NOT attisdropped)) AS columns,
  (stxkind @> '{d}') AS ndist_enabled,
  (stxkind @> '{f}') AS deps_enabled
FROM pg_catalog.pg_statistic_ext stat WHERE stxrelid = '1259'
ORDER BY 1;
**

ERROR:  operator is not unique: "char"[] @> unknown
LINE 6:   (stxkind @> '{d}') AS ndist_enabled,
   ^
HINT:  Could not choose a best candidate operator. You might need to add 
explicit type casts.

Thankfully this is still working:
ts=# \do @>

 List of operators
   Schema   | Name | Left arg type | Right arg type | Result type |         Description
------------+------+---------------+----------------+-------------+----------------------------
 pg_catalog | @>   | aclitem[] | aclitem| boolean | contains
 pg_catalog | @>   | anyarray  | anyarray   | boolean | contains
 pg_catalog | @>   | anyrange  | anyelement | boolean | contains
 pg_catalog | @>   | anyrange  | anyrange   | boolean | contains
 pg_catalog | @>   | box   | box| boolean | contains
 pg_catalog | @>   | box   | point  | boolean | contains
 pg_catalog | @>   | circle| circle | boolean | contains
 pg_catalog | @>   | circle| point  | boolean | contains
 pg_catalog | @>   | jsonb | jsonb  | boolean | contains
 pg_catalog | @>   | path  | point  | boolean | contains
 pg_catalog | @>   | polygon   | point  | boolean | contains
 pg_catalog | @>   | polygon   | polygon| boolean | contains
 pg_catalog | @>   | tsquery   | tsquery| boolean | contains
 public | @>   | hstore| hstore | boolean | 
 public | @>   | text[]| text[] | boolean | text array contains compared by strict
(15 rows)

This query works fine when adding cast to text[]:

ts=# SELECT oid, stxrelid::pg_catalog.regclass, 
stxnamespace::pg_catalog.regnamespace AS nsp, stxname,
  (SELECT pg_catalog.string_agg(pg_catalog.quote_ident(attname),', ')
   FROM pg_catalog.unnest(stxkeys) s(attnum)
   JOIN pg_catalog.pg_attribute a ON (stxrelid = a.attrelid AND
a.attnum = s.attnum AND NOT attisdropped)) AS columns,
  (stxkind @> '{d}'::text[]) AS ndist_enabled,
  (stxkind @> '{f}'::text[]) AS deps_enabled
FROM pg_catalog.pg_statistic_ext stat WHERE stxrelid = '1259'
ORDER BY 1;
 oid | stxrelid | nsp | stxname | columns | ndist_enabled | deps_enabled 
-+--+-+-+-+---+--
(0 rows)

Is this to be considered an issue with parray_gin or with psql ?

I don't think that's an urgent problem to fix, but if someone has a workaround
for \d I would appreciate if you'd pass it along :)

Thanks in advance
Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Weird performance difference

2017-10-20 Thread Justin Pryzby
On Fri, Oct 20, 2017 at 03:08:26PM -0800, Israel Brewster wrote:
> Summary: the following query takes around 12 seconds on my test machine. On 
> my production machine, it's at half an hour and counting. What's going on?
> 
> which, when run on my test server, has this explain analyze output: 
> https://explain.depesz.com/s/4piv <https://explain.depesz.com/s/4piv>. Around 
> 12 second runtime, which isn't too bad (in the grand scheme of things), 
> although there is probably room for improvement.

Are these casts to ::date really needed?  (Alternately, do you have an
index on column::date ?)
|WHERE outtime::date>='2017-01-01'
|ON outtime::date BETWEEN oag_schedules.startdate
|AND outtime::date BETWEEN oag_batches.eff_from
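
If the casts need to stay, maybe an expression index would help the range
condition (untested; assuming the column is on legdetail, as in the plan below):

CREATE INDEX ON legdetail ((outtime::date));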

The problem is clearly here:

 Merge Join (cost=30,604.12..31,301.12 ROWS=1 width=76) (actual 
time=1,153.883..9,812.434 ROWS=3,420,235 loops=1)
    Merge Cond: (((oag_schedules.flightnum)::text = (legdetail.flightnum)::text) AND
((oag_schedules.origin)::text = (legdetail.legfrom)::text))
Join Filter: (((legdetail.outtime)::date >= oag_schedules.startdate) AND 
((legdetail.outtime)::date <= COALESCE(oag_schedules.enddate, 
'infinity'::date)) AND (date_part('isodow'::text, 
((legdetail.outtime)::date)::timestamp without time zone) = ANY 
((oag_schedules.frequency)::double precision[])))
ROWS REMOVED BY JOIN FILTER: 6822878

Can you send "explain" (not explain analyze) for the production server?

And \d for those tables.

And/or EXPLAIN ANALYZE for a query with shorter date range on production (to
confirm it has a similar problem in rowcount estimation).

You can try munging the query to move/change the "Join Filter" components of
the query to see which one is contributing most to the rowcount estimate being
off by a factor of 3e6.

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Preventing psql from attempting to access ~/.pgpass file.

2017-10-17 Thread Justin Pryzby
On Tue, Oct 17, 2017 at 09:06:59AM +0300, Allan Kamau wrote:
> Is there a way to instruct psql not to try reading ~/.pgpass file?

https://www.postgresql.org/docs/current/static/libpq-envars.html
PGPASSFILE behaves the same as the passfile connection parameter.
passfile
Specifies the name of the file used to store passwords (see Section 33.15). 
Defaults to ~/.pgpass, or %APPDATA%\postgresql\pgpass.conf on Microsoft 
Windows. (No error is reported if this file does not exist.)

https://www.postgresql.org/docs/9.6/static/libpq-envars.html
PGPASSFILE specifies the name of the password file to use for lookups. If not 
set, it defaults to ~/.pgpass (see Section 31.15).

verifying it doesn't access the default:
pryzbyj@pryzbyj:~$ echo quit |PGPASSFILE=/nonextant strace psql 2>&1 |grep -E 
'nonex|pgpass'
stat("/nonextant", 0x7fffbd13c9f0)  = -1 ENOENT (No such file or directory)
pryzbyj@pryzbyj:~$ 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Determine size of table before it's committed?

2017-10-11 Thread Justin Pryzby
On Wed, Oct 11, 2017 at 10:43:26AM -0300, Seamus Abshere wrote:
> I've had an `INSERT INTO x SELECT FROM [...]` query running for more
> then 2 days.
> 
> Is there a way to see how big x has gotten? Even a very rough estimate
> (off by a gigabyte) would be fine.

On linux:

Run ps -fu postgres (or SELECT pid, query FROM pg_stat_activity) to find the
backend's PID, and look at: ls -l /proc/PID/fd

If it's writing to a segment file like XX.22, it's written ~22GB (table data
files are split into 1GB segments, so the .22 suffix implies about 22GB).

You can also SELECT relfilenode FROM pg_class WHERE oid='x'::regclass (or
relname='x').

Or try using strace (but beware I've seen its interruption to syscalls change
the behavior of the program being straced).

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] core system is getting unresponsive because over 300 cpu load

2017-10-10 Thread Justin Pryzby
On Tue, Oct 10, 2017 at 01:40:07PM -0700, pinker wrote:
> Hi to all!
> 
> We've got problem with a very serious repetitive incident on our core
> system. Namely, cpu load spikes to 300-400 and the whole db becomes
> unresponsive. From db point of view nothing special is happening, memory
> looks fine, disks io's are ok and the only problem is huge cpu load. Kernel
> parameters that are increasing with load are always the same:

> * disabled transparent huge pages (they were set before unfortunately to
> 'always')

Did you also try disabling KSM ?
echo 2 |sudo tee /sys/kernel/mm/ksm/run

I believe for us that was affecting a postgres VM(QEMU/KVM) and maybe not
postgres itself.  Worth a try ?

https://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Any known issues Pg 9.3 on Ubuntu Xenial kernel 4.4.0?

2017-09-20 Thread Justin Pryzby
On Wed, Sep 20, 2017 at 01:14:14PM -0500, Jerry Sievers wrote:
> Be curious to hear of issues encountered and particular to eager to know
> if disabling any kernel 4.x features helped.

What was the old kernel/OS ?  wheezy / kernel 3.x ?

Perhaps try these ideas ?
https://www.postgresql.org/message-id/20170718180152.GE17566%40telsasoft.com

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] "Canceling authentication due to timeout" with idle transaction and reindex

2017-09-15 Thread Justin Pryzby
On Fri, Sep 15, 2017 at 06:49:06AM -0500, Ron Johnson wrote:
> On 09/15/2017 06:34 AM, Justin Pryzby wrote:
> [snip]
> >But you might consider: 1) looping around tables/indices rather than "REINDEX
> >DATABASE", and then setting a statement_timeout=9s for each REINDEX 
> >statement;
> 
> Is there a way to do that within psql?  (Doing it from bash is trivial, but
> I'd rather do it from SQL.)

Not that I know, but it wouldn't help me, since our script also calls pg_repack
(for indices on system and some other tables), and also has logic to handle
historic partition tables differently.

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] "Canceling authentication due to timeout" with idle transaction and reindex

2017-09-15 Thread Justin Pryzby
On Fri, Sep 15, 2017 at 12:25:58PM +0200, s19n wrote:

> 1. with "\set AUTOCOMMIT off" in my psqlrc, issue a
> "SELECT * FROM pg_stat_activity;" and leave it there
This probably obtains a read lock on some shared, system tables/indices..

> 2. in a different connection, issue a database REINDEX (of any database
> different from 'postgres')
.. and this waits to get an EXCLUSIVE lock on those tables/inds, but has to
wait on the read lock;

> * Any further attempt to create new connections to the server, to any
> database, does not succeed and leads to a "FATAL: canceling authentication
> due to timeout" in the server logs.
.. and logins are apparently waiting behind the reindex's lock request (which is
itself still waiting for its exclusive lock).

You can look at the locks (granted vs waiting) in SELECT * FROM pg_locks

But you might consider: 1) looping around tables/indices rather than "REINDEX
DATABASE", and then setting a statement_timeout=9s for each REINDEX statement;
and/or, 2) use pg_repack, but I don't think it handles system tables.
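
For (1), one rough way to script it is to generate per-table REINDEX statements
and run them with a per-statement timeout, something like (untested; the timeout
value and schema filter are only examples):

SET statement_timeout = '9s';
SELECT format('REINDEX TABLE %I.%I', schemaname, tablename)
  FROM pg_tables
 WHERE schemaname NOT IN ('pg_catalog', 'information_schema');
-- ..then run each generated statement in that same session.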

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Count column with name 'count' returns multiple rows. Why?

2017-08-18 Thread Justin Pryzby
On Fri, Aug 18, 2017 at 10:47:37PM +0200, Peter J. Holzer wrote:
> On 2017-08-18 06:37:15 -0500, Justin Pryzby wrote:
> > On Fri, Aug 18, 2017 at 01:01:45PM +0200, Rob Audenaerde wrote:
> > > I don't understand why this query:
> > > 
> > >select count(base.*) from mytable base;
> > > 
> > > does return multiple rows.
> > > 
> > >select count(1) from mytable base;
> > > 
> > > returns the proper count.
> > > 
> > > There is a column with the name 'count'.
> > > 
> > > Can anyone please explain this behaviour?
> > 
> > https://www.postgresql.org/docs/9.6/static/sql-expressions.html#SQL-EXPRESSIONS-FUNCTION-CALLS
> > https://www.postgresql.org/docs/9.6/static/rowtypes.html#ROWTYPES-USAGE
> 
> Maybe I overlooked it, but I don't see anything in those pages which
> explains why «count» is parsed as a column name in the first example and
> as a function name in the second.
> 
> Nor do I see what «count(base.*)» is supposed to mean. It seems to be
> completely equivalent to just writing «count», but the part in
> parentheses is not ignored: It has to be either the table name or the
> table name followed by «.*». Everything else I tried either led to a
> syntax error or to «count» being recognized as a function. So apparently
> columnname open-parenthesis tablename closed-parenthesis is a specific
> syntactic construct, but I can't find it documented anywhere.

| Another special syntactical behavior associated with composite values is that
|we can use functional notation for extracting a field of a composite value. The
|simple way to explain this is that the notations field(table) and table.field
|are interchangeable. For example, these queries are equivalent:


| Tip: Because of this behavior, it's unwise to give a function that takes a
|single composite-type argument the same name as any of the fields of that
|composite type. If there is ambiguity, the field-name interpretation will be
|preferred, so that such a function could not be called without tricks. One way
|to force the function interpretation is to schema-qualify the function name,
|that is, write schema.func(compositevalue).

pryzbyj=# select base.count from s91 base;
 count 
---
 1
 2
 3
(3 rows)

pryzbyj=# select pg_catalog.count(base.*) from s91 base;
count | 3


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Count column with name 'count' returns multiple rows. Why?

2017-08-18 Thread Justin Pryzby
On Fri, Aug 18, 2017 at 01:01:45PM +0200, Rob Audenaerde wrote:
> I don't understand why this query:
> 
>select count(base.*) from mytable base;
> 
> does return multiple rows.
> 
>select count(1) from mytable base;
> 
> returns the proper count.
> 
> There is a column with the name 'count'.
> 
> Can anyone please explain this behaviour?

https://www.postgresql.org/docs/9.6/static/sql-expressions.html#SQL-EXPRESSIONS-FUNCTION-CALLS
https://www.postgresql.org/docs/9.6/static/rowtypes.html#ROWTYPES-USAGE


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] partitioning question

2017-07-30 Thread Justin Pryzby
On Mon, Jul 31, 2017 at 10:25:54AM +1000, Alex Samad wrote:
> I note that you link to P10 and I am currently looking at 9.6.  The changes
> do look nice for partitioning for p10.

Yes sorry, pg10 is beta - avoid using it except for testing purposes.

> I will add currently we don't delete anything, we will keep adding to it.
> 
> Also I am thinking my insert trigger becomes a lot smaller and easier if I
> leave it at yearly.
Note: the trigger function can either be a static function updated monthly (to
handle the next month), preferably with the most recent months tested first (so
a typical newly-inserted row only goes through one if/case test).

Alternately, the trigger function can dynamically compute the table into which
to insert using plpgsql "format()" similar to here:
https://www.postgresql.org/docs/9.1/static/plpgsql-statements.html#PLPGSQL-QUOTE-LITERAL-EXAMPLE
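
A minimal sketch of the dynamic variant (untested; assumes monthly children named
like parent_YYYYMM and a timestamp column called "ts"):

CREATE OR REPLACE FUNCTION parent_insert_trigger() RETURNS trigger AS $$
BEGIN
    -- route the new row into the child table for its month
    EXECUTE format('INSERT INTO %I SELECT ($1).*',
                   'parent_' || to_char(NEW.ts, 'YYYYMM'))
      USING NEW;
    RETURN NULL;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER parent_insert BEFORE INSERT ON parent
    FOR EACH ROW EXECUTE PROCEDURE parent_insert_trigger();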

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] partitioning question

2017-07-30 Thread Justin Pryzby
On Mon, Jul 31, 2017 at 09:15:29AM +1000, Alex Samad wrote:
> Hi
> 
> I was about to partition a large (?) approx 3T of data 2B rows into
> partition tables but broken up into MM ...
> 
> Now I have been reading about limiting the number of partitions otherwise
> it could slow down the parser.
> 
> My reasoning for limiting to MM was that most of the request would be
> monthly based.
> 
> Should I be making the partitioning based on  instead and have lots
> more indexs.
> 
> If I have an index on the timestamp field will it help limiting to MM ?

The major advantages of partitions are enumerated here:
https://www.postgresql.org/docs/10/static/ddl-partitioning.html#ddl-partitioning-overview

For your case, it might be that seq scans of an entire "monthly" partition turn
out to be very advantageous, compared with index scan (or seq scan of entire
3TB data).

Also DROPing the oldest partition every month is commonly very much more
efficient than DELETEing it..

There are warnings like these:

|All constraints on all partitions of the master table are examined during
|constraint exclusion, so large numbers of partitions are likely to increase
|query planning time considerably. Partitioning using these techniques will work
|well with up to perhaps a hundred partitions; don't try to use many thousands
|of partitions.

Unless you have 100s of years of data I don't think it would be a problem.

For us, having hundreds of partitions hasn't been an issue (planning time is
insignificant for our analytic report queries).  But there's an overhead to
partitions and at some point the cost becomes significant.  (Actually, I think
one cost which *did* hit us, while experimenting with *daily* partition
granularity of every table, was probably due to very large pg_statistic and
pg_attribute tables, which no longer fit in buffer cache).

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Partitioning (constraint exclusion involving joins)

2017-07-25 Thread Justin Pryzby
On Tue, Jul 25, 2017 at 06:21:43PM +0530, Krithika Venkatesh wrote:
> I have a table that is partitioned on a numeric column (ID).
> 
> Partitioning works when I query the table with no joins.
> 
> SELECT * FROM TABLE A a WHERE ID IN (SELECT ID FROM TABLE B b WHERE
> CREATED_TS = CURRENT_TIMESTAMP)
> 
> Partitioning doesn't work when I do join.
> 
> SELECT A.* FROM TABLE A a INNER JOIN TABLE B b ON  a.ID = b.ID.

I think you mean "constraint exclusion doesn't work when yo do a join",

which is because it only works on simple values compiled before the planner
gets to see them:

main=# explain SELECT COUNT(1) FROM eric_enodeb_metrics WHERE start_time>now(); 
-- -'999 minutes'::interval;
                                   QUERY PLAN
------------------------------------------------------------------------------------
 Aggregate  (cost=62.44..62.45 rows=1 width=8)
   ->  Append  (cost=0.00..62.40 rows=14 width=0)
 ->  Seq Scan on eric_enodeb_metrics  (cost=0.00..0.00 rows=1 width=0)
   Filter: (start_time > now())
 ->  Index Only Scan using eric_enodeb_201607_idx on eric_enodeb_201607 
 (cost=0.42..4.44 rows=1 width=0)
   Index Cond: (start_time > now())
 ->  Index Only Scan using eric_enodeb_201608_idx on eric_enodeb_201608 
 (cost=0.42..4.44 rows=1 width=0)
   Index Cond: (start_time > now())
 ->  Index Only Scan using eric_enodeb_201609_idx on eric_enodeb_201609 
 (cost=0.42..4.44 rows=1 width=0)
   Index Cond: (start_time > now())

https://www.postgresql.org/docs/current/static/ddl-partitioning.html
|The following caveats apply to constraint exclusion:
| Constraint exclusion only works when the query's WHERE clause contains
|constants (or externally supplied parameters). For example, a comparison
|against a non-immutable function such as CURRENT_TIMESTAMP cannot be optimized,
|since the planner cannot know which partition the function value might fall
|into at run time.
[..]
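
For comparison, with a literal constant the planner can exclude the non-matching
children (untested sketch against the same table):

explain SELECT COUNT(1) FROM eric_enodeb_metrics WHERE start_time > '2017-07-01';
-- the Append should now contain only the parent plus the partitions whose
-- CHECK constraints overlap start_time > '2017-07-01'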


.. and see an early mail on its implementation, here:
https://www.postgresql.org/message-id/1121251997.3970.237.camel@localhost.localdomain

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] huge RAM use in multi-command ALTER of table heirarchy

2017-07-06 Thread Justin Pryzby
I've seen this before while doing SET STATISTICS on a larger number of columns
using xargs, but just came up while doing ADD of a large number of columns.
Seems to be roughly linear in number of children but superlinear WRT columns.
I think it has to do with catalog update / cache invalidation with many
ALTERs*children*columns?

32 cols and 2 children    => 12MB
256 cols and 11 children  => 74MB
256 cols and 111 children => 582MB
512 cols and 11 children  => 229MB

(in our "huge" case, there were ~1600 columns and maybe even more children)

I was testing with this command
PGHOST=/tmp PGPORT= sh -ec 'for maxcols in 512 ; do 
~/src/postgresql.install/bin/postgres -D ~/src/postgres.dat -c port= & 
sleep 4; cols=$(for d in `seq 1 $maxcols`; do echo "ADD c$d int,"; done 
|xargs); PGOPTIONS="-c client_min_messages=warning" psql postgres -qc "DROP 
TABLE t CASCADE" || [ $? -eq 1 ]; psql postgres -qc "CREATE TABLE t()"; for c 
in `seq 1 11`; do psql postgres -qc "CREATE TABLE c$c() INHERITS(t)"; done; for 
d in `seq 1 $maxcols`; do echo "ALTER TABLE t ADD c$d int;"; done 
|PGOPTIONS="-c client_min_messages=DEBUG3 -c log_statement_stats=on" psql 
postgres -c "ALTER TABLE t ${cols%,}" 2>/tmp/pg.err2; 
~/src/postgresql.install/bin/pg_ctl -swD ~/src/postgres.dat stop; done'

..and log_statment_stats with a variation on the getrusage patch here 
https://www.postgresql.org/message-id/20170615145824.GC15684%40telsasoft.com

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] inheritence children with integer columns of differing width

2017-06-19 Thread Justin Pryzby
I wondered if anyone had considered allowing inheritance children to have
different column types than the parent (and each other).

I'm thinking of the trivial (?) case of smallint/int/bigint.  Reason is that
when we load data which exceeds the threshold for the current data type we have
to promote the column, rewriting the table, which can take a very long time,
and use very large amount of space.  We've had to start uninheriting all but
the most recent children before ALTERing to make it more reasonable (and then
separately ALTER+reinherit each child) - it's especially painful when a key
column grows beyond "int", and many tables need to be altered all at once..

It seems to me this is what would happen if one were to UNION ALL the children,
although I see the plan differs with differering type:

pryzbyj=# create table ii(i bigint);
pryzbyj=# create table i(i int);
pryzbyj=# explain SELECT * FROM ii UNION ALL SELECT * FROM i;
Append  (cost=0.00..110.80 rows=4540 width=6)
  ->  Seq Scan on ii  (cost=0.00..31.40 rows=2140 width=8)
  ->  Subquery Scan on "*SELECT* 2"  (cost=0.00..58.00 rows=2400 width=4)
->  Seq Scan on i  (cost=0.00..34.00 rows=2400 width=4)

pryzbyj=# alter table i ALTER i TYPE bigint;
ALTER TABLE
pryzbyj=# explain SELECT * FROM ii UNION ALL SELECT * FROM i;
Append  (cost=0.00..62.80 rows=4280 width=8)
  ->  Seq Scan on ii  (cost=0.00..31.40 rows=2140 width=8)
  ->  Seq Scan on i  (cost=0.00..31.40 rows=2140 width=8)

If it were allowed for children to have int columns with differing widths, then
to promote an int column, we would uninherit the historic children, ALTER the
parent (and most recent tables), and then reinherit the children (unless ALTER
on its own avoided rewriting tables in such a case).

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] workaround for column cross-correlation

2017-06-12 Thread Justin Pryzby
On Mon, Jun 12, 2017 at 08:46:57PM -0700, Jeff Janes wrote:
> On Mon, Jun 12, 2017 at 8:17 PM, Justin Pryzby <pry...@telsasoft.com> wrote:
> 
> > I know PG 10 will have support "CREATE STATISTICS.." for this..
> >
> > ..but I wondered if there's a recommended workaround in earlier versions ?
> 
> Not without seeing the query

With my workaround:
ts=# explain ANALYZE SELECT t1.sect_id, t1.start_time as period, sum (1) FROM 
enodeb_ncell_view t1, enodeb_ncell_view inc
WHERE ((t1.start_time >= '2017-04-30 00:00:00' AND t1.start_time < '2017-05-01 
00:00:00'))
AND ((inc.start_time >= '2017-04-30 00:00:00' AND inc.start_time < '2017-05-01 
00:00:00'))
AND t1.start_time = inc.start_time
AND ROW((t1.sect_id,t1.neigh_sect_id))= ROW((inc.neigh_sect_id,inc.sect_id))
GROUP BY t1.sect_id, period;
HashAggregate  (cost=63149.59..63371.74 rows=22215 width=10) (actual 
time=80092.652..80097.521 rows=22464 loops=1)
...

Without:
ts=# explain ANALYZE SELECT t1.sect_id, t1.start_time as period, sum (1) FROM 
enodeb_ncell_view t1, enodeb_ncell_view inc
WHERE ((t1.start_time >= '2017-04-30 00:00:00' AND t1.start_time < '2017-05-01 
00:00:00'))
AND ((inc.start_time >= '2017-04-30 00:00:00' AND inc.start_time < '2017-05-01 
00:00:00'))
AND t1.start_time = inc.start_time
AND t1.sect_id=inc.neigh_sect_id AND t1.neigh_sect_id=inc.sect_id
GROUP BY t1.sect_id, period;
GroupAggregate  (cost=57847.32..62265.54 rows=402 width=10) (actual 
time=694.186..952.744 rows=22464 loops=1)
...

This is a small inner subquery of a larger report - sum(1) is a placeholder for
other aggregates I've stripped out.

> > 2) memory explosion in hash join (due to poor estimate?) caused OOM.
> 
> As far as I know, the only way a hash join should do this is if the join
> includes a huge number of rows with exactly the same 32 bit hash codes.
> Otherwise, it should spill to disk without causing OOM.  Hash aggregates,
> on the other hand, are a different matter.

That's almost certainy what I meant.

   ->  Subquery Scan on data_rx_enb  (cost=3409585.76..3422861.74 rows=663799 
width=20) (actual time=510475.987..512069.064 rows=2169821 loops=1)
 ->  HashAggregate  (cost=3409585.76..3416223.75 rows=663799 width=16) 
(actual time=510475.984..511650.337 rows=2169821 loops=1)
   Group Key: eric_enodeb_cell_metrics_1.site_id, 
eric_enodeb_cell_metrics_1.start_time

Thanks,
Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] workaround for column cross-correlation

2017-06-12 Thread Justin Pryzby
I know PG 10 will have support "CREATE STATISTICS.." for this..

..but I wondered if there's a recommended workaround in earlier versions ?
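
For reference, the PG 10 version would be something like this (using the column
pair from the plan below; the statistics name is arbitrary):

CREATE STATISTICS sect_pair_stats (ndistinct, dependencies)
    ON sect_id, neigh_sect_id
    FROM eric_enodeb_cellrelation_metrics;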

We had two issues:

1) improper estimate caused poor plans (nested loops due to ::date, GROUP BY,
cross-column stats, and maybe more).
2) memory explosion in hash join (due to poor estimate?) caused OOM.

I tried ROW/ARRAY comparison for the correlated columns which seems to do what
I want, acting as a single comparison rather than two independent comparisons.

->  Merge Join  (cost=57811.81..62290.04 rows=114606 width=58) (actual 
time=5174.556..5698.323 rows=204672 loops=1)
   Merge Cond: ((eric_enodeb_cellrelation_metrics.start_time = 
eric_enodeb_cellrelation_metrics_1.start_time) AND 
((ROW(eric_enodeb_cellrelation_metrics.sect_id, 
eric_enodeb_cellrelation_metrics.neigh_sect_id)) = 
(ROW(eric_enodeb_cellrelation_metrics_1.neigh_sect_id, 
eric_enodeb_cellrelation_metrics_1.sect_id))))

Thanks,
Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Deadlock with single update statement?

2017-06-10 Thread Justin Pryzby
On Sat, Jun 10, 2017 at 03:16:26PM -0400, Tom Lane wrote:
> Rob Nikander <rob.nikan...@gmail.com> writes:
> >> On Jun 10, 2017, at 10:34 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> >> […] but it'd be better to adjust the query to ensure a deterministic
> >> update order.
> 
> > Thank you for the answer. Since `update` has no `order by` clause, I’m 
> > guessing there’s no way to do this with just the `update` statement, and 
> > that I should use `select … order by … for update’ for this.
> 
> Yeah, that's one easy answer.  You can probably force it with a sub-select
> in the UPDATE, as well, but it will take more thought.

I have a question about this ... I'm already using SELECT .. FOR UPDATE,
prefixed with "SET synchronize_seqscans=off".

..using "ORDER BY ctid" since our SELECT statements for a given table may
differ (previously I'd tried using "ORDER BY 1,2,...n" for each key column).

And yet I still see deadlocks.

Example:

session_id|593be2ff.68f7
log_time|2017-06-10 01:16:37.786-11
pid|26871
detail|Process 26871 waits for ShareLock on transaction 13693505; blocked by 
process 26646.
Process 26646 waits for ShareLock on transaction 13693504; blocked by process 
26871.
Process 26871: SELECT db_column_name,table_name FROM 
huawei_m2000_counter_details ORDER BY ctid FOR UPDATE
Process 26646: SELECT db_column_name,table_name FROM 
huawei_m2000_counter_details ORDER BY ctid FOR UPDATE
session_line|2923
message|deadlock detected

session_id|593be2ff.68f7
log_time|2017-06-10 01:16:27.633-11
pid|26871
detail|
session_line|2917
message|statement: BEGIN

session_id|593be2ff.68f7
log_time|2017-06-10 01:16:27.638-11
pid|26871
detail|
session_line|2918
message|statement: SET synchronize_seqscans=off

session_id|593be2ff.68f7
log_time|2017-06-10 01:16:27.64-11
pid|26871
detail|
session_line|2919
message|statement: SELECT db_column_name,table_name FROM 
huawei_m2000_counter_details ORDER BY ctid FOR UPDATE

session_id|593be2ff.68f7
log_time|2017-06-10 01:16:28.994-11
pid|26871
detail|Process holding the lock: 29467. Wait queue: 26871, 26646.
session_line|2920
message|process 26871 still waiting for ShareLock on transaction 13693494 after 
1000.070 ms

session_id|593be2ff.68f7
log_time|2017-06-10 01:16:36.786-11
pid|26871
detail|
session_line|2921
message|process 26871 acquired ShareLock on transaction 13693494 after 8791.608 
ms

session_id|593be2ff.68f7
log_time|2017-06-10 01:16:37.786-11
pid|26871
detail|Process holding the lock: 26646. Wait queue: .
session_line|2922
message|process 26871 detected deadlock while waiting for ShareLock on 
transaction 13693505 after 1000.080 ms

2nd process:

session_id|593be2fd.6816
log_time|2017-06-10 01:16:28.947-11
pid|26646
detail|
session_line|2301
message|statement: BEGIN

session_id|593be2fd.6816
log_time|2017-06-10 01:16:28.949-11
pid|26646
detail|
session_line|2302
message|statement: SET synchronize_seqscans=off

session_id|593be2fd.6816
log_time|2017-06-10 01:16:28.949-11
pid|26646
detail|
session_line|2303
message|statement: SELECT db_column_name,table_name FROM 
huawei_m2000_counter_details ORDER BY ctid FOR UPDATE

session_id|593be2fd.6816
log_time|2017-06-10 01:16:29.956-11
pid|26646
detail|Process holding the lock: 29467. Wait queue: 26871, 26646.
session_line|2304
message|process 26646 still waiting for ShareLock on transaction 13693494 after 
1000.076 ms

session_id|593be2fd.6816
log_time|2017-06-10 01:16:36.786-11
pid|26646
detail|
session_line|2305
message|process 26646 acquired ShareLock on transaction 13693494 after 7829.560 
ms

session_id|593be2fd.6816
log_time|2017-06-10 01:16:37.833-11
pid|26646
detail|
session_line|2306
message|statement: RESET synchronize_seqscans

Thanks in advance for any clue or insight.

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Current best practice for maximum shared_buffers settings on big hardware?

2017-05-24 Thread Justin Pryzby
On Wed, May 24, 2017 at 08:24:15AM -0400, Bill Moran wrote:
> ... I tried allocating 64G to shared buffers and we had a bunch of problems
> with inconsistent performance, including "stall" periods where the database
> would stop responding for 2 or 3 seconds.  After trying all sorts of tuning
> options that didn't help, the problem finally went away after reducing
> shared_buffers to 32G. I speculated, at the time, that the shared buffer code
> hit performance issues managing that much memory, but I never had the
> opportunity to really follow up on it.

I think you were hitting an issue related to "kernel shared memory" and maybe
"transparent huge pages".

I was able to work around similar issues with ~32GB allocations to QEMU/KVM
running on something like kernel 3.13.  I didn't spend time to narrow down the
problem, and I don't know if the behavior is better with recent kernel.

/sys/kernel/mm/ksm/run=2
... and maybe also:
/sys/kernel/mm/transparent_hugepage/defrag=madvise
/sys/kernel/mm/ksm/merge_across_nodes=0 

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help: Installing 9.6 breaks local connections to 9.2 on Centos 6.9

2017-05-15 Thread Justin Pryzby
On Mon, May 15, 2017 at 12:55:48PM -0700, Ken Tanzer wrote:
> Hi.  On a Centos 6.9 server (in the cloud with Rackspace), I'm wanting to
> install PGDG 9.6 alongside the already-running 9.2.  After installing the
> 9.6 packages (and even before doing an initdb), I am no
> longer able to make a local connection to the 9.2 server.  Instead I get
> the message:

See eg.
https://www.postgresql.org/message-id/21044.1326496...@sss.pgh.pa.us
https://www.postgresql.org/message-id/0a21bc93-7b9c-476e-aaf4-0ff71708e...@elevated-dev.com

I'm guessing you upgraded the client libraries, which probably changed the
(default) socket path.

Your options are to specify the path to the socket (maybe in /tmp for the running
PG92?), change to a TCP connection, or set the server option
unix_socket_directories.

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Partitioning and Table Inheritance

2017-05-12 Thread Justin Pryzby
On Mon, May 08, 2017 at 10:12:18AM -0700, Paul A Jungwirth wrote:
> I'm working on a problem where partitioning seems to be the right
> approach, but we would need a lot of partitions (say 10k or 100k).
> Everywhere I read that after ~100 child tables you experience
> problems. I have a few questions about that:

We use partitioning, previously one child per month (with history of 1-6
years); I tried using one child per day, which caused issues.

For us, planning time is pretty unimportant (~1sec would be acceptable 99% of
the time) but I recall seeing even more than that.  I changed to using daily
granularity for only our largest tables, which seems to be working fine for the
last ~9 months.  So the issue isn't just "number of children" but "total number
of tables".  I believe the problem may have been due to large
pg_statistic/pg_attribute and similar tables taking more than a few 100MBs, and
potentially no longer fitting in buffer cache.

> 3. Is it true that query planning time should scale linearly as I add
> more child tables?

I believe it's understood to be super-linear:
https://www.postgresql.org/message-id/26761.1483989025%40sss.pgh.pa.us
https://www.postgresql.org/message-id/4188.1298960419%40sss.pgh.pa.us

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PG96 pg_restore connecting to PG95 causes ERROR: unrecognized configuration parameter "idle_in_transaction_session_timeout"

2017-05-05 Thread Justin Pryzby
When doing a dump+restore upgrade, it's commonly recommended to use the later
version of pg_restore:

https://www.postgresql.org/docs/current/static/upgrading.html
"It is recommended that you use the pg_dump and pg_dumpall programs from the
newer version of PostgreSQL, to take advantage of enhancements that might have
been made in these programs. Current releases of the dump programs can read
data from any server version back to 7.0."

In the immediate case, I was loading data from PG95 dumps into a PG95 server (not
an upgrade), using PG96 pg_restore, and getting:

ERROR:  unrecognized configuration parameter 
"idle_in_transaction_session_timeout"

I can't see anybody has raised that issue before.  Should pg_restore check the
remote server version and avoid sending commands not expected to be understood?

(Yes, I know and use pg_upgrade, however I'm currently migrating a DB between
servers and this procedure will allow doing so with ~30min
downtime...pg_upgrade to 9.6 will be done afterwards, which is why PG96
pg_upgrade is installed).

Thanks,
Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to add columns to view with dependencies

2017-04-16 Thread Justin Pryzby
On Sun, Apr 16, 2017 at 08:02:54PM -0700, Guyren Howe wrote:
> Seems like a simple question, but I’ve never found a good answer to this and 
> similar issues.
> 
> I would think it was safe to let me add columns to a view on which other 
> views depend, but Postgres won’t let me.
> 
> I can imagine ways of sort-of dealing with this. I might maintain a SQL file 
> with views to create in a suitable order. Then I could drop all views, edit 
> the definition of one, then run the file, but this is awfully tedious.
> 
> What is best practice in this situation?

If you're not re-ordering existing columns, you can use CREATE OR REPLACE VIEW
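
For example (hypothetical names; the new column can only be appended after the
view's existing columns, which must keep their names and types):

CREATE OR REPLACE VIEW v AS
    SELECT a, b,        -- existing columns, in their original order
           c AS new_col -- appended column
    FROM t;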

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] SELECT x'00000000F'::int leading zeros causes "integer out of range"

2017-02-24 Thread Justin Pryzby
Is this expected behavior ?

This works:

ts=# SELECT x'0000000F'::int;
int4|15

.. but an additional leading zero causes it to fail:

ts=# SELECT x'00000000F'::int;
ERROR:  22003: integer out of range
LOCATION:  bittoint4, varbit.c:1575


|/* Check that the bit string is not too long */
|if (VARBITLEN(arg) > sizeof(result) * BITS_PER_BYTE)
|   ereport(ERROR,
|   (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
|errmsg("integer out of range")));

Justin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] INSERT ON CONFLICT of "wide" table: target lists can have at most 1664 entries

2016-12-04 Thread Justin Pryzby
Our application INSERTs data from external sources, and infrequently UPDATEs
the previously-inserted data (currently, it first SELECTs to determine whether
to UPDATE).

I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
running into a problem when the table has >830 columns (we have some tables
which are at the 1600 column limit, and have previously worked around that
limit using arrays or multiple tables).

I tried to work around the upsert problem by using pygresql inline=True
(instead of default PREPAREd statements) but both have the same issue.

I created a test script which demonstrates the problem (attached).

It seems to me that there's currently no way to "upsert" such a wide table?

I see:
./src/include/access/htup_details.h:#define MaxTupleAttributeNumber 1664
/* 8 * 208 */

./src/backend/parser/parse_node.c-  /*
./src/backend/parser/parse_node.c-   * Check that we did not produce too 
many resnos; at the very least we
./src/backend/parser/parse_node.c-   * cannot allow more than 2^16, since 
that would exceed the range of a
./src/backend/parser/parse_node.c:   * AttrNumber. It seems safest to use 
MaxTupleAttributeNumber.
./src/backend/parser/parse_node.c-   */
./src/backend/parser/parse_node.c:  if (pstate->p_next_resno - 1 > 
MaxTupleAttributeNumber)
./src/backend/parser/parse_node.c-  ereport(ERROR,
./src/backend/parser/parse_node.c-  
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
./src/backend/parser/parse_node.c:   errmsg("target 
lists can have at most %d entries",
./src/backend/parser/parse_node.c:  
MaxTupleAttributeNumber)));

Thanks in advance for any suggestions.

Justin
#! /bin/sh
set -e

n=831
t=wide_upsert

psql -c "DROP TABLE IF EXISTS $t"

cols='id int'
vals='0'
sets='id=0'
for a in `seq -w 0 $n`
do
c="c$a int"
cols="$cols, c$a int"

vals="$vals, \$1"
sets="$sets, c$a=\$1"
done

echo $cols
psql -c "CREATE TABLE $t ($cols, PRIMARY KEY (id))"

set -x
psql -c "PREPARE x AS INSERT INTO $t VALUES ($vals) ON CONFLICT (id) DO UPDATE 
SET $sets; EXECUTE x(0)"

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] INSERT ON CONFLICT of "wide" table: target lists can have at most 1664 entries

2016-12-04 Thread Justin Pryzby
Our application INSERTs data from external sources, and infrequently UPDATEs
the previously-inserted data (currently, it first SELECTs to determine whether
to UPDATE).

I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
running into a problem when the table has >830 columns (we have some tables
which are at the 1600 column limit, and have previously worked around that
limit using arrays or multiple tables).

I tried to work around the upsert problem by using pygresql inline=True
(instead of default PREPAREd statements) but both have the same issue.

I created a test script which demonstrates the problem (attached).

It seems to me that there's currently no way to "upsert" such a wide table?

I see:
./src/include/access/htup_details.h:#define MaxTupleAttributeNumber 1664
/* 8 * 208 */

./src/backend/parser/parse_node.c-  /*
./src/backend/parser/parse_node.c-   * Check that we did not produce too 
many resnos; at the very least we
./src/backend/parser/parse_node.c-   * cannot allow more than 2^16, since 
that would exceed the range of a
./src/backend/parser/parse_node.c:   * AttrNumber. It seems safest to use 
MaxTupleAttributeNumber.
./src/backend/parser/parse_node.c-   */
./src/backend/parser/parse_node.c:  if (pstate->p_next_resno - 1 > 
MaxTupleAttributeNumber)
./src/backend/parser/parse_node.c-  ereport(ERROR,
./src/backend/parser/parse_node.c-  
(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
./src/backend/parser/parse_node.c:   errmsg("target 
lists can have at most %d entries",
./src/backend/parser/parse_node.c:  
MaxTupleAttributeNumber)));

Thanks in advance for any suggestions.

Justin
#! /bin/sh
set -e

n=831
t=wide_upsert

psql -c "DROP TABLE IF EXISTS $t"

cols='id int'
vals='0'
sets='id=0'
for a in `seq -w 0 $n`
do
c="c$a int"
cols="$cols, c$a int"

vals="$vals, \$1"
sets="$sets, c$a=\$1"
done

echo $cols
psql -c "CREATE TABLE $t ($cols, PRIMARY KEY (id))"

set -x
psql -c "PREPARE x AS INSERT INTO $t VALUES ($vals) ON CONFLICT (id) DO UPDATE 
SET $sets; EXECUTE x(0)"

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Concerned to discover DOMAINs can be the same as built in types with no escaping

2016-06-23 Thread Justin Dearing
Hello,

We happen to have in our schema the following domain.

CREATE DOMAIN public.name varchar(50) NOT NULL;

This was done before me. We assumed this was used in many tables in our
app. Then I wrote a function with a return clause like the following:

RETURNS TABLE (
id INT,
name name,
other_names name ARRAY
)

This worked fine until CI tried to build this on a postgres 9.3 box (we are
mostly 9.1, but are fixing that). Then it failed.

So then I discovered that there is a built in type called pg_catalog.name
as well as my public.name. Followed by the discovery that you can't have
arrays of domains. This query showed two rows

SELECT typnamespace, typname
FROM pg_catalog.pg_type typ
WHERE typname = 'name';

Then there was wailing and gnashing of teeth, and I made everything
explicitly varchar, and everything was all good, except I have to fix unit
tests. Oh and nothing is actually using our domain, as demonstrated by this
query:

SELECT attrelid::regclass AS table_name,
  attname, atttypid::REGTYPE
  FROM pg_catalog.pg_attribute
  WHERE atttypid::REGTYPE IN ('name', 'public.name')
  ORDER BY atttypid DESC, attrelid::regclass


Based on this, and some consultations with friends who know more about
postgres than I, I'd like to propose that domains not be allowed to have the
same name as built-in types, or at the very least give a warning. The fact
that I have to quote keywords, but don't even need to quote built-in types,
is bothersome. Here are examples of queries and the behaviors I expect:


CREATE DOMAIN "INTO" char(5); -- Does work. Should work without a warning.
The error you get for doing it unquoted is sufficient IMHO
CREATE DOMAIN int CHAR(50); -- Does work. Id prefer it not to work.
Alternatively it could work but emit a warning.
CREATE DOMAIN public.int CHAR(50); -- Does work. I could see the argument
for it working, but would prefer it didn't work. Should still emit a
warning its overriding a base

Since I'm returning to postgres after close to a decade, I figured I'd ask
here for feedback before posting to the hackers list.

Regards,

Justin Dearing


Re: [GENERAL] Update from select

2013-05-13 Thread Justin Tocci
em stands for easy money

update tbl1
set col3=em.col3,col4=em.col4,col5=em.col5
from
(select col3, col4,col5 from tbl2 where col1=criteria) em
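
If each row of tbl1 should pick up its own values rather than the same values
for every row, join the subquery back on a key column; a sketch, assuming both
tables share an id column:

update tbl1
set col3=em.col3, col4=em.col4, col5=em.col5
from (select id, col3, col4, col5 from tbl2 where col1=criteria) em
where tbl1.id = em.id;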

Regards,

Justin Tocci
Programmer
www.workflowproducts.com
7813 Harwood Road
North Richland Hills, TX 76180
phone 817-503-9545
skype justintocci

On May 13, 2013, at 3:23 PM, Bret Stern bret_st...@machinemanagement.com 
wrote:

 PG 8.4
 
 Having trouble putting together an update query to update
 multiple columns in tbl1 from columns in tbl2. 
 
 update tbl1
 set col3,col4,col5
 from
 (select col3, col4,col5 from tbl2 where col1=criteria)
 
 
 
 Can someone add to the Postgres Docs (shown below) to help me with this.
 
 UPDATE employees SET sales_count = sales_count + 1 WHERE id =
  (SELECT sales_person FROM accounts WHERE name = 'Acme Corporation');
 
 
 Many thanks
 
 Bret Stern
 
 
 
 -- 
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Columns defined not matching pg_attibute

2012-12-12 Thread Justin Arnold
 | raic_tickler_fmt | raic_mds_locking | raic_st_auth | raic_allres_cc_rsm | raic_care_plan_prompt | raic_use_mult_ltc_dept | raic_dept | raic_dept_fac_id | raic_dept_fed_num | raic_dept_fac_name | raic_dept_provnum | raic_dept_natprovid | raic_fed_rate_table | raic_fed_calc_type | raic_state_rug_code | raic_state_calc_type

which is missing the last 2 columns defined by pg_attribute and \d.
Any ideas on what I can look at or do to correct this problem (if it can be
corrected)?

We are running postgres 8.3.4 on 64 bit Red Hat kernel
release 2.6.18-164.el5

Thanks for the help!

- Justin


[GENERAL] Updating pg_attribute to widen column

2012-11-29 Thread Justin Julicher
Psql Version: 8.4

Hi,



We need to widen a column on a table with millions of rows and the only way
to do this currently is to migrate the data from one column to another with
a script and trigger.



I know how to do this via an update to pg_attribute, which would incur the
table scan penalty, but I have a number of questions –


   - Does postgres pick up this change straight away?
   - Are there any caveats to my first question?


thanks,

Justin


Re: [GENERAL] Updating pg_attribute to widen column

2012-11-29 Thread Justin Julicher
Hi Greg,

First off, thanks for your reply.

I had actually just read your blog before writing this.

I should have been more clear in my first post.

If you use ALTER TABLE it will check every row in the table to make sure
the column doesn't exceed the constraint (in 8.4 - I know this has been
updated in 9.x)

I am trying to update a table with hundreds of millions of rows, and the
only way to do this efficiently (in an online database with 4 9's
availability) is with a pg_attribute update. Previously we have done this
via a script that runs for about a week on the database to migrate to
another column, but this also involves code changes and lots and lots of
testing.

So my question is - does postgres take an update to pg_attribute instantly
and in a reliable manner?

thanks,

Justin.




On Fri, Nov 30, 2012 at 4:15 PM, Greg Sabino Mullane g...@turnstep.comwrote:


 -BEGIN PGP SIGNED MESSAGE-
 Hash: RIPEMD160

 Justin Julicher asked:
  We need to widen a column on a table with millions of rows and the only
 way
  to do this currently is to migrate the data from one column to another
 with
  a script and trigger.

 Not the only way - the canonical way is to simply use ALTER TABLE.

  I know how to do this via an update to pg_attribute which would incur the
  table scan penalty but I have a number of questions

 No, there is no table scan penalty.

  - Does postgres pick up this change straight away?

 Not sure exactly what you mean. Certainly, new inserts will respect the
 change.

  Are there any caveats to my first question?

 Yes. A direct pg_attribute change should be your last resort. Do
 an ALTER TABLE if you can. If you must do it via pg_attribute,
 test it very well first, and make sure to look at pg_depend. See:

 http://blog.endpoint.com/2012/11/postgres-alter-column-problems-and.html

 - --
 Greg Sabino Mullane g...@endpoint.com  g...@turnstep.com
 End Point Corporation 610-983-9073
 PGP Key: 0x14964AC8 201211300113
 http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
 -BEGIN PGP SIGNATURE-

 iEYEAREDAAYFAlC4TtMACgkQvJuQZxSWSsiVDwCcCFYggG7mMf45nMIfoXHBGnMq
 TMkAn23VUHK0z/SshzrRACW0+dn5wqPv
 =CAQa
 -END PGP SIGNATURE-




 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] Hash index not being updated

2011-10-04 Thread Justin Naifeh
In Postgres 8.4, I have a table called java_types with two columns, 
package_name and class_name. There is another table called java_objects 
that defines a column called type whose value matches the concatenation 
of package_name and class_name. A typical join and result looks like this:


SELECT package_name, class_name, type FROM java_objects o INNER JOIN 
java_types t ON (t.package_name || '.' || t.class_name) = o.type;


package_name, class_name, type
java.lang , String  , java.lang.String

The above works, although it is slow for large data sets so I defined 
the following index:


CREATE INDEX java_type_hash
  ON java_types
  USING hash
  (((package_name::text || '.'::text) || class_name::text));

I confirmed that my new index is being used by inspecting the query plan 
and finding the following:


->  Index Scan using java_type_hash on java_types  (cost=0.00..1.22 rows=1 width=49)
      Index Cond: ((((java_types.package_name)::text || '.'::text) || (java_types.class_name)::text) = (java_objects.type)::text)


This gave me a speed boost and worked initially. A day after defining 
the index, however, I inserted a few rows into java_types and then many 
rows into java_objects. When I ran the aforementioned query I got 0 
results. A month later (without restarting postgres), I ran the same 
query and still got 0 results. I suspected the index had either been 
corrupted or not updated automatically. I confirmed this by running the 
following:


REINDEX INDEX java_type_hash;

And then when I re-ran the same query the correct results came back! I 
thought that maybe my system wasn't set to autovacuum but a query for my 
current settings confirmed that it was:


autovacuum = on
autovacuum_analyze_scale_factor = 0.1
autovacuum_analyze_threshold = 256
autovacuum_freeze_max_age = 2
autovacuum_max_workers = 3
autovacuum_naptime = 60
autovacuum_vacuum_cost_delay = 20
autovacuum_vacuum_cost_limit = -1
autovacuum_vacuum_scale_factor = 0.2
autovacuum_vacuum_threshold = 512
track_counts = on

Finally, this query gave me some extra information about when my system 
had last been vacuumed/analyzed:


SELECT relname, last_vacuum, last_autovacuum, last_analyze, 
last_autoanalyze FROM pg_stat_all_tables WHERE relname = 'java_types'


relname   , last_vacuum               , last_autovacuum , last_analyze              , last_autoanalyze
java_types, 2011-10-04 13:37:03.867-07,                 , 2011-10-04 13:21:22.342-07, 2011-08-30 10:15:13.018-07


The index was created on 8/30/2011, inserts were done on java_types and 
java_objects on 8/31/2011, and then 10/4/2011 is when I manually ran 
REINDEX (and later VACUUM ANALYZE).


So after all that, the question is: if I have autovacuum set to true,
then I shouldn't have to worry about calling VACUUM/ANALYZE/REINDEX
manually to update my java_type_hash index, right? Maybe my settings
are wrong or I'm making an incorrect assumption about Postgres behavior.


Any pointer in the right direction helps. Thanks in advance.

--


 Justin Naifeh Software Developer

Voice:
303-460-7111 x1 Toll Free:
877-444-3074 x1 Cell:
720-363-8874AIM, Yahoo
justinnaifeh aim:goim?screenname=justinnaifeh

www.terraframe.com/products/runwaysdk 
http://www.terraframe.com/products/runwaysdk

Makers of TerraFrame RUNWAY SDK^(TM), the next-generation
model-driven engineering (MDE) application toolkit for software developers



[GENERAL] pg_upgrade from 8.3.4 issue

2011-08-30 Thread Justin Arnold
Hey, I am trying to upgrade a CentOS 5.4 32bit test server running postgres
8.3.4 to postgres 9.1 RC1 and am running into an error I haven't seen
mentioned in the forums (at least dealing with the upgrade process). The
steps I ran through for the upgrade are...

Stop postgres
move /usr/local/pgsql to /usr/local/pgsql.8.3
move /usr/pgdata/data to /usr/pgdata/data.8.3
build 9.1 RC1 from source using ./configure --with-perl --with-openssl
--disable-integer-datetimes; make; make install
build and install pg_upgrade and pg_upgrade_support
swap to postgres user
run /usr/local/pgsql/bin/initdb --lc-collate=C --lc-ctype=C
--lc-messages=C --lc-monetary=C --lc-numeric=C --lc-time=C -E SQL-ASCII -D
/usr/pgdata/data to create the 9.1 cluster and set the settings to match
the old cluster
/usr/local/pgsql/bin/pg_upgrade --link --old-datadir /usr/pgdata/data.8.3/
--new-datadir /usr/pgdata/data/ --old-bindir /usr/local/pgsql.8.3/bin/
--new-bindir /usr/local/pgsql/bin/

What I get is...
Performing Consistency Checks
-
Checking current, bin, and data directories ok
Checking cluster versions   ok
Checking database user is a superuser   ok
Checking for prepared transactions  DB command
failed
SELECT * FROM pg_catalog.pg_prepared_xact()
ERROR:  a column definition list is required for functions returning
record

Failure, exiting

The binaries for the 8.3.4 install were built from source using
the --with-perl --with-openssl options as well. Any thoughts on what I might
be able to do to fix or workaround this? Thanks!

- Justin


Re: [GENERAL] pg_upgrade from 8.3.4 issue

2011-08-30 Thread Justin Arnold
Thanks Tom and Merlin, I removed that logic from check.c, rebuilt, and it
worked fine.

On Tue, Aug 30, 2011 at 2:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:

 Merlin Moncure mmonc...@gmail.com writes:
  It looks like some time after 8.3 was released that function was
  changed from returning 'record'.  This is making me wonder if the
  upgrade process was ever tested/verified on 8.3.

 Not lately, apparently :-(

  I absolutely do not
  advise doing this without taking a lot of precautions, but you might
  force your way past that step with:

 I think it'd be a lot safer to modify (or just remove) the test in
 pg_upgrade.  It looks like a one-liner:

 prep_status("Checking for prepared transactions");

 res = executeQueryOrDie(conn,
                         "SELECT * "
                         "FROM pg_catalog.pg_prepared_xact()");

 if (PQntuples(res) != 0)
     pg_log(PG_FATAL, "The %s cluster contains prepared transactions\n",
            CLUSTER_NAME(cluster));

 There's no reason at all for this code to not use the published API,
 which is the pg_prepared_xacts system view.

regards, tom lane



[GENERAL] PostgreSQL 8.4.8 RPM/SRPM for RHEL4

2011-07-21 Thread Justin Pasher

Hello,

I'm having trouble finding the RPM/SRPM for the latest version of 
PostgreSQL 8.4.8 on RHEL4. I only need the SRPM if the standard RPM 
doesn't use --enable-integer-datetimes (I recall reading at one point 
that the default may be changing to --enable-integer-datetimes, but I 
can't remember which version might have started that).


I've tried the following URLs, but the latest version I can find is 8.4.6.

http://yum.pgrpms.org/8.4/redhat/rhel-4-i386/
http://yum.pgrpms.org/srpms/8.4/redhat/rhel-4-i386/

If I switch over to the RHEL5 repository, it has 8.4.8, so perhaps the 
changes just never got pushed to the RHEL4 repository? Thanks.


--
Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Automatic database monitoring tool for PostgreSQL ... new project

2011-01-31 Thread Michael Justin

Hello,


Two weeks ago I started a new open source software project (FireAlarm, 
http://sourceforge.net/projects/firealarm/) which helps to detect 
performance problems on Firebird SQL servers - by continuous analysis of 
system tables which are very similar to the PostgreSQL Statistics Views.


While I am still new to PostgreSQL and have real-world experience only with
DB2, InterBase and Firebird, I am interested in writing a similar tool
for PostgreSQL.


Maybe there are articles or discussions which help to get an 
introduction to the practical usage of PostgreSQL Statistics Views, like 
typical queries used to identify potential problems? Just to give an 
example: on InterBase and Firebird, long running transactions can cause 
performance problems, and a typical system monitoring task is to find 
the clients or processes with the oldest transactions (older than x 
minutes).



Regards
--
Michael Justin
http://www.habarisoft.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Easy way to convert a database from WIN1252 to UTF8?

2010-07-01 Thread Justin Graf


On 7/1/2010 11:08 AM, Mike Christensen wrote:
 I'd like to convert a small database to UTF8 before it becomes too
 large.  I'm running on 8.3.x on Windows.  It doesn't seem that pgAdmin
 has any native way of doing this, what's the easiest way to go about
 doing this?  Thanks!

 Mike


Dump/backup the database, then create a new database using UTF-8,
then restore the dump into the new database.
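
Roughly (assuming the database is named mydb; adjust names to taste): pg_dump
the old database, create the new one with the target encoding, then restore
into it. The CREATE DATABASE step needs template0 so the new encoding is
accepted:

CREATE DATABASE mydb_utf8
  WITH ENCODING 'UTF8'
       TEMPLATE template0;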




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Where has ms2pg gone?

2010-06-10 Thread Justin Graf
On 6/10/2010 4:12 AM, Thom Brown wrote:
 Does anyone know if ms2pg is available from somewhere other than
 http://edoceo.com/creo/ms2pg ?  Attempts to download it result in not
 found.  Unless someone knows of an alternative attempt to automate
 migration of MSSQL to PostgreSQL?

 Thanks

 Thom



MSSQL has many tools that can automate moving the data and creating the
tables using an ODBC connection.
http://en.wikipedia.org/wiki/SQL_Server_Integration_Services
http://en.wikipedia.org/wiki/Data_Transformation_Services

There are also export functions, along with table and column mapping tools,
built into SQL Server Management Studio that make moving data simple and
straightforward.

Moving the stored procedures, triggers, views, and indexes is a bit of a
problem; Transact-SQL and PL/pgSQL are far too different.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cognitive dissonance

2010-06-08 Thread Justin Graf
On 6/8/2010 9:23 AM, Peter Hunsberger wrote:
 On Tue, Jun 8, 2010 at 4:04 AM, John Gagejsmg...@numericable.fr  wrote:

 Unix is a text-based operating system with unbelievably helpful text
 manipulation tools.

 Postgres is a creature of Unix which happens to have unbelievable text
 searching and manipulation tools.

 Yet, the only one file edition of the Postgres documentation is in...pdf
 format.  Huh?

  
 I suppose the next thing you'll be suggesting is that, because
 Postgres is a database, the documentation should be stored as some
 form of searchable table within the database itself?

 runs and hides/



It's also available in CHM (Windows help file) format, which I find a lot
more useful:
http://www.postgresql.org/docs/manuals/
You could print the CHM to a text file.

It's also not hard to dump a PDF document into a text file.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cognitive dissonance

2010-06-08 Thread Justin Graf
***SNIP***
 2) Its also available in chm  windows help file format.  Which i find 
 allot
 more useful
 http://www.postgresql.org/docs/manuals/
 you could print chm to a text file.

 --I'll have to boot over to XP, ugh.  Will do.
There are Linux CHM readers:

http://www.linux.com/news/software/applications/8209-chm-viewers-for-linux

and one for firefox
https://addons.mozilla.org/en-US/firefox/addon/3235/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to debug efficiently

2010-06-03 Thread Justin Graf
On 6/3/2010 5:43 AM, Jamie Lawrence-Jenner wrote:

 Hi All

 In SQL Server I could copy sql code out of an application and paste it 
 into SSMS, declare  assign vars that exist in the sql and run.. yay 
 great debugging scenario.

 e.g. (please note I am rusty and syntax may be incorrect)

 declare @x as varchar(10)
 set @x = 'abc'
 select * from sometable where somefield = @x

 I want to do something simular with postgres in pgadmin3 (or another 
 postgres tool, anyy reccomendations?)

 I realise you can create pgscript, but it doesn't appear to be very 
 good, for example, if I do the equlivent of above, it doesn't put the 
 single quotes around the value in @x, nor does it let me by doubling 
 them up and you don't get a table out after - only text...

 Currently I have a peice of sql someone has written that has 3 unique 
 varibles in it which are used around 6 times each...

 So the question is how do other people debug sql this sql EFFICIENTLY, 
 preferably in a simular fashion to my sql server days.



By pgScript I take you to mean pl/pgsql, which, unlike Transact-SQL, is
actually useful.

to debug in PG with pgadmin we have

http://pgfoundry.org/projects/edb-debugger/

that makes debugging pl/pgsql very easy and it works with pgadmin and

http://www.sqlmaestro.com/products/postgresql/maestro/tour/pgsql_debugger/

Your little example would look like this in pl/pgsql:
-
CREATE OR REPLACE FUNCTION MyTest()
RETURNS integer AS
$BODYOFFUNCTION$

DECLARE
    x text := 'abc';

BEGIN

    PERFORM * FROM sometable WHERE somefield = x;

    RETURN 1;

END;
$BODYOFFUNCTION$
    LANGUAGE plpgsql VOLATILE
    COST 100;
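
Once created, a quick sanity check (assuming sometable exists) is just:

SELECT MyTest();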


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] child/parent creation

2010-05-29 Thread Justin Graf
On 5/29/2010 1:05 PM, Dennis Gearon wrote:
 Is it possible to create a complex schema object in one transaction,
I'm not sure I understand what you mean by schema object.

 using prepared statements to protect(somewaht) against SQL injection?


In short no

Prepared statements do not, on their own, protect from SQL injection. Prepared
statements are used to skip the planning stage of the query.

There are really only two ways to block SQL injection: parameterized
queries, or checking/removing command characters from the query sent to the DB.
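
For example, with a parameterized query the value is sent separately from the
SQL text, so it is never parsed as SQL. A sketch (the users table and username
column are made-up names):

PREPARE get_user(text) AS
    SELECT * FROM users WHERE username = $1;
EXECUTE get_user('alice');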


 Example: A 'family tree object' (having obvious relationships) consisting of:

 Table grandparent
 Table parent
 table childA
 table childB

 If I have all the information for each 'sub-object' in the 'family tree 
 object', but of course, the primary, integer, sequence keys.

 So, using a script language, the procedure I'm doing now is creating one 
 node, getting the id with another query (because of a (reported and actual) 
 bug in the Doctrine ORM), and then creating the next level down.

 My application has nothing to do with family trees, actually, just an example.


 each of the (relevant) primary keys is a BIGSERIL, sequence backed, BIGINT.



I do not understand what you are trying to do here, please clarify







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Installing version 8.4

2010-05-29 Thread Justin Graf
On 5/29/2010 6:26 PM, Bob Pawley wrote:
 Found it in XP it doesn't seem to exist in Windows 7. I can't even 
 find Doc and Settings in 7.

 It's a large file. I'm not sure what is needed but here is the latter 
 part of the file.

 Bob

***Snip***

Windows 7 and Vista move lots of things around.

Documents and Settings is in the root directory but is normally hidden and
secured; turn on show hidden files/folders and show system
files/folders. From the Start menu, type the document name out and the
OS will find it for you.

Windows Vista was/is the first real attempt by MS to set up an OS in a
secure manner. It's kind of like having to run sudo in the Linux world.
Still, a lot of things need to be improved, namely the click-through
interface, which should be password-protected.







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Please help me write a query

2010-05-27 Thread Justin Graf
On 5/27/2010 9:04 AM, Nikolas Everett wrote:
 Say I have a table that stores state transitions over time like so:
 id, transitionable_id, state1, state2, timestamp

 I'm trying to write a query that coalesces changes in state2 away to
 produce just a list of transitions of state1.  I guess it would look
 something like

 SELECT state1, FIRST(timestamp)
 FROM table

 but I have no idea how to aggregate just the repeated state1 rows.

If I understand what you're after:

select transitionable_id, state1, min(timestamp) from table
group by transitionable_id, state1

















-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Please help me write a query

2010-05-27 Thread Justin Graf
On 5/27/2010 9:45 AM, Nikolas Everett wrote:
 Sorry.  Here is the setup:
 CREATE TABLE test (id BIGSERIAL PRIMARY KEY, state1 INT NOT NULL, 
 state2 INT NOT NULL, timestamp TIMESTAMP);
 INSERT INTO test (state1, state2, timestamp) VALUES (1, 1, now() - 
 interval '12 hours');
 INSERT INTO test (state1, state2, timestamp) VALUES (1, 2, now() - 
 interval '11 hours');
 INSERT INTO test (state1, state2, timestamp) VALUES (1, 1, now() - 
 interval '10 hours');
 INSERT INTO test (state1, state2, timestamp) VALUES (2, 1, now() - 
 interval '9 hours');
 INSERT INTO test (state1, state2, timestamp) VALUES (1, 1, now() - 
 interval '8 hours');


 I want to write a query that spits out:
  state1 | timestamp
 +
   1 | now() - interval '12 hours'
   2 | now() - interval '9 hours'
   1 | now() - interval '8 hours'


I have a question: what makes these values different other than the
timestamp???

1, 1, now() - interval '12 hours'
1, 1, now() - interval '10 hours'

The reason I ask is because you show 1, 1, now() - interval '8 hours'
in the desired output. What logic keeps the 8-hour and 12-hour rows
but not the 10-hour interval???

It's kind of hard to understand why the 10-hour interval is being skipped.
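
If the intent is to collapse consecutive runs of the same state1 (so the
10-hour row folds into the run that started 12 hours ago), one way on 8.4+ is
to compare each row with the previous one using a window function; a sketch
against the test table above:

SELECT state1, "timestamp"
FROM (SELECT state1, "timestamp",
             lag(state1) OVER (ORDER BY "timestamp") AS prev_state1
      FROM test) t
WHERE prev_state1 IS NULL OR state1 <> prev_state1
ORDER BY "timestamp";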


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hiding data in postgresql

2010-05-25 Thread Justin Graf
On 5/24/2010 3:18 PM, Hector Beyers wrote:
 Yes, I mean hide. I am approaching the problem out of the perspective 
 of a malicious user / hacker.

 **snip***

First, hiding data is not a solution to secure or block access to
information. This only slows people down, it does not stop them; never
underestimate users with access to the data.

It would be helpful to explain the type of data that needs to be
hidden/secured.

An example of failed attempts to hide data is to look at the numerous
mistakes in securing credit card data at many companies.


In almost every case that I have read, the programmers just tried to hide
the data or limit access instead of using a public key / private key
encryption methodology. I know of several big-name apps that still
store credit card data where the end users can reverse the encryption,
meaning that if the key becomes unsecured, any data that is encrypted
is visible.

I have seen cases where the data is only encrypted inside the database, so the
information is transmitted in the clear to the client as the database
decrypts the data on the fly. What is the point??


Trying to hide information is a waste of time and energy; look into
encryption.
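
For example, with the contrib pgcrypto module installed, symmetric encryption
is a couple of function calls. A sketch (the customer table, its bytea
card_number column, and the key are all made up):

-- encrypt on the way in
INSERT INTO customer (name, card_number)
VALUES ('Alice', pgp_sym_encrypt('4111111111111111', 'my-secret-key'));

-- decrypt only where it is actually needed
SELECT name, pgp_sym_decrypt(card_number, 'my-secret-key') AS card_number
FROM customer;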









-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Hiding data in postgresql

2010-05-25 Thread Justin Graf
On 5/25/2010 2:58 AM, Hector Beyers wrote:

 No, I have not considered encrypting or decrypting data. The reason 
 for this is that I am trying to /secure a database/ by thinking like a 
 /malicious user / criminal/. I want to hide (for example) fraudulent 
 data on a database where it is not easily seen by others and then 
 build a tool to detect this hidden data.

 On your questions:

 *) What data is to remain secret?
 *) Who is allowed to see the secret data?
 *) When do they see it?
 *) What sacrifices are you willing to make to keep the data secret?
 *) Where are you going to store the key?

 the answers:

 * fraudulent data / or data that needs to be hidden.
 * only the malicious user - and hopefully later a detection
   mechanism that I aim to build.
 * I don't really have a preference on when they can see the data,
   but maybe when you export a dump.
 * The main purpose of hiding the data is that the normal users of
   the database will not easily find the hidden data. If this
   criteria is met, then any other sacrifices can be made.
 * Still need to figure that one out.


 Any good brainstorming ideas will help!

Missed this bit prior to my first response.

I think some of the assumptions here are flawed.

If a hacker actually got into a database, why would they do this??? What
is being accomplished??? Why would anyone want to do this???

Again, it would make a lot more sense if a hacker stored data in plain
sight: create tables that look like real tables following the same
naming scheme, or use already existing tables like logs, modifying the
tables to add columns to store data. Then create/update records,
encrypting the contents; this would protect the contents from ever being
read by anyone except the creator.

Think this line through: how long would a hacker go unnoticed if they
used the already existing tables, adding in columns or taking over stale
records like old customers that are no longer active, and then used the text
fields to store data? The hacker could create a normal user account to
access those records, throwing up no red flags. How many people review
table structures or updates to already existing records?

The current crop of hackers are not hex-editor high-school wannabes.
Hackers want to go unnoticed for as long as they can, so that means doing
nothing out of the ordinary that throws up red flags.

Just read up on the investigations of stolen credit cards, or fake ATMs
that have been installed at malls. The hackers/thieves figured out how to
go unnoticed for long periods of time by appearing normal.

The second assumption is that the hacker actually got admin/root level access
to be able to do these kinds of things. That means security up front was
lax, at which point there are far bigger problems than hidden data.

A far better way to secure is not trying to think about what they can do once
they get access, but stopping them from getting in to start with. If anyone
gets this high a level of access, protecting against or figuring out whether
they have hidden data is immaterial compared to the problems someone has.









-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres stats collector showing high disk I/O

2010-05-20 Thread Justin Pasher

- Original Message -

From: Justin Pasher just...@newmediagateway.com
Date: Fri, 23 Apr 2010 17:46:16 -0500
Subject: Re: [GENERAL] Postgres stats collector showing high disk I/O
To: Alvaro Herrera alvhe...@commandprompt.com
CC: pgsql-general@postgresql.org

- Original Message -
From: Alvaro Herrera alvhe...@commandprompt.com
Date: Fri, 23 Apr 2010 18:28:03 -0400
Subject: Re: [GENERAL] Postgres stats collector showing high disk I/O
To: Justin Pasher just...@newmediagateway.com
CC: dep...@depesz.com, pgsql-general@postgresql.org

Justin Pasher wrote:

 

Agh... I used pg_stats_reset (with an s) when searching for it. I
ran the function and it returned true, but the stats file only
shrunk by ~100k (still over 18MB total). Is there something else I
need to do? Does this mean the file is mostly bloated with bogus
data that it can't reset? I'm guessing I should just try to delete
the file outright?



Err, yeah, pg_stat_reset only resets the stats for the current database.
You need to reset for all databases, or alternatively, shut down the
server, remove the file, and restart
  


Ahh, yes. I probably should have realized that. I ran the function on 
all of the databases (138 total), and now the stats file is down to 
~400k. The disk I/O is also practically nothing now.


So now as a continuation of my original message, what would cause the 
stats file to get so big for what seems like (IMO) a small number of 
databases? I have a Postgres 7.4 cluster that has about 250 databases, 
but its stats file is only 3.5MB. Do I need to look into avoiding a 
bunch of CREATE/DROP statements (a suggestion in the archives)? I 
don't know the actual usage patterns of the code base (I'll have to 
get with the developers), but I think at worst, they might create temp 
tables from time to time. As a matter of fact, I just checked the 
stats file again (about 10 minutes later) and it's doubled to 800K. Is 
Postgres just trying to store too much information in the statistics 
file? Ultimately, the main statistics I care about are current 
connections and queries being run.


A previous post in the archives from Tom said that vacuum (even 
autovacuum) should clean up potential stat file bloat. Do I need to 
tweak my autovacuum settings? Everything is currently set to the 
defaults because I've never had any performance issues that warranted 
tweaking the settings.


Can anyone provide any more direction on this one?

Whenever I clear out the stats for all of the databases, the file 
shrinks down to 1MB. However, it only takes about a day for it to get 
back up to ~18MB and then the stats collector process starts the heavy 
disk writing again. I do know there are some tables in the database that 
are filled and emptied quite a bit (they are used as temporary queue 
tables). The code will VACUUM FULL ANALYZE after the table is emptied to 
get the physical size back down and update the (empty) stats. A plain 
ANALYZE is also run right after the table is filled but before it starts 
processing, so the planner will have good stats on the contents of the 
table. Would this lead to pg_stat file bloat like I'm seeing? Would a 
CLUSTER then ANALYZE instead of a VACUUM FULL ANALYZE make any 
difference? The VACUUM FULL code was setup quite a while back before the 
coders knew about CLUSTER.
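
The replacement I have in mind would be roughly the following, in 8.1 syntax
(queue_table and queue_table_pkey stand in for the real names):

CLUSTER queue_table_pkey ON queue_table;
ANALYZE queue_table;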


Thanks.

--
Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why Performance of SQL Query is *much* Slower in GUI PgAdmin

2010-05-17 Thread Justin Graf
On 5/17/2010 12:52 AM, Yan Cheng CHEOK wrote:
 The pgadmin result seems different with my machine. My friend and I are using 
 Windows machine. Are you using Linux machine?

 Thanks and Regards
 Yan Cheng CHEOK


 **snip**

I use both windows and Linux using pgadmin, and on occasion use psql


I take my work home, so I back up, then drop, then recreate the database on
my computers at home. So I create the database at least 2 or 3 times a
week. I see no meaningful difference between machines or the client used to
create a DB using either Windows or Linux. I have never timed it (why
would anyone time creating a DB???), but I would say it's less than a second.

So something weird is going on with your friend's computer.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reliability of Windows versions 8.3 or 8.4

2010-05-12 Thread Justin Graf
On 5/12/2010 11:45 AM, Richard Broersma wrote:
 Can anyone advise me if either PostgreSQL 8.3 or 8.4 is ready for
 special case of production use?

 I'm considering using the windows version PostgreSQL in the following
 conditions:
 at least 10 years of up time (with periodic power failures= 1 a year)
 single table with less-than 50 record inserts a day
 reporting at most once a month by a single connection

I question any database on the market that will guarantee such a thing.
Power in industrial plants is some of the dirtiest: brownouts, spikes,
surges, harmonics, and the list keeps going. That is not the best
environment for computers, even industrial ones.

Given how few records are being inserted a day, a full database like
PostgreSQL is overkill.

I would use a plain text file, something like XML. Given this is for
industrial use, 10 years is a good number for warranty and support, but
this stuff will hang around years later; think 20 to 30 years. How
many people understand flat ISAM tables from the 1980's today, let alone
have tools to read/modify the records?

I suggest storing the records in a manner that is human readable.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Reliability of Windows versions 8.3 or 8.4

2010-05-12 Thread Justin Graf
On 5/12/2010 12:33 PM, Richard Broersma wrote:
 On Wed, May 12, 2010 at 9:18 AM, Justin Grafjus...@magwerks.com  wrote:


 I would do a plain text file something like XML.  Given this is for
 industrial use 10 years is a good number for warranty and support, but
 this stuff will hang around years later, think 20 to 30 years.  How
 many people understand FLAT ISAM tables from the 1980's today, let alone
 tools to read/modify the records.

 I suggest storing the records in manner that is human readable
  
 These are all good points.  There is one concern that I do have, this
 information will be used to audit the billing system.  Is there any
 concern for loss of data if a file rewrite is interrupted by a power
 failure?   When using postgres there are some protections provided to
 reduce this kind of data loss.

 However, I do agree that tabular/xml data would stand the test of time.

Text files are a little hard to corrupt to the point nobody can read them.

Obviously, if the system is in the middle of writing the XML node and
loses power, the XML layout is toast.
To limit data loss, create a new XML file each day or month or whatever
time period makes sense. If you end up with a broken XML file, kiss it
off or note it so a person can manually fix the entries and get most of
the data back.


50 records * 365 * 10 = 182,500 records. Not a lot of data for 10 years
of collecting, but very big for a single XML file.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Latest source RPMs for 8.1.20

2010-05-04 Thread Justin Pasher

- Original Message -

From: Devrim GÜNDÜZ dev...@gunduz.org
Date: Tue, 04 May 2010 07:18:47 +0300
Subject: Re: [GENERAL] Latest source RPMs for 8.1.20
To: Justin Pasher just...@newmediagateway.com
CC: pgsql-general@postgresql.org

On Mon, 2010-05-03 at 10:49 -0500, Justin Pasher wrote:
  

I'm looking for the latest source RPMs for Postgres 8.1.20 on RHEL



http://yum.pgrpms.org/srpms/8.1/redhat/rhel-5Server-x86_64/repoview/postgresql.html

Regards,


I forgot to mention I'm using RHEL4 on this box, but that link pointed 
me in the right direction. Thanks!


--
Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Latest source RPMs for 8.1.20

2010-05-03 Thread Justin Pasher
I'm looking for the latest source RPMs for Postgres 8.1.20 on RHEL. I 
can see 8.1.19 here:


http://yum.pgsqlrpms.org/srpms/8.1/redhat/rhel-4-i386/repoview/postgresql.html

Would it be safe to say that I can download the 8.1.19 source RPM, 
replace the postgresql-8.1.19.tar.bz2 tarball with the 
postgresql-8.1.20.tar.bz2 tarball, update the versions in the spec file, 
then build the RPM? I noticed there are other patch files installed by 
the source RPM, so I didn't know if I would be missing any other 
potential patch files.


Thanks.

--
Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Latest source RPMs for 8.1.20

2010-05-03 Thread Justin Pasher

- Original Message -

From: Vincenzo Romano vincenzo.rom...@notorand.it
Date: Mon, 3 May 2010 17:59:10 +0200
Subject: Re: Latest source RPMs for 8.1.20
To: Justin Pasher just...@newmediagateway.com
CC: pgsql-general@postgresql.org

2010/5/3 Justin Pasher just...@newmediagateway.com:
  

I'm looking for the latest source RPMs for Postgres 8.1.20 on RHEL. I can
see 8.1.19 here:


Just curiosity: why are you using  8.1?
  


Older app that has not been tested for 8.3+ compatibility (e.g. stricter 
type casting). It's not a super high priority for the developers. We 
even have some legacy sites running 7.4. I'd love to get everything 
moved to a more recent version...


--
Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Storing many big files in database- should I do it?

2010-04-29 Thread Justin Graf
On 4/29/2010 12:07 PM, David Wall wrote:


 Big downside for the DB is that all large objects appear to be stored 
 together in pg_catalog.pg_largeobject, which seems axiomatically 
 troubling that you know you have lots of big data, so you then store 
 them together, and then worry about running out of 'loids'.
Huh??? Isn't that the point of using the bytea or text datatypes?

I could have sworn bytea does not use the large object interface, it uses
TOAST; or have I gone insane?

Many people encode the binary data in Base64 and store it as the text data
type, and then never have to deal with escaping the bytea data type, which I
have found can be a pain.
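
For reference, the Base64 round trip is just the built-in encode()/decode()
functions; a quick sketch:

SELECT encode('hello'::bytea, 'base64');   -- bytea -> text, gives 'aGVsbG8='
SELECT decode('aGVsbG8=', 'base64');       -- text -> bytea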



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Storing many big files in database- should I do it?

2010-04-29 Thread Justin Graf
On 4/29/2010 1:51 PM, David Wall wrote:

 Put it another way: bytea values are not stored in the pg_largeobject
 catalog.

 I missed the part that BYTEA was being used since it's generally not a 
 good way for starting large binary data because you are right that 
 BYTEA requires escaping across the wire (client to backend) both 
 directions, which for true binary data (like compressed/encrypted 
 data, images or other non-text files) makes for a lot of expansion in 
 size and related memory.

 BYTEA and TEXT both can store up to 1GB of data (max field length), 
 which means even less file size supported if you use TEXT with 
 base64 coding.  LO supports 2GB of data.  In JDBC, typically BYTEA is 
 used with byte[] or binary stream while LOs with BLOB.  I think LOs 
 allow for streaming with the backend, too, but not sure about that, 
 whereas I'm pretty sure BYTEA/TEXT move all the data together you it 
 will be in memory all or nothing.

 Of course, to support larger file storage than 1GB or 2GB, you'll have 
 to create your own toast like capability to split them into multiple 
 rows.

 David

Outside of videos/media streams, what other kind of data is going to be
1 gig in size? That's a lot of data, even still today.

We all talk about 1 gig and 2 gig limits on this, but really, who has
bumped into that on a regular basis??? Every time I hear about that not
being big enough, the person is trying to shoehorn media files into
the database, which is insane.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Storing many big files in database- should I do it?

2010-04-29 Thread Justin Graf
On 4/29/2010 3:18 PM, Tom Lane wrote:
 Alvaro Herreraalvhe...@commandprompt.com  writes:

 However, that toast limit is per-table, whereas the pg_largeobject limit
 is per-database.  So for example if you have a partitioned table then
 the toast limit only applies per partition.  With large objects you'd
 fall over at 4G objects (probably quite a bit less in practice) no
 matter what.

   regards, tom lane

Has there been any thought of doing something similar to MS FILESTREAM?
http://msdn.microsoft.com/en-us/library/cc949109.aspx

It seems to overcome all the drawbacks of storing files in the DB.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Postgres stats collector showing high disk I/O

2010-04-23 Thread Justin Pasher

Hello,

Redhat EL4 update 8, 2.6.9-89.0.23.ELsmp
Quad Proc, Dual Core Xeon, 16GB RAM
Postgres 8.1.18

I'm having some trouble pinning down exactly what is causing our 
Postgres cluster to run slowly. After some initial investigation, I 
noticed that the disk write activity is consistently high, and (if I'm 
reading the output of dstat correctly) the majority of it is being 
caused by the stats collector process. Here's a snippet of what I 
typically see in dstat.


./dstat -cd --top-bio
total-cpu-usage -dsk/total- most-expensive-
usr sys idl wai hiq siq| read  writ|  block i/o process
 2   2  87  10   0   0|3964k   19M|postgres: stats coll   035M
 2   1  85  12   0   0|4612k   20M|postgres: stats coll   018M
 2   2  85  11   0   0|2360k   36M|postgres: stats coll   024M
 1   2  83  14   0   0|1564k   36M|postgres: stats coll   029M
 1   1  84  13   0   0|5556k   21M|postgres: stats coll   020M
 2   2  82  14   0   0|  10M   19M|postgres: stats coll   033M
 2   1  87  10   0   0|9864k   35M|postgres: stats coll   024M
 2   2  87  10   0   0|  10M   19M|postgres: stats coll   029M
 2   1  86  11   0   0|  10M   19M|postgres: stats coll   024M
 3   2  84  12   0   0|8096k   19M|postgres: stats coll   029M
 2   1  86  10   0   0|5432k   33M|postgres: stats coll   032M
 2   2  86  10   0   0|9200k   19M|postgres: stats coll   021M
 2   1  82  14   0   0|3344k   34M|postgres: stats coll   021M
 2   2  86  11   0   0|8600k   19M|postgres: stats coll   031M
 2   1  82  15   0   0|5392k   19M|postgres: stats coll   029M

If there are no queries going on, then the disk usage is virtually 
nothing, but it only takes a query or two to make it shoot up to this 
level. I have the following stats related options enabled in postgresql.conf


stats_command_string = on
stats_row_level = on

When I disabled stats_row_level (and even stats_command_string, I 
believe) and restarted, I was still seeing some high disk I/O. If I 
disabled stats_start_collector, I'm pretty sure the I/O dropped 
completely off (I can't verify right now since I'd need a maintenance 
window). However, this makes Postgres unable to keep track of database 
connections/queries in pg_stat_activity, which is very important for us. 
The odd thing is that when I was playing around with these options, I 
restarted multiple times to apply them, eventually ending back where I 
started, but after the final restart, the disk I/O actually dropped to 
reasonable levels. This lasted for about a day, then went back up to 
its current levels (and once again showing the stats collector at the top).


I saw some previous posts with similar conditions (but different 
Postgres version, high CPU load, not disk I/O, etc). 
http://archives.postgresql.org/pgsql-performance/2010-04/msg00163.php

http://archives.postgresql.org/pgsql-general/2010-01/msg01076.php
http://archives.postgresql.org/pgsql-performance/2009-06/msg00088.php

I don't think there are a lot of CREATE/DROP table statements, but I do 
know there are some larger update queries that run inside transactions 
(large in the sense of data they have to read, not the number of 
queries). Autovacuum is enabled on the server, and I haven't tweaked any 
settings from the defaults. My $PGDATA/global/pgstat.stat file is about 
18MB, if that helps. Does it really rewrite this entire file every 
500ms? Alvaro suggested resetting the stats, but I'm having trouble 
figuring out how to do that. Seems like pg_stat_reset() is post- 8.1 ...?


I have a strong suspicion it's ultimately due to some usage pattern of 
the database, but I'm not sure what it could be. What type of operations 
would typically cause the stats collector to be doing this much writing 
to the filesystem? Is there any way to see what it's writing? Are 
there other config options that can/should be tweaked to help this?


Thanks.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres stats collector showing high disk I/O

2010-04-23 Thread Justin Pasher

- Original Message -
From: hubert depesz lubaczewski dep...@depesz.com
Date: Fri, 23 Apr 2010 23:40:35 +0200
Subject: Re: [GENERAL] Postgres stats collector showing high disk I/O
To: Justin Pasher just...@newmediagateway.com
CC: pgsql-general@postgresql.org

On Fri, Apr 23, 2010 at 03:27:55PM -0500, Justin Pasher wrote:
  

haven't tweaked any settings from the defaults. My
$PGDATA/global/pgstat.stat file is about 18MB, if that helps. Does
it really rewrite this entire file every 500ms? Alvaro suggested
resetting the stats, but I'm having trouble figuring out how to do
that. Seems like pg_stat_reset() is post- 8.1 ...?



I don't have 8.1 handy, but according to this:
http://www.postgresql.org/docs/8.1/interactive/monitoring-stats.html
pg_stat_reset() should be available in 8.1.

Best regards,

depesz
  


Agh... I used pg_stats_reset (with an s) when searching for it. I ran 
the function and it returned true, but the stats file only shrunk by 
~100k (still over 18MB total). Is there something else I need to do? 
Does this mean the file is mostly bloated with bogus data that it can't 
reset? I'm guessing I should just try to delete the file outright?


--
Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres stats collector showing high disk I/O

2010-04-23 Thread Justin Pasher

 I'm guessing I should just try to delete the file outright?


Err... I meant should NOT delete.

--
Justin Pasher


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres stats collector showing high disk I/O

2010-04-23 Thread Justin Pasher

- Original Message -
From: Alvaro Herrera alvhe...@commandprompt.com
Date: Fri, 23 Apr 2010 18:28:03 -0400
Subject: Re: [GENERAL] Postgres stats collector showing high disk I/O
To: Justin Pasher just...@newmediagateway.com
CC: dep...@depesz.com, pgsql-general@postgresql.org

Justin Pasher wrote:

  

Agh... I used pg_stats_reset (with an s) when searching for it. I
ran the function and it returned true, but the stats file only
shrunk by ~100k (still over 18MB total). Is there something else I
need to do? Does this mean the file is mostly bloated with bogus
data that it can't reset? I'm guessing I should just try to delete
the file outright?



Err, yeah, pg_stat_reset only resets the stats for the current database.
You need to reset for all databases, or alternatively, shut down the
server, remove the file, and restart
  


Ahh, yes. I probably should have realized that. I ran the function on 
all of the databases (138 total), and now the stats file is down to 
~400k. The disk I/O is also practically nothing now.
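
As an aside, a minimal sketch of one way to script the per-database reset 
(pg_stat_reset() only touches the database you are connected to) by generating 
psql commands from SQL:

SELECT 'psql -d ' || datname || ' -c "SELECT pg_stat_reset();"'
  FROM pg_database
 WHERE datallowconn;

Run the generated lines from a shell as a user allowed to connect to each database.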


So now as a continuation of my original message, what would cause the 
stats file to get so big for what seems like (IMO) a small number of 
databases? I have a Postgres 7.4 cluster that has about 250 databases, 
but its stats file is only 3.5MB. Do I need to look into avoiding a 
bunch of CREATE/DROP statements (a suggestion in the archives)? I don't 
know the actual usage patterns of the code base (I'll have to get with 
the developers), but I think at worst, they might create temp tables 
from time to time. As a matter of fact, I just checked the stats file 
again (about 10 minutes later) and it's doubled to 800K. Is Postgres 
just trying to store too much information in the statistics file? 
Ultimately, the main statistics I care about are current connections and 
queries being run.


A previous post in the archives from Tom said that vacuum (even 
autovacuum) should clean up potential stat file bloat. Do I need to 
tweak my autovacuum settings? Everything is currently set to the 
defaults because I've never had any performance issues that warranted 
tweaking the settings.


--
Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query is stuck

2010-04-14 Thread Justin Graf
On 4/14/2010 9:20 AM, Satish Burnwal (sburnwal) wrote:
   Index Scan using repcopy_index on repcopy a  (cost=0.00..87824607.17
 *rows=28*  width=142) (actual time=11773.105..689111.440*rows=1*  loops=1)
 Index Cond: ((dm_user)::text = 'u3'::text)
 Filter: ((report_status = 0) AND (report_time = (subplan)))
 SubPlan
   -   Aggregate  (cost=3531.30..3531.31 rows=1 width=8) (actual
 time=58.447..58.448 rows=1 loops=11788)
 -   Index Scan using repcopy_index on repcopy b
 (cost=0.00..3526.30*rows=2000*  width=8) (actual time=0.017..36.779
 *rows=25842*  loops=11788)
   Index Cond: ((($0)::text = (dm_user)::text) AND
 (($1)::text = (dm_ip)::text))
   Filter: ((ss_key)::text <> ''::text)
   Total runtime: 689111.511 ms
 (9 rows)


The estimated cost and actual rows are way off.  Have you run ANALYZE on the 
table?

You may want to change the statistics collected for this table:

http://www.postgresql.org/docs/8.1/static/planner-stats.html

  ALTER [ COLUMN ] column SET STATISTICS integer

http://www.postgresql.org/docs/8.1/static/sql-altertable.html
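
For example (a hedged sketch using the table and column names from this thread; 
the statistics target of 500 is only illustrative):

ALTER TABLE repcopy ALTER COLUMN dm_user SET STATISTICS 500;
ALTER TABLE repcopy ALTER COLUMN dm_ip SET STATISTICS 500;
ANALYZE repcopy;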






Re: [GENERAL] Query is stuck

2010-04-14 Thread Justin Graf
On 4/14/2010 9:42 AM, Bill Moran wrote:

 Man, it's hard to read your emails.  I've reformatted, I suggest you
 improve the formatting on future emails, as I was about to say to
 hell with this question because it was just too difficult to read,
 and I expect there are others on the list who did just that.

I did for the most part.
 select report_id, dm_ip, dm_mac, dm_user,
 dm_os, report_time, sys_name,
 sys_user, sys_user_domain, ss_key,
 login_time, role_id, new_vlan_id
from repcopy as a
where report_time = (
  select max(report_time) from repcopy as b
where a.dm_user=b.dm_user and a.dm_ip = b.dm_ip and b.ss_key != ''
)
  and report_status = 0 and dm_user = 'u3';


I suggest writing something like this:

select a.report_id, a.dm_ip, a.dm_mac, a.dm_user,
       a.dm_os, a.report_time, a.sys_name,
       a.sys_user, a.sys_user_domain, a.ss_key,
       a.login_time, a.role_id, a.new_vlan_id
  from repcopy as a
  join (select max(report_time) as rtime, dm_user, dm_ip
          from repcopy
         where ss_key != ''
         group by dm_user, dm_ip) as materialized
    on a.report_time = materialized.rtime
   and a.dm_user = materialized.dm_user
   and a.dm_ip = materialized.dm_ip
 where a.report_status = 0
   and a.dm_user = 'u3';






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [offtopic] How do you name a table...

2010-04-08 Thread Justin Graf
On 4/8/2010 9:30 AM, Bill Moran wrote:
 In response to Ognjen Blagojevicogn...@etf.bg.ac.rs:


 Is this:
 a. Lookup table
 b. Classifier
 c. Cypher(er)?

 I'm looking for the appropriate term in English.
  
 I try to make it an ENUM when it's very unlikely to change, i.e. day of
 the week is a good candidate for an enum ... when's the last time that
 changed?

 For lookup tables that aren't static enough to be an enum, it usually
 ends up appended with _list (i.e. gender_list, county_list, etc)


As others have said, use an enum for things that are static, like day, week, 
month, etc.
For things that are not static and change, like system/application 
settings, I do something like this:

CREATE TABLE syssettings
(
   sys_id serial primary key,
   sys_group text,
   sys_value text,
   sys_displayvalue text,
   sys_datatype text DEFAULT 'text'
);

This allows for easily grouping values together and assigning the data 
type for casting, along with how the information is to be displayed in 
the user interface.

Example of what the data looks like: the data below is used to create an 
option pull-down list on a website.  So if the user of the application 
wants to add more options, all they have to do is make an entry.

15;ExamLevel;1;Level 1;integer
16;ExamLevel;2;Level 2;integer
17;ExamLevel;3;Level 3;integer
18;QuestionOrder;StandardList;Standard not random;text
19;QuestionOrder;RandomList;Random from List;text
20;QuestionOrder;Random;Random From All Questions;text
21;ExamType;MPI;MPI;char(10)
22;ExamType;RT;RT;char(10)
23;ExamType;UT;UT;char(10)
24;ExamType;ECT;ECT;char(10)
25;ExamType;LPI;LPI;char(10)
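
As a minimal usage sketch against the data above, the pull-down for exam levels 
could be built with:

SELECT sys_value, sys_displayvalue
  FROM syssettings
 WHERE sys_group = 'ExamLevel'
 ORDER BY sys_value::integer;

(The cast follows the sys_datatype recorded for that group.)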





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] round(x) function

2010-03-26 Thread Justin Graf
On 3/26/2010 12:12 PM, Tom Lane wrote:
 Gaietti, Mauro \(SELEX GALILEO Guest, 
 Italy\)mauro.gaie...@guests.selexgalileo.com  writes:

 This query:
 select round(0.5), round(0.5::integer), round(0.5::bigint), round(
 0.5::float ), round( 0.5::double precision ),round(cast(0.5 as double
 precision )),round(cast(0.5::double precision as numeric )); has strange
 result:
 1 1 1 0 0 0 1
  
 Is this correct?
  
 On most machines the float4/float8 operations follow the IEEE-754 spec's
 round to nearest even rule for such cases.  I think all the other ones
 are add 0.5 and truncate implementations.  It's unlikely we'll mess
 with either; and *highly* unlikely that we would change the float
 behavior since that's not even under our direct control (the hardware or
 libm is what's doing that).  There's some case to be made for making
 numeric and integer math do it the IEEE way, but I think that would
 displease at least as many people as it pleased ...

   regards, tom lane


This topic of which way of rounding is correct keeps coming up every few 
months.  I would be in favor of adding an option to the round() function to 
specify the method.

Leave the defaults as they are for backwards compatibility, meaning if no 
option is passed to round() it follows the current method.  I agree that 
changing how it works now would annoy many.

The option would be very simple, something like this:
  Round( 0.5, RoundToEven) = 0
  Round(-0.5, RoundToEven) = 0
  Round( 0.5, RoundUp)     = 1
  Round(-0.5, RoundUp)     = 0
  Round( 0.5, RoundDown)   = 0
  Round(-0.5, RoundDown)   = -1
  Round( 0.5, RoundToZero) = 0
  Round(-0.5, RoundToZero) = 0

There are so many methods of rounding to choose from
http://en.wikipedia.org/wiki/Rounding
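
In the meantime, a minimal sketch (only lightly checked) of a user-level 
round-half-to-even for numeric that leaves the built-in round() untouched:

CREATE OR REPLACE FUNCTION round_to_even(numeric) RETURNS numeric AS $$
    SELECT CASE
             -- exactly halfway and the truncated part is already even: keep it
             WHEN abs($1 - trunc($1)) = 0.5 AND trunc($1) % 2 = 0 THEN trunc($1)
             -- everything else behaves like the built-in round()
             ELSE round($1)
           END;
$$ LANGUAGE sql IMMUTABLE;

SELECT round_to_even(0.5), round_to_even(1.5), round_to_even(2.5), round_to_even(-0.5);
-- 0, 2, 2, 0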




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL]

2010-03-18 Thread Justin Graf
On 3/18/2010 12:52 PM, Scott Mead wrote:
 xtuple ERP does and the latest version of GNUCash can use postgres as
a backend too.

 --Scott M

 On Thu, Mar 18, 2010 at 1:11 PM, Garry Saddington 
ga...@schoolteachers.co.uk wrote:

 Does anyone know of a web based accounting(finance) package that uses
Postgresql as a backend?
 Thanks Garry

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


I don't know about the current version of xTuple, but the past ones had
numerous accounting bugs that management deemed not urgent to fix:
Trial Balance does not work
Rounding errors
Unit of measure errors
No audit trail in places
Zero traceability in the WIP module
No audit trail for past Inventory Count Cycles
Inventory Costing was completely worthless: it allows users to mix Weighted with
Standard Cost in the same warehouse (breaks accounting standards)
No WIP elevation.  You have a number in the GL, but try to figure out what
makes that number up based on what's in the WIP tables to make sure the GL agrees
with the WIP transactions.  Oh wait, they don't record the WIP transactions in
fine enough detail.


That's what I can remember from my previous job.

On the very bright side, it introduced me to all the fine folks at PG.


Re: [GENERAL] app table names

2010-03-16 Thread Justin Graf
On 3/16/2010 3:35 PM, Vick Khera wrote:
 On Tue, Mar 16, 2010 at 3:03 PM, Jamie Kahgeejamie.kah...@gmail.com  wrote:

 I'm curious what people consider best practice (or how do you do it)  to
 help ensure these name collisions don't happen.
  
 Do not mix data from multiple applications in one database. Use
 multiple databases to isolate them entirely.


That's not always a practical solution to the problem.

The apps may need to share numerous tables; duplicating the data and 
keeping it in sync can be a pain. Having to open numerous database 
connections to different databases is a resource hog.

What I have been doing of late is defining a PG_SCHEMA variable to tell 
the app where the data is located.  Tables common to many apps go into 
the public schema or a schema that's in the search path.  Selects look 
something like this:

Select * from  + PG_SCHEMA + foo
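
An alternative sketch that avoids building the query string at all: set the 
search path per application and leave the SQL unqualified (appschema and 
app_user are made-up names for illustration):

SET search_path TO appschema, public;                       -- per session
ALTER ROLE app_user SET search_path TO appschema, public;  -- or per login role

SELECT * FROM foo;   -- resolves to appschema.foo, falling back to public.foo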





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Daylight savings time confusion

2010-03-15 Thread Justin Graf
On 3/15/2010 2:40 PM, Rob Richardson wrote:
 Greetings!

 Our database monitors the progression of steel coils through the
 annealing process.  The times for each step are recorded in wallclock
 time (US eastern time zone for this customer) and in UTC time.  During
 standard time, the difference will be 5 hours, and during daylight
 savings time the difference will be 4 hours.

 I just looked at the record for a charge for which heating started just
 after 9:00 Saturday night, less than 3 hours before the change to
 daylight savings time.  The UTC time stored for this event is six hours
 later!

 The function that writes these times first stores the UTC time in a
 variable named UTCTimestamp:

  select into UTCTimestamp current_timestamp at time zone 'UTC';

 Then, later in the function, the two times get written into the record
 (along with some other stuff):

  update charge set
 status=ChargeStatus,fire_date=current_timestamp,
 fire_date_utc=UTCTimestamp, fire_user=FurnaceTender,
 updated_by=UserId,updated_date=current_timestamp where charge=ChargeNum;

 Can someone explain why fire_date is 2010-03-13 21:39:51.744 and
 fire_date_utc is 2010-03-14 03:39:51.744 for this record?

 There is another charge that began firing five and a half hours before
 the DST switch.  The difference between its fire_date and fire_date_utc
 times is five hours, as expected.

 RobR


My first thought is that the server is using libraries that don't know 
DST was brought forward three weeks earlier than last year, so its clock is 
all confused.

I would check the time on the PostgreSQL server, making sure it reads out 
correctly.  The below was run on PG 8.4 on a Windows 2008 server:

Select current_timestamp, current_timestamp at time zone 'UTC';
2010-03-15 16:43:11.382-04;2010-03-15 20:43:11.382




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Naming conventions for lots of stored procedures

2010-03-11 Thread Justin Graf
On 3/10/2010 11:52 PM, Chris Travers wrote:

 There are two major limitations here of schemas:

 1)  They can't be nested leading again to possible namespace ambiguity.
 2)  there are a number of requests to try to get the application to
 install into an arbitrary, nonpublic schema.

 If schemas could be nested this would solve both of these problems.

 However, if the above is anywhere near a complete list of schemas for
 1200 procedures, you must also have some strong naming conventions to
 prevent collisions.  I would be interested in what they are.

 Best wishes,
 Chris Travers


This is an app I took over; there was no strong naming convention, plus 
an ungodly amount of overloaded procedures.

The procedures use very, very long names, for example:
createardebitmemo(int, text, text date, numeric, text, int, int mint 
date, int int, numeric )
createarcreditmemo(integer, text, text, date, numeric, text, integer, 
integer, integer, date, integer, integer, numeric, integer, integer)

This means Create Accounts Receivable Debit Memo.

deleteaccount(integer)
deleteaccountingperiod(integer)
deleteaccountingyearperiod(integer)
deletecustomer(integer)

After moving the functions into schemas, this is how one would/could 
call them:
gl.deleteaccount(integer)
gl.deleteaccountingperiod(integer)
gl.deleteaccountingyearperiod(integer)
ar.deletecustomer(integer)
ar.createardebitmemo(int, text, text date, numeric, text, int, int mint 
date, int int, numeric )
ar.createardreditmemo(integer, text, text, date, numeric, text, integer, 
integer, integer, date, integer, integer, numeric, integer, integer)

Now, one problem is that if two functions have the same name and the same number and 
types of inputs, then PostgreSQL will throw an ambiguity error if the 
search path includes the two schemas where the functions are stored.

I wonder if any database out there allows for nesting schemas.  I'm 
at a loss as to why nesting would help solve any problem whatsoever.  I 
imagine the search path on some connections would be all-inclusive, so 
ambiguous names are not solved.  Also, I would not be a big fan of typing 
something like

AR.Customer.Editing.Delete(ID)

what has been gained???

Think if the search path was all-inclusive:
  AR.Contact.Editing.Delete
  WIP.WorkOrder.Delete

and this was called:
Select Delete(5784);

PostgreSQL would throw an ambiguity error: which Delete, the one in 
the AR.Customer, AR.Contact or WIP.WorkOrder schema?



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Naming conventions for lots of stored procedures

2010-03-10 Thread Justin Graf
On 3/10/2010 8:16 PM, Chris Travers wrote:
 Hi all;

 One of my applications currently has over 60 stored procedures and
 future versions will likely have several hundred.  I am wondering what
 folks find to be helpful naming conventions for managing a large
 number of stored procedures.  We tried using double underscores to
 separate module vs procedure names and that just became a mess.  I
 have found a few possible separators that might possibly work but they
 are aesthetically revolting (_$ for example, like select
 test_$echo(1);).

 I can't imagine I am the first person to run up against this problem
 and would rather ask advice of more experienced folks then to wander
 from one maintenance headache into a possibly far worse one.

 So, what are approaches each of you have taken in the past?

 Best Wishes,
 Chris Travers


Look into schemas.

They allow grouping tables and procedures logically, and access can be limited 
based on schemas.

What I did is group procedures, views, and tables into schemas to keep 
them logically grouped.
In one project there are 300 tables and 1,200 procedures, split across these 
schemas (a minimal setup sketch follows the list):
wip  (work in process)
sales
AR
AP
GL
public
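
A minimal setup sketch using schema and procedure names already mentioned in 
this thread (the function body is only a stand-in):

CREATE SCHEMA ar;
CREATE SCHEMA gl;

CREATE FUNCTION ar.deletecustomer(cust_id integer) RETURNS void AS $$
BEGIN
    -- stand-in body; the real procedure would do the actual delete
    RAISE NOTICE 'would delete customer %', cust_id;
END;
$$ LANGUAGE plpgsql;

SELECT ar.deletecustomer(42);          -- callers can qualify explicitly...
SET search_path TO ar, gl, public;
SELECT deletecustomer(42);             -- ...or rely on the search path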




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] managing tablespaces like files?

2010-03-09 Thread Justin Graf
On 3/9/2010 12:07 AM, Sam Carleton wrote:
 I would like to thank both John and Scott for the help.  It is very 
 clear to me that PostgreSQL isn't the ideal solution for my current 
 model.  The conversation has gotten me thinking of ways the model 
 could be modified to work with PostgrSQL (and other client/server 
 RDBM).  Thus I will return to the drawing board just to double check 
 to see if there might be a better model.

 Thank you all, this has been truly valuable!

 Sam

Hello Sam:

I've had similar conversations with application developers who are used 
to working with Access databases, aka file-based databases. When it comes 
time to back up or move the databases to another computer, or share the 
database over a file share, they look at databases as just files to be 
copied around from machine to machine.  No database server like Oracle, 
PostgreSQL, MSSQL, MySQL, or DB2 allows simply copying the database to 
another location/server and having it start right up. None are going to work 
that way.

These databases expressly deny direct access to the files and make the 
assumption that they're the only process accessing the files.  Another problem 
you may run into and need to think about is that anti-virus apps have a 
tendency to create all kinds of problems with database servers.

The application can still work and make the user's life even easier; the 
approach to how backup, restore and moving the databases around are going to 
work just has to be different.

Take a look at a couple of examples how others have solved this problem

One company using PostgreSQL, aimed directly at small companies (where 
there is no IT help, let alone money to pay for IT), wrote a very nice and easy 
to use front end for their application to back up, restore and move the 
application to another server.  Go to www.3cx.com and download their 
app; they have a free version. Check out the instructions for backup and 
restore: it's a very clean, easy interface for how they back up and restore the 
database.

Another company that has a very nice and easy to use backup and restore 
of a MySQL database is Gordano Messaging Server (www.gordano.com). Their 
backup, recovery and moving to another server is very simple.  A simple 
little command-line tool backs up the database, user profiles, and other 
system settings into a nice little file called settings.txt.  They also 
have a GUI tool that will automate the backing up of the database.







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-04 Thread Justin Graf
On 3/4/2010 3:51 AM, Richard Huxton wrote:
 On 04/03/10 01:35, Craig Ringer wrote:

 http://support.microsoft.com/kb/942976/en-us

 Classy. Even better - according to the linked page, the 64 bit version 
 is in the System32 folder - yippee!

 * The 32-bit version of the Odbcad32.exe file is located in the 
 %systemdrive%\Windows\SysWoW64 folder.
 * The 64-bit version of the Odbcad32.exe file is located in the 
 %systemdrive%\Windows\System32 folder.

Some of the naming conventions for 64-bit Windows suck.  SysWoW64 = System, 
Windows on Windows 64; where does anyone get the idea that this is where the 
32-bit apps are stored?  Keeping System32 around and using it to store 
64-bit apps is insane.

To pretty much anyone outside MS, a sane human would expect 64-bit apps 
in SysWoW64 and 32-bit apps in System32. :'(





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-04 Thread Justin Graf
On 3/4/2010 10:00 AM, Greg Stark wrote:
 On Thu, Mar 4, 2010 at 2:14 PM, Justin Grafjus...@magwerks.com  wrote:

 To pretty much anyone outside MS, a sane human would think 64 bit apps
 in SysWoW64 and 32Bit apps in System32. :'(

  
 Ah, but you all are forgetting that the 32 here is to distinguish it
 from the default odbc interface which as i recall was a *16* bit
 interface. I'm not sure what exactly that means but as a result the
 32-bit odbc configuration is entirely separate from the regular odbc
 configuration. It's not the instruction set that the dll uses it's
 which set of shared data structures it uses and which api it provides


On the wayback machine we had System and System32.  System32 came 
about in Windows 3.1, if memory serves, to separate where 16-bit and 32-bit 
files are placed.  This time around MS is not doing that: System32 is being 
used for 64-bit apps, and SysWoW64 is where all the legacy 32-bit apps are 
getting stuck.

32-bit apps in a directory with 64 in its name, and 64-bit apps in a 
directory whose intention was/is 32-bit apps. =-O






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread Justin Graf
On 3/3/2010 3:40 PM, Michael Gould wrote:
 On my machine the UUID that is returned is 16 bytes and I cannot make 
 out any relevant numbers from the UUID key in the citystateinfo 
 table.  I've tried this in a Windows XP machine and a Windows 7 64 bit.

 Now here is the weird thing.  I did a complete backup of my database 
 and had the author restore the data on his.  It works every time 
 there.  He then did a pg_dumpall and I reloaded my db with his script 
 and while his runs all of the queries fine, mine is still not 
 returning the proper length UUID column.

 I'm using the UTF ODBC driver 8.4 and Postgres 8.4.2.


You state that PgAdmin returns the correct result on your machine when 
connecting to the same database that returns the wrong result in the Data 
Conversion App, RIGHT???

If so, I'm betting it has to do with the ODBC driver doing something odd, or 
the Data Conversion App is doing something odd.

What is the name of app being used to convert the data?

Which ODBC are you using ?
http://psqlodbc.projects.postgresql.org/
or
http://projects.commandprompt.com/public/odbcng/

Note: PgAdmin does not use ODBC interface .




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Scratching my head why results are different between machines.

2010-03-03 Thread Justin Graf
On 3/3/2010 5:16 PM, Michael Gould wrote:
 One thing I've noticed is that on my machines, when I install the odbc
 driver I get no error messages but when I look in the ODBC administrator I
 do not see any entry for PostGres in the drivers list.

 I do know that it somehow is working because the DMC conversion tool only
 uses ODBC and builds it's data structures based on what it gets back from
 querying the database.

 Programs like WinSQL also work and I believe it uses ODBC.

 In the past I believe I remember seeing the Postgres driver listed in the
 ODBC admin tool under the drivers tab.

 I'm running on a Windows 7 64 bit machine and I'm logged into our domain as
 the domain administrator.  Is there something else I need to do to install
 the odbc driver under windows?  I've even turned UAC off and it didn't seem
 to help


I can't help on the Windows 7 ODBC driver not appearing in the drivers list.

If you can manage to configure a Data Source Name, turn on the logging,

then connect using this DSN in your DMC app and just do a simple select.

The log will get big quickly, so keep the selects small and use LIMIT.  
This will let you see what the ODBC driver is up to.

Open the C:\mylog_XXX file and you will find all the commands 
sent to PostgreSQL and the results.

Then look for something like this.  This will tell us what the 
select looks like and what is being sent to the DMC app.

*[1444-387.642]Exec_with_parameters_resolved: copying statement params: 
trans_status=1, len=51, stmt='Select user_password from mediawiki.mwuser 
limit 10'*
[1444-387.645]   stmt_with_params = 'Select user_password from 
mediawiki.mwuser limit 10'
[1444-387.646]about to begin SC_execute
[1444-387.647]   Sending SELECT statement on stmt=02C5D8C0, 
cursor_name='SQL_CUR02C5D8C0' qflag=0,1
[1444-387.648]CC_send_query: conn=04313E00, query='Select user_password 
from mediawiki.mwuser limit 10'
--snip--
*[1444-387.665]qresult: len=44, 
buffer=':B:e2e28556:8ff13b68ebc64eccad0921d1571bf08c'*
[1444-387.666]qresult: len=0, buffer=''
*[1444-387.667]qresult: len=44, 
buffer=':B:60a118ba:592cb633245cc330d48cc2e52c922f90'*
[1444-387.667]qresult: len=0, buffer=''
[1444-387.668]qresult: len=0, buffer=''
[1444-387.668]qresult: len=0, buffer=''
[1444-387.669]qresult: len=0, buffer=''
[1444-387.669]qresult: len=0, buffer=''
[1444-387.670]qresult: len=0, buffer=''








[GENERAL] Source RPMs for PostgreSQL 7.4.27 on RHEL4

2010-02-16 Thread Justin Pasher

It seems like I run into this problem with every new version release.

I'm having trouble finding the latest Postgres 7.4.27 source RPMs. I was 
able to find the latest 8.1.19 source RPMS here 
(http://yum.pgsqlrpms.org/srpms/8.1/redhat/rhel-4-i386/), which I also 
need, but the 7.4 source RPMs don't exist in a similar directory 
structure (http://yum.pgsqlrpms.org/srpms/7.4/redhat/rhel-4-i386/).


Any idea where I can grab the 7.4.27 source RPMs? Thanks.

--
Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] windows7 login- user account

2010-02-11 Thread Justin Graf
On 2/10/2010 7:15 PM, paul e wrote:
 Before Installed postgresql Windows7 went straight to my user account.
 Now when it boots I have to go to a selection page where I choose 
 between my user account and a postgresql user account. Is there any 
 way to bypass this so it boots directly to my user account?
 

go here for instructions
http://www.howtogeek.com/howto/windows-vista/make-windows-vista-log-on-automatically/




Re: [GENERAL] Best way to handle multi-billion row read-only table?

2010-02-09 Thread Justin Graf
On 2/9/2010 12:47 PM, Asher wrote:
 Hello.

 I'm putting together a database to store the readings from various 
 measurement devices for later processing. Since these things (water 
 pressure monitors attached to very large water pipes) take readings at 
 200Hz and are typically deployed over multiple sites for several 
 months at a time I've got many billions of rows of data, each (at the 
 moment) with the following simple format:

 value REAL NOT NULL,
 sample_time TIMESTAMP WITH TIME ZONE NOT NULL,
 channel INTEGER REFERENCES channel(id) NOT NULL

 (Where the channel table contains metadata to identify the 
 particular sensor, data logger, etc. used to obtain the data and the 
 combination of channel and sample_time is unique.)



Well, first: is that 200 Hz, meaning 200 samples per channel per second?   
That is very fast sampling for a pressure sensor; I would be surprised if 
the meters are actually giving real results at that rate.  I would look 
at reducing that down to what the meter is actually capable of sending.  
What kind of A/D card is being used?  This affects what makes sense to 
record.

I would look into table partitioning
http://www.postgresql.org/docs/current/static/ddl-partitioning.html
http://wiki.postgresql.org/wiki/Table_partitioning

One big index over such small records will not be a big win, because 
the index is going to be about the same size as the table.
Look into limiting the number of records each index covers.
http://www.postgresql.org/docs/8.4/static/sql-createindex.html
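
For illustration, a minimal sketch of inheritance-based partitioning by channel 
with CHECK constraints (the parent table name sample and the channel value 1 
are made up; the columns come from the post above):

CREATE TABLE sample (
    value        real NOT NULL,
    sample_time  timestamptz NOT NULL,
    channel      integer NOT NULL REFERENCES channel(id)
);

-- one child table per channel; constraint exclusion prunes the others
CREATE TABLE sample_ch1 (
    CHECK (channel = 1)
) INHERITS (sample);

-- within a partition only sample_time needs indexing
CREATE INDEX sample_ch1_time_idx ON sample_ch1 (sample_time);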



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best way to handle multi-billion row read-only table?

2010-02-09 Thread Justin Graf
On 2/9/2010 4:41 PM, Asher Hoskins wrote:

 Thanks for that, it looks like partitioning is the way to go. I'm 
 assuming that I should try and keep my total_relation_sizes less than 
 the memory size of the machine?
This depends on what the queries look like.  As others have stated, when 
partitioning you have to consider how the data is queried.



 If I partition so that each partition holds data for a single channel 
 (and set a CHECK constraint for this) then I can presumably remove the 
 channel from the index since constraint exclusion will mean that only 
 partitions holding the channel I'm interested in will be searched in a 
 query. Given that within a partition all of my sample_time's will be 
 different do you know if there's a more efficient way to index these?
Given that the timestamp will most likely be in the WHERE clause, no.  On the 
plus side, it's only 8 bytes.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Order by and strings

2010-02-08 Thread Justin Graf
On 2/8/2010 7:09 PM, Fredric Fredricson wrote:
 Hi!
 New to the list with a question that I cannot find the answer to in 
 the manual or on the internet but I suspect is trivial. If somebody 
 could point me in the correct direction I would be greatful.

 This is what I do (condensed, of course):
 # create table tmp ( x text ) ;
 CREATE TABLE
 # insert into tmp(x) values ('a'),('c'),('-b') ;
 INSERT 0 3
 # select * from tmp order by x ;
  x
 
  a
  -b
  c
 (3 rows)



It has to do with the collation you are using; see
http://www.postgresql.org/docs/8.1/interactive/charset.html
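
A quick illustration: linguistic collations such as en_US largely ignore the 
leading dash when sorting, while a byte-order ("C") collation puts '-b' first. 
The per-expression COLLATE clause below needs PostgreSQL 9.1 or later; on older 
releases the collation is fixed when the cluster/database is created.

SELECT * FROM tmp ORDER BY x COLLATE "C";
 x
----
 -b
 a
 c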




[GENERAL] unsubscribe

2009-12-21 Thread Justin Alston


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Automatic truncation of character values casting to the type of a column type

2009-12-17 Thread Justin Bailey
On Wed, Dec 16, 2009 at 7:20 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Really?  Works for me, in everything back to 7.3.


I must be missing something, because this function fails:

  CREATE OR REPLACE FUNCTION insertShort()
  RETURNS VOID AS
  $BODY$
  DECLARE
s Short.shortCol%TYPE;
  BEGIN
SELECT longCol INTO s FROM Long
    WHERE char_length(longCol) > 20;
 INSERT INTO Short (shortCol) VALUES (s);
  END;

$BODY$
  LANGUAGE 'plpgsql' VOLATILE;

If s was automatically truncated, the insert would succeed, but it
fails with a value too long error.

p.s. I used the definitions from my prior email:

 CREATE TABLE Long (longCol varchar(40) );
 CREATE TABLE Short (shortCol varchar(20) );

INSERT INTO Long VALUES ('FOOBAR'), ('BAZ'),
(CAST('2314J1L234J21LK342JKL32J32KL4J123LK4J13L4' AS VARCHAR(40)));
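
For what it's worth, a minimal sketch (untested on 8.2; the function name is 
made up) that sidesteps the hard-coded length by reading 
character_maximum_length from information_schema at run time and truncating 
explicitly:

  CREATE OR REPLACE FUNCTION insert_short_truncated()
  RETURNS VOID AS
  $BODY$
  DECLARE
    max_len integer;
    s text;
  BEGIN
    -- look the current length limit up instead of hard-coding 20
    SELECT character_maximum_length INTO max_len
      FROM information_schema.columns
     WHERE table_name = 'short' AND column_name = 'shortcol';

    SELECT longCol INTO s FROM Long
     WHERE char_length(longCol) > 20;
    INSERT INTO Short (shortCol) VALUES (substring(s FROM 1 FOR max_len));
  END;
  $BODY$
  LANGUAGE 'plpgsql' VOLATILE;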

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Automatic truncation of character values casting to the type of a column type

2009-12-16 Thread Justin Bailey
Greetings! I am trying to avoid the old problem of inserting a 40
character string into a 20 character field. However, I'd like to avoid
hard-coding the acceptable length (20). Is there a way to say cast to
the same type as a given column? E.g., if I have tables Long and
Short:

  CREATE TABLE Long (longCol varchar(40) )
  CREATE TABLE Short (shortCol varchar(20) )

And this data:

   INSERT INTO Long VALUES ('FOOBAR'), ('BAZ'),
(CAST('2314J1L234J21LK342JKL32J32KL4J123LK4J13L4' AS VARCHAR(40)))

Can make values inserted into shortCol have a maximum length of 20
without hard-coding that value? Something like:

  INSERT INTO Short (ShortCol)
(SELECT CAST(Long.longCol as Short.shortCol) FROM LONG)

I am using postgres 8.2.

Clearly this is a toy example. In the real world, I insert or update
values in my target table using a stored procedure. I want to
future-proof my stored procedure against the column lengths on the
target table changing. Otherwise, I have to update my sproc with new
lengths if the table ever changes. I have tried using the PL/PGSQL
feature where types can be copied in a declaration:

  DECLARE
 myVal Short.shortCol%TYPE;
  ...

But I can still put values which are too long into that variable, so
it doesn't help me. Sadly, using the same syntax in a CAST fails in
various ways:

  UPDATE Short SET shortCol = CAST(myVal AS Short.shortCol) -- schema
Short does not exist error
  UPDATE Short SET shortCol = CAST(myVal AS Short.shortCol%TYPE) -- syntax error
  UPDATE Short SET shortCol = CAST(myVal AS (Short).shortCol) -- syntax error

Thanks in advance for any advice

Justin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




Re: [GENERAL] Problem with plpython

2009-10-30 Thread Justin Pasher

Steve Erickson wrote:
I'm running PostgreSQL 8.3 with pl/python 8.3.  I am getting a 
different date/time format when executing the below examples.  The 
results are the same whether I use os.popen or os.system.  In 
plpython, I run:

  import os
  cmd = 'ls -al /var/log/messages > /var/tmp/log'
  x = os.popen(cmd)
  for aline in x.readlines():
      plpy.notice('aline = %s', aline)

and the contents of /var/tmp/log are -rw-r- 1 syslog adm 495523 
Oct 30 11:52 /var/log/messages


When, within Python, I run:
>>> cmd = 'ls -al /var/log/messages > /var/tmp/log'
>>> x = os.popen(cmd)
>>> for aline in x.readlines():
...   print aline

the contents of /var/tmp/log are -rw-r- 1 syslog adm 23591 
2009-10-30 13:03 /var/log/messages


How, using plpython, can I get the output date/time in the same format 
as when executing from within python itself?


Different locale settings will produce different formatting for the ls 
command. For example...



[just...@justinp /tmp/test]$ LANG=en_US.UTF-8 ls -l
total 0
-rw-rw-r-- 1 justinp justinp 0 2009-10-30 17:32 aa
-rw-rw-r-- 1 justinp justinp 0 2009-10-30 17:32 bb
-rw-rw-r-- 1 justinp justinp 0 2009-10-30 17:32 cc

[just...@justinp /tmp/test]$ LANG=C ls -l
total 0
-rw-rw-r-- 1 justinp justinp 0 Oct 30 17:32 aa
-rw-rw-r-- 1 justinp justinp 0 Oct 30 17:32 bb
-rw-rw-r-- 1 justinp justinp 0 Oct 30 17:32 cc


It seems to me that you can just override the LANG variable directly in 
your call to ls, since it's just running a shell command.
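
A minimal sketch of that approach as a pl/python function (the function name
list_messages and the locale value are illustrative assumptions; plpythonu is
the untrusted flavor of the language, which is what permits shelling out):

CREATE OR REPLACE FUNCTION list_messages() RETURNS text AS $$
import os
# Force the locale for this one command so ls prints ISO-style dates
# (e.g. 2009-10-30 13:03) instead of the abbreviated "Oct 30 13:03" form.
cmd = 'LANG=en_US.UTF-8 ls -al /var/log/messages'
return ''.join(os.popen(cmd).readlines())
$$ LANGUAGE plpythonu;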



--
Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] interface for non-SQL people

2009-10-10 Thread justin

pere roca wrote:

  hi,
  is there some nice tool out there to let people who don't know SQL construct
their queries? I'm using pgAdmin III but I know some SQL.
  is there no option other than building an HTML page with forms and drop-down
menus...?

  thanks,
  pERE
  


Your best bet, though it is not free, is Crystal Reports; it can do all of the
above. It can automate creation of HTML, email, export to Excel, ad hoc
filters in drop-downs and text inputs, graphs, and all kinds of other stuff.
http://www.sap.com/solutions/sapbusinessobjects/sme/reporting-dashboarding/index.epx


It's designed to be easy to use for people with no SQL or programming
background.


There are other software packages out there. Note that I have never used any
of these, so I can't comment on them:
http://www.inetsoft.com/products/StyleReportEE/

http://www.pentaho.com/products/reporting/
http://www.inetsoftware.de/products/crystal-clear
http://www.actuate.com/products/
http://www.agata.org.br/
http://jasperforge.org/plugins/project/project_home.php?projectname=jasperreports

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How useful is the money datatype?

2009-10-04 Thread justin



Rich Shepard wrote:


  In the early and mid-1980s we used a procedure for business 
applications
involving money that worked regardless of programming language or 
platform.
To each (float, real) monetary amount we added 0.005 and truncated the 
result
to two digits on the right of the decimal point. In almost all cases, 
this

allowed financial calculations to be correct to the nearest penny.

  Financial calculations are still imperfect. Now and then I see this in
both my business and personal bank statements when reconciliation is 
off by
a penny or two. The transaction amounts (debits and credits) match, 
but the
bank comes out with a different total than do I. This is usually only 
for a

month or two before we are once again in agreement.

Rich


Rich, what causes the difference you are referring to is the rounding method
used: banker's rounding, a.k.a. round-to-even, vs. the basic rounding we are
taught in school, a.k.a. round-half-up.
http://en.wikipedia.org/wiki/Rounding


In general, what I do is keep more digits in the number than are needed, then
round after all the calculations are done. A common problem
applications/databases suffer from is inconsistent precision: in one place
the database uses 4 digits, in another 6, in another 0, and in another 2.
Be consistent in the use of precision; if not, be prepared to untangle a
nightmare.
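
A minimal sketch of that habit in plain SQL (the column names qty and
unit_price are made up for illustration): carry the full precision through
the arithmetic and round once, at the end.

  -- keep full numeric precision during the calculation; round only the result
  SELECT round(sum(qty * unit_price), 2) AS invoice_total
    FROM (VALUES (3, 19.995::numeric),
                 (2,  0.3333::numeric)) AS t(qty, unit_price);

Note that round() on numeric rounds ties away from zero, so if banker's
rounding is required it has to be handled separately.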


The money type, I have found, is absolutely worthless for doing math, but it
is great for simplifying formatting:


select 123456789::text::money;

set session lc_monetary to 'fr_FR.UTF-8';
select 123456789::text::money;


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] dump time increase by 1h with new kernel

2009-09-27 Thread Justin Pryzby
When we upgraded from linux-2.6.24 to linux-2.6.27, our pg_dump
duration increased by 20%.  My first attempt at resolution was to boot
with elevator=deadline.  However that's actually the default IO
scheduler in both kernels.

The two dmesg's are at:
https://www.norchemlab.com/tmp/linux-2.6.24-22.45-server
https://www.norchemlab.com/tmp/linux-2.6.27-14.41-server

The database partition is: xfs / lvm / aic79xx / scsi.

Booting back into the .24 kernel brings the pg_dump down to 5 hours
(rather than 6, for daily 20GB output compressed by pg_dump -Fc).

Does anyone know what might be different which could cause such a
drastic change?

Thanks,
Justin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can't find SRPMs for PG 8.1.18 on RHEL4

2009-09-25 Thread Justin Pasher

Devrim GÜNDÜZ wrote:

On Thu, 2009-09-24 at 15:43 -0500, Justin Pasher wrote:
  
I'm having trouble finding the source RPMs for PostgreSQL 8.1.18 on 
RHEL4. I've tried looking in the following places with no luck (I can 
only find the regular RPMs).


http://yum.pgsqlrpms.org/8.1/redhat/rhel-4-i386/



...because that URL is for binary packages. Please visit here:

http://yum.pgsqlrpms.org/srpms/8.1/redhat/rhel-4-i386/repoview/postgresql.htm


Thanks. That's what I needed.

Out of curiosity, how do you actually get to those links from the web 
site? I always seem to have trouble finding the link to the SRPMs tree.



--
Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Can't find SRPMs for PG 8.1.18 on RHEL4

2009-09-24 Thread Justin Pasher
I'm having trouble finding the source RPMs for PostgreSQL 8.1.18 on 
RHEL4. I've tried looking in the following places with no luck (I can 
only find the regular RPMs).


http://yum.pgsqlrpms.org/8.1/redhat/rhel-4-i386/
http://www.postgresql.org/ftp/binary/v8.1.18/linux/srpms/redhat/rhel-4-i386/

Any suggestions?

--
Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

