Re: [GENERAL] Postgres 9.0.4 replication issue: FATAL: requested WAL segment 0000000100000B110000000D has already been removed

2011-11-21 Thread Ashish Gupta
Hi Tomas,

Thanks. Increasing the wal_keep_segments to 4000 did the trick.

I will set up WAL archive as well.
Thanks again.

Ashish.
On Sat, Nov 19, 2011 at 5:59 PM, Tomas Vondra  wrote:

> Hi,
>
> On 19 Listopad 2011, 10:44, Ashish Gupta wrote:
> > I searched on various forums, where people encountered similar error,
> > however in all such issues WAL file existed on Master. In this case
> Master
> > is not retaining the WAL file required by the Slave.
> >
> > I am unable to understand as to why Master is not retaining the WAL
> files.
> > Any pointer/suggestions would be helpful.
> > Thanks for attention.
>
> The cause is very simple - the standby needs all WAL segments created
> since the backup started, but the master removes some of them. There are
> two ways to fix this:
>
> 1) increase the wal_keep_segments so that enough segments are kept
>
> It seems that the slave asked for a segment the master had already
> removed - almost 4000 segments behind, if I'm counting correctly. That
> means your database is either quite busy or the backup takes a very long
> time.
>
> This stores all the data on the master, so you'll have to keep that in mind
> when planning the capacity. For example, 4000 segments is almost 64GB.
>
> 2) Set up a WAL archive - a separate instance where the WAL segments are
> kept. See how the archive_command works. And there's pg_archivecleanup for
> maintenance of the archive.
>
> Tomas
>
>
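For reference, the WAL-archive setup Tomas suggests in point 2 boils down to a few settings on the master. This is only a minimal sketch with hypothetical paths, modeled on the documented example; the exact command is site-specific and must return 0 on success:

```ini
# postgresql.conf on the master (paths are hypothetical)
wal_level = archive        # or hot_standby if the standby serves queries
archive_mode = on          # requires a restart
archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'
```

pg_archivecleanup can then be pointed at /mnt/archive to trim segments the standby no longer needs.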


Re: [GENERAL] How to install latest stable postgresql on Debian

2011-11-21 Thread Peter Eisentraut
On sön, 2011-11-20 at 12:09 +0200, Andrus wrote:
> Debian seems to require update-rc.d and Centos chkconfig
> How to use single command for every distro ?

apt-get install chkconfig

> "/etc/init.d/postgresql start" works in all distros. Adding
> postgresql to startup requires different commands in different
> distros ?!

PostgreSQL is started automatically on Debian.  Even the backports.  You
did something funny if it didn't work.
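For reference, the distro-specific equivalents the thread mentions look like this (run as root; service name may differ per packaging):

```bash
# Debian/Ubuntu
update-rc.d postgresql defaults

# RHEL/CentOS
chkconfig postgresql on
```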



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table Design question for gurus (without going to "NoSQL")...

2011-11-21 Thread Tomas Vondra
On 21 Listopad 2011, 4:17, David Johnston wrote:
> On Nov 20, 2011, at 20:50, Phoenix Kiula  wrote:
>
>> On Mon, Nov 21, 2011 at 7:26 AM, Gavin Flower
>>  wrote:
>>
>>> How about having 2 indexes: one on each of ip & url_md5? Pg will
>>> combine the
>>> indexes as required, or will just use one if that is best.
>>
>>
>>
>> Thanks Gavin. Question: what if I have a joined index? If from a
>> joined index I only use the first column (say, "ip") will a joined
>> index still be used?
>>
>> It is cleaner to create two indexes for the two columns. Which is
>> recommended?
>>
>
> An index on (a, b) can be used for queries involving only a but not for
> those involving only b.

That is not true since 8.2 - a multi-column index may be used even for
queries without conditions on leading columns. It won't be as effective as
with conditions on leading columns, because the whole index must be
scanned, but it's usually much cheaper than keeping two indexes (memory
requirements, overhead when inserting data etc.)

Check this:
http://www.postgresql.org/docs/8.2/interactive/indexes-multicolumn.html
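For example (table and column names assumed, not from the thread), a two-column index can still serve a query on the trailing column alone:

```sql
-- Hypothetical table with a two-column index
CREATE INDEX idx_ip_urlmd5 ON links (ip, url_md5);

-- Condition on the leading column: an efficient index scan
EXPLAIN SELECT * FROM links WHERE ip = 999;

-- Condition on the trailing column only: since 8.2 the planner may still
-- use the index, but it has to scan the whole of it
EXPLAIN SELECT * FROM links WHERE url_md5 = 'abcd';
```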



Tomas




Re: [GENERAL] Table Design question for gurus (without going to "NoSQL")...

2011-11-21 Thread Tomas Vondra
> On Nov 20, 2011, at 21:33, Phoenix Kiula  wrote:
>
> My big table now has about 70 million rows, with the following columns:
>
>  alias   | character varying(35)
>  url | text
>  modify_date | timestamp without time zone
>  ip  | bigint
>
>
> For each IP address (user of my application) I want to have a unique
> URL. So I used to have a UNIQUE constraint on IP, URL. But the index
> based on this became huge, as some URLs are gigantic. so I introduced
> an md5 of the URL:
>
>  url_md5 | varchar(32)

I don't understand this. So you want to allow exactly one URL for an IP
address? Or do you want to allow only one row with the same (IP, URL)
values? Because that's exactly what a UNIQUE index on (IP, URL) does.

If you want to allow just a single URL for an IP, you should create an
index on IP only.

> I now have two scenarios:
>
> 1. To have an index (unique?) on "(ip, url_md5)"
>
> 2. To have an index on just the "ip". This way a query that tries
> to match   "...WHERE ip = 999 AND url_md5 = ''..." will
> still look only at the ip bit of the index, then refine it with the
> url_md5.

Have you actually performed any benchmarks with this? Just create the
indexes, run several (say 100) queries and that should give you an idea
which of the options is better.

We can speculate on the pros/cons of those options, but both are viable
under certain conditions. For example, if there's just a very small number
of URLs per IP, then #2 is probably going to be better. But if there's an
enormous number of URLs per IP, then a multi-column index is probably
going to perform better.

But those are only guesses - try both options, run a few queries (not on 
the overloaded system you're struggling with - that'd skew the results)
and show us EXPLAIN ANALYZE of the queries.
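A quick way to run that comparison (table name and values are made up) is to build each candidate index and compare the EXPLAIN ANALYZE output of a representative query:

```sql
-- Option 1: multi-column unique index
CREATE UNIQUE INDEX idx_ip_urlmd5 ON links (ip, url_md5);

-- Option 2: single-column index
CREATE INDEX idx_ip ON links (ip);

-- Representative query from the thread; compare actual times and row counts
EXPLAIN ANALYZE
SELECT alias FROM links
WHERE ip = 999 AND url_md5 = 'd41d8cd98f00b204e9800998ecf8427e';
```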

> Questions:
>
> 1. Instead of md5, is there any integer hashing algorithm that will
> allow me to have a bigint column and save a lot hopefully in both
> storage space and speed?  (Some very useful points mentioned here:
> http://stackoverflow.com/questions/1422725/represent-md5-hash-as-an-integer
> )

You can obviously take an MD5 hash, which is just a 16-byte value, and use
the first 4 bytes as an integer. The problem with this is that it probably
increases collisions significantly, i.e. it may indicate that two URLs are
the same although the URLs are different.
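If a bigint key is the goal, one common PostgreSQL idiom (a sketch, not something proposed in the thread) converts the first 16 hex digits of the MD5 into a bigint, with the same caveat about more collisions than the full hash (and note the result can be negative, since bigint is signed):

```sql
-- Take the first 8 bytes (16 hex digits) of the MD5 as a bigint
SELECT ('x' || substr(md5('http://example.com/'), 1, 16))::bit(64)::bigint;
```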

> 2. If I do go with the above scenario #1 of a joint index, is there
> any way I can save space and maintain speed? Partitioning etc are out
> of the question.

What is the index size, anyway? You've mentioned it's huge, but what is
the actual size?

> With a growing web database, I am sure many people face this
> situation. Are nosql type databases the only sane solution to such
> massive volumes and throughput expectations (e.g., CouchDb's MemBase)?

Well, partitioning is usually a good approach to problems like these.

Have you thought about moving the URLs into a separate table? I.e. instead
of this table

 alias   | character varying(35)
 url | text
 url_md5 | character varying(32)
 modify_date | timestamp without time zone
 ip  | bigint

you'd have two tables - one for keeping the URLs:

 id  | integer
 url | text
 url_md5 | character varying(32)

and the current one, referencing the URLs

 alias   | character varying(35)
 url_id  | integer
 modify_date | timestamp without time zone
 ip  | bigint

That'd allow you to create a UNIQUE index on (ip, url_id), which should be
much smaller than the current one. But handling the inserts would be
significantly more complex (you'd have to check existence of the URL,
insert it etc.).

Tomas




[GENERAL] wal archiving on a hot-standby server

2011-11-21 Thread Enrico Sirola
Hello,
is it possible to archive the WAL files received by a hot-standby server? I
noticed nothing about this in the pgsql docs. The idea is to archive logs in
two locations, at the primary site and at the replica site (over a wan) in 
order to be able to perform a PITR also at the replica site.
Thanks a lot for your help,
Enrico




Re: [GENERAL] how could duplicate pkey exist in psql?

2011-11-21 Thread Yan Chunlu
got it.  thank you very much for your help.   I found out this problem too
late, and there is no backup.

luckily there was not too much data for this, and my app keeps running
without error.

I am not sure if they are related but I could not use pg_restore to import
data dumped by "pg_dump -Fc";

pg_restore prints an error message about a "duplicate primary key", and
the table ends up empty. No data has been imported.

pg_restore is supposed to import the data and ignore the errors. Does
anyone have a similar problem?

On Thu, Nov 17, 2011 at 11:08 PM, Edson Richter wrote:

>  Em 17-11-2011 09:21, Yan Chunlu escreveu:
>
> I am using pgpool's replication feature, it does copy pg_xlog from one
> server to another, was that possible cause of the problem?
>
>
> I did not mean that this IS your problem, I just gave you a tip regarding
> a problem I had in the past that happened to have the same symptom.
>
> This scenario only happens when your script copies data over its own
> data... like in "rsync -ar
> root@127.0.0.1:/var/lib/pgsql/9.0/data/* /var/lib/pgsql/9.0/data/"
>
> the command above is highly dangerous because it copies data over the
> network link over its own data... if you have transactions running during
> the command above, you will get a crash (and, in my case, I had duplicate
> primary keys).
>
> It would be better to check whether this could be happening to you... some
> script overwriting data using rsync, cp, etc. I had no other situation where
> PostgreSQL allowed duplicate keys.
>
> Hope this helps,
>
> Edson.
>
>
>
>
>  thanks for the help!
>
> On Thu, Nov 17, 2011 at 5:38 PM, Edson Richter wrote:
>
>>
>> Em 17-11-2011 03:19, Yan Chunlu escreveu:
>>
>>  recently I have found several tables has exactly the same pkey,  here is
>>> the definition:
>>> "diggcontent_data_account_pkey" PRIMARY KEY, btree (thing_id, key)
>>>
>>>
>>> the data is like this:
>>>
>>>   159292 | funnypics_link_point   | 41
>>>
>>>   | num
>>>   159292 | funnypics_link_point   | 40
>>>
>>>   | num
>>>
>>>
>>> I could not even update this record.
>>>
>>> really confused about how could this happen... thanks!
>>>
>>
>>  I know of one scenario where this can happen on Linux. In my case, it was
>> caused by a "rsync"... instead of copying to a different location, the
>> script was copying pg_xlog over its own pg_xlog.
>>
>> I did this stupid thing once, and learned for a lifetime. Lost two hours of
>> work to recover everything (from backup, at least I had one).
>>
>> Be careful with rsync and cp, since Linux does not block files from being
>> overwritten even when they are in use.
>>
>>
>> Regards,
>>
>> Edson.
>>
>>
>>
>
>


Re: [GENERAL] wal archiving on a hot-standby server

2011-11-21 Thread Simon Riggs
On Mon, Nov 21, 2011 at 10:58 AM, Enrico Sirola  wrote:

> is it possible to archive the WAL files received by a hot-standby server? I
> noticed nothing about this in the pgsql docs. The idea is to archive logs in
> two locations, at the primary site and at the replica site (over a wan) in 
> order to be able to perform a PITR also at the replica site.
> Thanks a lot for your help,

Not directly, but you can arrange this yourself.

Cascading replication is a feature in PG 9.2, released next year.
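One way to "arrange this yourself" (my assumption, not Simon's prescription) is to archive on the primary and mirror that archive to the replica site; the paths and hostname below are hypothetical:

```bash
# Mirror the primary's WAL archive to the replica site over the WAN
rsync -a /var/lib/pgsql/wal_archive/ replica.example.com:/var/lib/pgsql/wal_archive/
```

Run from cron, this gives the replica site its own copy of the segments needed for PITR there.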

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] how could duplicate pkey exist in psql?

2011-11-21 Thread Adrian Klaver
On Monday, November 21, 2011 6:39:55 am Yan Chunlu wrote:
> got it.  thank you very much for your help.   I found out this problem too
> late, and there is no backup.
> 
> luckily there was not too much data for this, and my app keeps running
> without error.
> 
> I am not sure if they are related but I could not use pg_restore to import
> data dumped by "pg_dump -Fc";
> 
> pg_restore will print some error message about "duplicate primary key", and
> the table is empty. no data has been imported.
> 
> pg_restore is supposed to import the data and ignore the errors. Does
> anyone have a similar problem?
> 

pg_restore may ignore the error and keep on going, but the database will not.
In other words, when pg_restore receives the error it will continue on to the
next item (unless you have the -e switch on). As far as the server (database)
is concerned a duplicate primary key is still an error and the data will not
be loaded.
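Before re-running pg_restore, the offending duplicates have to be found and removed on the target. A hypothetical check against the index definition quoted earlier in the thread:

```sql
-- Rows that violate the (thing_id, key) primary key
SELECT thing_id, key, count(*)
FROM diggcontent_data_account
GROUP BY thing_id, key
HAVING count(*) > 1;
```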


-- 
Adrian Klaver
adrian.kla...@gmail.com



[GENERAL] How to write a own parser for full text search

2011-11-21 Thread Antonio Franzoso

Hi all,
I have installed a PostgreSQL server on a Windows Server 2008 machine and I
need to write a more complex parser than the default one in PostgreSQL.
Searching on the internet I found this example:

http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/HOWTO-parser-tsearch2.html
where the parser is written in the C language. I would like to know whether I
can write the parser in another language (Java; I'm very poor at C) and, if I
can, what kind of file would contain the functions of the parser (a dll
file, a class file)?
The last question: that example uses some functions
(PG_GETARG_POINTER, PG_GETARG_INT32, and so on) that seem to be
PostgreSQL-specific; which functions can I use in Java?
Maybe these are banal questions, but I'm a newbie and I couldn't find
more resources on the Internet! :)


Thanks in advance,
Antonio.






Re: [GENERAL] How to write a own parser for full text search

2011-11-21 Thread John R Pierce

On 11/21/11 1:51 AM, Antonio Franzoso wrote:


I have installed a PostgreSQL server on a Windows Server 2008 machine and
I need to write a more complex parser than the default one in
PostgreSQL. Searching on the internet I found this example:
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/HOWTO-parser-tsearch2.html
where the parser is written in the C language. I would like to know whether
I can write the parser in another language (Java; I'm very poor at C)
and, if I can, what kind of file would contain the functions of the parser
(a dll file, a class file)?
The last question: that example uses some functions
(PG_GETARG_POINTER, PG_GETARG_INT32, and so on) that seem to be
PostgreSQL-specific; which functions can I use in Java?




you'd better learn C if you want to do this.   Those functions are part
of the tsearch parser API, which is a machine-level API. Java
would be a huge uphill battle of fighting JNI to get it working, and
then you'd be attaching the whole huge bloated JVM to the side of
PostgreSQL to implement this.




--
john r pierceN 37, W 122
santa cruz ca mid-left coast




Re: [GENERAL] 9.1.1 build failure : postgres link fails

2011-11-21 Thread Rob Sargent
This may be a duplicate response.

On 11/20/2011 11:05 AM, Tom Lane wrote:
> Rob Sargent  writes:
>> On 11/20/2011 09:24 AM, Tom Lane wrote:
>>> It appears that on Ubuntu, libbsd defines those symbols, which confuses
>>> configure into supposing that they're provided by libc, and then the
>>> link fails because libbsd isn't actually linked into the postmaster.
>>> The question is what's pulling in libbsd though.  In the previous report
>>> it came via libedit, which you're not using.  I'd try looking in the
>>> config.log file to see what it was linking in the test that decided
>>> setproctitle was available, and then using ldd on each of those
>>> libraries to see which one(s) require libbsd.
>> Will do.  Then there's always trying Ubuntu-11?
> Couldn't say.  But re-reading this, I wonder if maybe you *are* using
> libedit.  Have you got readline installed?  If not, configure will try
> libedit as second choice ... so maybe the best fix is to install
> readline (and don't forget readline-devel or local equivalent).
>
>   regards, tom lane

I built readline-6.2 from source, then had to manually symlink
perl.so.5.10 to perl.so

and SUCCESS!

Thanks for the readline pointer, Tom.  The sort of thing that would have
taken me forever to get through.

rjs




[GENERAL] Selective backup script

2011-11-21 Thread Mike Blackwell
I've seen a couple backup scripts that query the metadata to determine the
list of databases to back up.  I like this approach, but have a few
databases which don't get backed up for various reasons, e.g. testing
databases which we'd prefer to recreate on the off chance we lose them,
rather than use up time/disk for backups.  Might there be a way to
tag those databases somehow so the backup script knows to skip them?  I'd
rather not hard-code the list in the script.

Thoughts?

__
*Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley*
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
http://www.rrdonnelley.com





Re: [GENERAL] Selective backup script

2011-11-21 Thread Thomas Kellerer

Mike Blackwell, 21.11.2011 17:50:

I've seen a couple backup scripts that query the metadata to
determine the list of databases to back up.  I like this approach,
but have a few databases which don't get backed up for various
reasons, e.g. testing databases which we'd prefer to recreate on the
off chance we lose them, rather than use up time/disk for
backup.  Might there be a way to tag those databases somehow so the
backup script knows to skip them?  I'd rather not hard code the list
in the script.

Thoughts?


What about using the comments on the database to control this?

For those databases that you need to back up, run something like:

comment on database postgres is 'do_backup';

Then in the shell script that retrieves the databases to backup, you could do 
something like this:

select db.datname
from pg_database db
  join pg_shdescription dc on dc.objoid = db.oid
where dc.description = 'do_backup';


Or if you have more databases to back up than not, then maybe flipping the
logic is also an option:

comment on database notimportant is 'no_backup';

select db.datname, dc.description
from pg_database db
  left join pg_shdescription dc on dc.objoid = db.oid
where dc.description is distinct from 'no_backup';
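A driver script around that query might look like the sketch below. It is an assumption on my part, not something from the thread: it needs a reachable cluster and a writable /backups directory (hypothetical path), and it additionally skips template databases:

```bash
#!/bin/sh
# Dump every database whose comment is not 'no_backup'
for db in $(psql -At -c "select db.datname
                         from pg_database db
                           left join pg_shdescription dc on dc.objoid = db.oid
                         where dc.description is distinct from 'no_backup'
                           and not db.datistemplate"); do
    pg_dump -Fc -f "/backups/${db}.dump" "$db"
done
```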

Hope this helps.

Thomas




Re: [GENERAL] Selective backup script

2011-11-21 Thread Michael Glaesemann

On Nov 21, 2011, at 11:50, Mike Blackwell wrote:

> Might there be a way to
> tag those databases somehow so the backup script knows to skip them?

Add a table to each database that can be queried by the backup script to store
this additional metadata?

Michael Glaesemann
grzm seespotcode net






Re: [GENERAL] Selective backup script

2011-11-21 Thread Mike Blackwell
> What about using the comments on the database to control this?

That sounds close, though the comments are already being used for general
descriptions.  I suppose it wouldn't hurt to add 'no_backup' to the
existing comments where appropriate.  I was hoping maybe I'd missed a
'user-defined database level metadata' field somewhere. ^_^


Thanks,

__
*Mike Blackwell | Technical Analyst, Distribution Services/Rollout
Management | RR Donnelley*
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
http://www.rrdonnelley.com


Re: [GENERAL] Installed. Now what?

2011-11-21 Thread Steve Crawford

On 11/20/2011 06:21 PM, Phoenix Kiula wrote:

*SNIP*

Forgive me if I accidentally rehash something already discussed...

Divide and conquer:

First, verify that you can connect directly to your database *using 
TCP*, i.e. "psql -h 127.0.0.1 -U youruser -p 5432 yourdb". If you are 
using psql without -h for this test you will use unix sockets. We need 
to be sure that you can connect in the same way that pgbouncer will 
connect. Note that pg_hba.conf can have different settings for socket 
connections than for TCP connections and you must be sure that 
postgresql.conf is set to listen for TCP connections on localhost.


Next, your settings are too grand for learning/testing purposes. Set the 
pool size in pgbouncer to something small - perhaps 5 for testing 
purposes and make sure that you actually have 5 available connections on 
the database. Note, pgbouncer will not actually make a server connection 
until it gets a client request. But it will keep that connection open to 
serve the next request. And it won't make a second server connection 
till it actually needs two simultaneous connections so you won't see a 
sudden flood of connections when you start pgbouncer.


Now try using psql to connect to pgbouncer - again using -h 127.0.0.1 as 
you were doing. If it doesn't work, check pgbouncer's log and PostgreSQL's.


Once you get an actual working pgbouncer connection, work your settings 
upward. I'd probably start with something like 30-40 for the pool size 
and perhaps 60-80 clients.


The information in the pgbouncer pseudo-database is helpful, here (psql 
-U youradminuser -h 127.0.0.1 pgbouncer).


The "SHOW HELP;" statement will get you started but "show lists" will 
give you an idea of your utilization:

show lists;
 list  | items
---+---
 databases | 2
 users | 2
 pools | 2
 free_clients  | 0
 used_clients  | 1
 login_clients | 0
 free_servers  | 1
 used_servers  | 0

If, after watching for a while, you see you always have lots of free 
servers then you can increase the number of clients connecting to 
pg_bouncer. If you are running close to the edge or running out of 
server connections altogether, you will need to decrease clients or 
increase the pool size.


Cheers,
Steve




[GENERAL] Supply Chain Calcs

2011-11-21 Thread Jason Long
I have a custom inventory system that runs on PG 9.1.  I realize this is
not a Postgres-specific question, but I respect the skills of the members of
this list and was hoping for some general advice.

The system is not based on any ERP and was built from scratch.

My customer requested some supply forecasting to see when there will be a
surplus or shortage of parts based on delivery dates and production dates
that will require these items.

I need to identify which items will run out and when based on current
available inventory, orders, delivery dates, and production dates.

Would someone please give me some keywords to google so I can track down a
few different ways of doing these calculations?

Or links to any examples.  I am really not trying to reinvent the wheel
here.


Re: [GENERAL] Supply Chain Calcs

2011-11-21 Thread Henry Drexler
google 'weeks of supply'

On Mon, Nov 21, 2011 at 1:18 PM, Jason Long
wrote:

> I have a custom inventory system that runs on PG 9.1.  I realize this is
> not a Postgres-specific question, but I respect the skills of the members of
> this list and was hoping for some general advice.
>
> The system is not based on any ERP and was built from scratch.
>
> My customer requested some supply forecasting to see when there will be a
> surplus or shortage of parts based on delivery dates and production dates
> that will require these items.
>
> I need to identify which items will run out and when based on current
> available inventory, orders, delivery dates, and production dates.
>
> Would someone please give me some keywords to google so I can track down a
> few different ways of doing these calculations?
>
> Or links to any examples.  I am really not trying to reinvent the wheel
> here.
>


Re: [GENERAL] Supply Chain Calcs

2011-11-21 Thread Peter Geoghegan
On 21 November 2011 18:18, Jason Long  wrote:
> My customer requested some supply forecasting to see when there will be a
> surplus or shortage of parts based on delivery dates and production dates
> that will require these items.

Take a look at http://en.wikipedia.org/wiki/Newsvendor_model

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services



[GENERAL] Exporting 1 text column from all rows from a table to a file per row?

2011-11-21 Thread Joost Kraaijeveld
Hi,

Is it possible, and if so how, to export a single column of a table into
a separate file per row? I have a table with ~21000 rows that have a
column "body1" containing ASCII text and I want to have 21000 separate
ASCII files, each containing that column "body1". The name of the file
does not matter, although it would be nice if they had the extension
"txt".

TIA

-- 
Groeten,

Joost Kraaijeveld
Askesis B.V.
Molukkenstraat 14
6524NB Nijmegen
tel: 024-3888063 / 06-51855277
fax: 024-3608416
web: www.askesis.nl




Re: [GENERAL] Exporting 1 text column from all rows from a table to a file per row?

2011-11-21 Thread Thom Brown
On 21 November 2011 19:10, Joost Kraaijeveld  wrote:
> Hi,
>
> Is it possible, and if so how, to export a single column of a table into
> a separate file per row? I have a table with ~21000 rows that have a
> column "body1" containing ASCII text and I want to have 21000 separate
> ASCII files, each containing that column "body1". The name of the file
> does not matter, although it would be nice if they had the extension
> "txt".

Does the data contain newlines?  If not, you can just export it to a
single file then use:

split -l 1 exportedfile.txt
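split names its output files with alphabetic suffixes; a small self-contained demonstration (sample data, no database involved) that also adds the .txt extension afterwards:

```shell
# Create a sample export with one row per line
printf 'row one\nrow two\nrow three\n' > exported.txt

# One file per line, with the prefix row_ (creates row_aa, row_ab, row_ac)
split -l 1 exported.txt row_

# Give every piece a .txt extension
for f in row_aa row_ab row_ac; do mv "$f" "$f.txt"; done

ls row_*.txt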

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [GENERAL] Supply Chain Calcs

2011-11-21 Thread Jason Long
Thanks for the reply.  Weeks of Supply (WOS) is not exactly what I am
looking for, but might lead to a solution.

Here is a better description of the problem.

I know the following:

Delivery dates and quantities for items on order or in transit.
A manager will manually forecast what the pending items will be customized
with, and a date when this will happen.
This might very well change and will not be based on any historical
information.  There is also a 30-45 day delay in delivery, since all
components come from overseas and must come via ship.
This is meant to point out where manually forecasted production will fail
due to a lack of specific parts.

On Mon, Nov 21, 2011 at 12:23 PM, Henry Drexler  wrote:

> google 'weeks of supply'
>
>
> On Mon, Nov 21, 2011 at 1:18 PM, Jason Long <
> mailing.li...@octgsoftware.com> wrote:
>
>> I have a custom inventory system that runs on PG 9.1.  I realize this is
>> not a Postgres-specific question, but I respect the skills of the members of
>> this list and was hoping for some general advice.
>>
>> The system is not based on any ERP and was built from scratch.
>>
>> My customer requested some supply forecasting to see when there will be a
>> surplus or shortage of parts based on delivery dates and production dates
>> that will require these items.
>>
>> I need to identify which items will run out and when based on current
>> available inventory, orders, delivery dates, and production dates.
>>
>> Would someone please give me some keywords to google so I can track down
>> a few different ways of doing these calculations?
>>
>> Or links to any examples.  I am really not trying to reinvent the wheel
>> here.
>>
>
>


Re: [GENERAL] Exporting 1 text column from all rows from a table to a file per row?

2011-11-21 Thread Jerry Sievers
Joost Kraaijeveld  writes:

> Hi,
>
> Is it possible, and if so how, to export a single column of a table
> into a separate file per row? I have a table with ~21000 rows that
> have a column "body1" containing ASCII text and I want to have 21000
> separate ASCII files, each containing that column "body1". The name
> of the file does not matter, although it would be nice if they had
> the extension "txt".

run a script like this with psql and save the output to a file that you
will later run sh on...

psql -Atqf thatscript >sh_commands

where thatscript contains:

create temp sequence s;

select 'cat <<! >' || nextval('s')::text || '.txt
' || somefield || '
!'
from sometable;

> TIA
>
> -- 
> Groeten,
>
> Joost Kraaijeveld
> Askesis B.V.
> Molukkenstraat 14
> 6524NB Nijmegen
> tel: 024-3888063 / 06-51855277
> fax: 024-3608416
> web: www.askesis.nl
>
>
>

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 305.321.1144



Re: [GENERAL] Exporting 1 text column from all rows from a table to a file per row?

2011-11-21 Thread Ondrej Ivanič
Hi,

On 22 November 2011 06:10, Joost Kraaijeveld  wrote:
> Is it possible, and if so how, to export a single column of a table into
> a separate file per row? I have a table with ~21000 rows that have a
> column "body1" containing ASCII text and I want to have 21000 separate
> ASCII files, each containing that column "body1". The name of the file
> does not matter, although it would be nice if they had the extension
> "txt".

Something like this could do the job:

for i in `psql -A -t -c 'select id from T'`; do psql -A -t -c "select
body1 from T where id = $i" -o $i.txt; done;

where 'T' is your table and 'id' is primary column in that table. I've
omitted connection options (-h -U ...) from psql commands.

-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)



[GENERAL] Installing postgresql on windows 7

2011-11-21 Thread Twaha Daudi
Hello all,
I have tried to install PostgreSQL on a 64-bit Acer machine running Windows 7.
Both PostgreSQL 8.4 and 9.0 give errors:
8.4 gives "An error occurred executing the Microsoft VC++ runtime installer",
while 9.0 gives "Unable to write inside TEMP environment variable".

I have checked WSH, which is enabled, and tried running as administrator,
without success.
What could be the problem?
thanks in advance
huu


Re: [GENERAL] How to write a own parser for full text search

2011-11-21 Thread Gavin Flower

On 21/11/11 22:51, Antonio Franzoso wrote:

Hi all,
I have installed PostgreSQL server on a Windows Server 2008 machine and 
I need to write a more complex parser than the default one in 
PostgreSQL. Searching on the internet I found this example:
http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/HOWTO-parser-tsearch2.html 

where the parser is written in C. I would like to know whether I can 
write the parser in another language (Java; my C is very poor), and if 
so, what kind of file would contain the parser's functions (a DLL file, 
a class file)?
The last question is: that example uses some functions 
(PG_GETARG_POINTER, PG_GETARG_INT32, and so on) that seem to be 
PostgreSQL-specific; which functions can I use in Java?
These may be banal questions, but I'm a newbie and I haven't found 
more resources on the Internet! :)


Thanks in advance,
Antonio.





Hmm...

I am a Java developer, and Java is my favourite language.

However, I would never recommend writing this kind of thing in Java, as 
C is much more appropriate.  So if I did not know C, I would learn C for 
this.


Any serious developer should be reasonably fluent in AT LEAST 2 computer 
languages, and C is quite widely used.


More importantly, see if you can run PostgreSQL on Linux or a variety of 
Unix, as Microsoft is not ideal as an OS for performance and other 
reasons (to put it diplomatically).



Regards,
Gavin





Re: [GENERAL] Selective backup script

2011-11-21 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160


> Might there be a way to tag those databases somehow so the backup 
> script knows to skip them?  I'd rather not hard code the list in 
> the script.

Give them a unique connection limit, higher than max_connections, 
e.g. ALTER DATABASE testdb1 CONNECTION LIMIT 9218;

Not as intuitive as a comment, but a lot more visible that something 
is different about the database.

- -- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 20211715
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iEYEAREDAAYFAk7K0CwACgkQvJuQZxSWSshN5wCgsejmvDEzo3yBz3MBRy9OGYKI
BYMAoPLDjEXOJOLh5HSWtCLHhhfNu1Zn
=AP/0
-END PGP SIGNATURE-
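For the backup script itself, the matching query could look like this (a sketch, assuming the sentinel limit 9218 from above; pg_database.datconnlimit is -1 when no limit is set):

```sql
-- Databases to dump: skip templates and anything tagged with the sentinel limit.
SELECT datname
FROM pg_database
WHERE datallowconn            -- excludes template0
  AND NOT datistemplate
  AND datconnlimit <> 9218;
```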





[GENERAL] Select duplicated values

2011-11-21 Thread jeffrey
Lets say that the primary key column is A.  I am trying to select all
the rows with duplicated values in columns B, C, and D.

I am not too experienced in SQL syntax, and I've used the following:
select A from table_name where B+C+D in (select B+C+D from table_name
group by B+C+D having count(*)>1  )

I'm looking for a better way, since I am just adding the three columns
together right now.

Jeffrey



[GENERAL] successive select statements

2011-11-21 Thread david.sahagian
In postgresql.org/docs/9.1/static/transaction-iso.html I read

13.2.1. Read Committed Isolation Level
. . . two successive SELECT commands can see different data, even though they 
are within a single transaction . . .

Please consider this code being executed by postgres:
= = = = = = = = = =
select some_int from tableX
union all
select another_int from tableY
;
= = = = = = = = = =
Are these two select statements considered "successive" when isolation == Read 
Committed ?
Or are they instead guaranteed to see the same snapshot of the database ?


Also, please consider this code being executed by postgres:
= = = = = = = = = =
CREATE Or Replace FUNCTION fx_one() RETURNS  setof integer AS
$$
select some_int from tableX;
$$ LANGUAGE 'sql';

CREATE Or Replace FUNCTION fx_two() RETURNS  setof integer AS
$$
select another_int from tableY;
$$ LANGUAGE 'sql';

select * from fx_one()
union all
select * from fx_two()
;
= = = = = = = = = =
Are the two select statements inside fx_one and fx_two considered "successive" 
when isolation == Read Committed ?
Or are they instead guaranteed to see the same snapshot of the database ?


Thanks,
-dvs-







Re: [GENERAL] Select duplicated values

2011-11-21 Thread Edson Richter

I think you should not "add columns" but concatenate them.

Instead of

select A from table_name where B+C+D in (select B+C+D from table_name
group by B+C+D having count(*)>1  )

use "B || '/' || C || '/' || D":

select A from table_name where B || '/' || C || '/' || D in (select B || '/' || C || '/' || D from table_name
group by 1 having count(*)>1  )

You may need to apply some conversion (cast) depending on the data type
and output format.

Regards,

Edson Carlos Ericksson Richter
SimKorp Informática Ltda
Fone: (51) 3366-7964
Celular: (51) 9318-9766 / (51) 8585-0796

Em 21-11-2011 20:23, jeffrey escreveu:

  Lets say that the primary key column is A.  I am trying to select all
the rows with duplicated values in columns B, C, and D.

I am not too experienced in SQL syntax, and I've used the following:
select A from table_name where B+C+D in (select B+C+D from table_name
group by B+C+D having count(*)>1  )

I'm looking for a better way, since I am just adding the three columns
together right now.

Jeffrey



  



Re: [GENERAL] Select duplicated values

2011-11-21 Thread Pete Yunker
Another option is to perform a self-join on columns B, C, and D (filtering out 
the 'same' record where a=a) instead of using the sub-select.  This may yield 
better performance depending on the size of the table.  Also, I don't believe 
the concatenation / sub-select will work if B, C, and D are all NULL.

SELECT x.a
FROM my_table x
INNER JOIN my_table y
ON ((x.b = y.b) or (x.b IS NULL AND y.b IS NULL))
AND ((x.c = y.c) or (x.c IS NULL AND y.c IS NULL))
AND ((x.d = y.d) or (x.d IS NULL AND y.d IS NULL))
AND x.a <> y.a

Another alternative to handling the NULL values is to COALESCE them to a value 
that would never exist in columns B, C, or D.  I don't know the datatypes you 
are using, so I'll just use 'junk' for now.

SELECT x.a
FROM my_table x
INNER JOIN my_table y
ON COALESCE(x.b,'junk') = COALESCE(y.b,'junk')
AND COALESCE(x.c,'junk') = COALESCE(y.c,'junk')
AND COALESCE(x.d,'junk') = COALESCE(y.d,'junk')
AND x.a <> y.a
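For what it's worth, PostgreSQL also has IS NOT DISTINCT FROM, which treats two NULLs as equal and makes the NULL handling explicit (the same self-join as above, just a sketch; note the planner may not use indexes as readily as with plain equality):

```sql
SELECT x.a
FROM my_table x
INNER JOIN my_table y
  ON  x.b IS NOT DISTINCT FROM y.b
  AND x.c IS NOT DISTINCT FROM y.c
  AND x.d IS NOT DISTINCT FROM y.d
  AND x.a <> y.a;
```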


---
Pete Yunker
Vice President of Data Products
Home Junction, Inc.

On Nov 21, 2011, at 5:23 PM, jeffrey wrote:

> Lets say that the primary key column is A.  I am trying to select all
> the rows with duplicated values in columns B, C, and D.
> 
> I am not too experienced in SQL syntax, and I've used the following:
> select A from table_name where B+C+D in (select B+C+D from table_name
> group by B+C+D having count(*)>1  )
> 
> I'm looking for a better way, since I am just adding the three columns
> together right now.
> 
> Jeffrey
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



[GENERAL] stored function data structures - difficulty

2011-11-21 Thread J.V.


I cannot find a way to programatically:
1.  Given a table name, find all foreign key fields in the given 
table by field name (column name)
2.  Given a single foreign key field name, programatically look up 
the corresponding reference table name and the reference primary key field


so have thought of simply hard coding this (for 100 tables).

What Kind of data structure could I use that is available to me that 
would hold as the key the table name and from there be able to iterate 
through and get all foreign keys by field name and from there another 
inner loop that would give me another key/value pair of the table name 
(key) and the primary key (value) that corresponds to that foreign key?


I want to hard code all of this information into a data structure and 
iterate through at some point in another function.


Instead of discovering this programatically, I can manually look at each 
table / schema diagram and hard code it, but I really need one super 
structure that will hold as keys every table in the schema and be able 
to drill down that that tables foreign keys and from there further drill 
down to get the table name, primary key field in that table.


I have seen a number of structures that might work, but cannot find an 
example on how to actually use for what I need to do.  If you do have an 
idea of a structure, it would be great and awesome if I could be pointed 
to an actual working example that I could test in a sandbox first to 
understand how it works.


thanks


J.V.




Re: [GENERAL] stored function data structures - difficulty

2011-11-21 Thread Mike Blackwell
Take a look at
http://www.postgresql.org/docs/9.1/interactive/information-schema.html and
http://www.postgresql.org/docs/9.1/interactive/catalogs.html.  I think
you'll find what you need.  The former is relatively stable between
releases, while the latter has more detail but is subject to change.

I have use the system catalogs for several one time projects related to
foreign keys, including checking which fks have associated indexes defined.

__________
Mike Blackwell | Technical Analyst, Distribution Services/Rollout Management | RR Donnelley
1750 Wallace Ave | St Charles, IL 60174-3401
Office: 630.313.7818
mike.blackw...@rrd.com
http://www.rrdonnelley.com


On Mon, Nov 21, 2011 at 17:30, J.V.  wrote:

>
> I cannot find a way to programatically:
>1.  Given a table name, find all foreign key fields in the given table
> by field name (column name)
>2.  Given a single foreign key field name, programatically look up the
> corresponding reference table name and the reference primary key field
>
> so have thought of simply hard coding this (for 100 tables).
>
> What Kind of data structure could I use that is available to me that would
> hold as the key the table name and from there be able to iterate through
> and get all foreign keys by field name and from there another inner loop
> that would give me another key/value pair of the table name (key) and the
> primary key (value) that corresponds to that foreign key?
>
> I want to hard code all of this information into a data structure and
> iterate through at some point in another function.
>
> Instead of discovering this programatically, I can manually look at each
> table / schema diagram and hard code it, but I really need one super
> structure that will hold as keys every table in the schema and be able to
> drill down that that tables foreign keys and from there further drill down
> to get the table name, primary key field in that table.
>
> I have seen a number of structures that might work, but cannot find an
> example on how to actually use for what I need to do.  If you do have an
> idea of a structure, it would be great and awesome if I could be pointed to
> an actual working example that I could test in a sandbox first to
> understand how it works.
>
> thanks
>
>
> J.V.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/**mailpref/pgsql-general
>
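As a starting point, here is a sketch of such a query against the information schema (hedged: for multi-column foreign keys this simple join can mis-pair columns, and pg_constraint in the system catalogs is more precise; 'my_table' is a hypothetical table name):

```sql
-- For a given table, list each FK column and the table/column it references.
SELECT kcu.column_name,
       ccu.table_name  AS referenced_table,
       ccu.column_name AS referenced_column
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
  ON  kcu.constraint_name   = tc.constraint_name
  AND kcu.constraint_schema = tc.constraint_schema
JOIN information_schema.constraint_column_usage ccu
  ON  ccu.constraint_name   = tc.constraint_name
  AND ccu.constraint_schema = tc.constraint_schema
WHERE tc.constraint_type = 'FOREIGN KEY'
  AND tc.table_name = 'my_table';
```

Iterating over this result set at runtime would also remove the need to hard-code the 100 tables at all.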


Re: [GENERAL] how could duplicate pkey exist in psql?

2011-11-21 Thread Yan Chunlu
So the database will stop accepting the remaining data after it detects an
error? That means that while using pg_restore no error is allowed to happen,
otherwise the database stops accepting data and the import fails.

I found only one record in the log:

 duplicate key value violates unique constraint "account_pkey"

Does that mean one duplicate record will prevent all other records from
being imported?

On Mon, Nov 21, 2011 at 10:55 PM, Adrian Klaver wrote:

> On Monday, November 21, 2011 6:39:55 am Yan Chunlu wrote:
> > got it.  thank you very much for you help.   I found out this problem too
> > late, and there is no backup.
> >
> > luckily there was not too much data for this, and my app keeps running
> > without error.
> >
> > I am not sure if they are related but I could not use pg_restore to
> import
> > data dumped by "pg_dump -Fc";
> >
> > pg_restore will print some error message about "duplicate primary key",
> and
> > the table is empty. no data has been imported.
> >
> > pg_restore supposed to import the data and ignore the errors.does any
> > one have the similar problem?
> >
>
> pg_restore may ignore the error and keep on going but the database will
> not. In
> other words when pg_restore receives the error it will continue on to the
> next
> item (unless you have the -e switch on). As far as the server(database) is
> concerned duplicate primary key is still an error and the data will not be
> loaded.
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] Select duplicated values

2011-11-21 Thread David Johnston
On Nov 21, 2011, at 17:23, jeffrey  wrote:

> Lets say that the primary key column is A.  I am trying to select all
> the rows with duplicated values in columns B, C, and D.
> 
> I am not too experienced in SQL syntax, and I've used the following:
> select A from table_name where B+C+D in (select B+C+D from table_name
> group by B+C+D having count(*)>1  )
> 
> I'm looking for a better way, since I am just adding the three columns
> together right now.
> 
> Jeffrey
> 

Do you really want these to evaluate to equal (B, C, D)?

(1, 0, 1) = (0, 1, 1); they both sum to 2 but both B and C are unequal

There is no need to do anything other than list each field individually in both 
the SELECT and the GROUP BY.

SELECT B, C, D
FROM table
GROUP BY B, C, D
HAVING count(*) > 1;

Whatever possessed you to consider that you had to add them to get what you 
need? (assuming you don't want my examples to be considered equal) Besides the 
fact it will give you WRONG RESULTS any decent set of GROUP BY examples will 
show you that you can list/use multiple fields in a grouping query.

Now, if you truly want the two samples above to evaluate to equal then you do 
need to do some form of consolidation (like the adding in your example).  That, 
however, would be very unusual.

David J.





Re: [GENERAL] Select duplicated values

2011-11-21 Thread David Johnston
On Nov 21, 2011, at 21:11, David Johnston  wrote:

> On Nov 21, 2011, at 17:23, jeffrey  wrote:
> 
>> Lets say that the primary key column is A.  I am trying to select all
>> the rows with duplicated values in columns B, C, and D.
>> 
>> I am not too experienced in SQL syntax, and I've used the following:
>> select A from table_name where B+C+D in (select B+C+D from table_name
>> group by B+C+D having count(*)>1  )
>> 
>> I'm looking for a better way, since I am just adding the three columns
>> together right now.
>> 
>> Jeffrey
>> 
> 
> Do you really want these to evaluate to equal (B, C, D)?
> 
> (1, 0, 1) = (0, 1, 1); they both sum to 2 but both B and C are unequal
> 
> There is no need to do anything other than list each field individually in 
> both the SELECT and the GROUP BY.
> 
> SELECT B, C, D
> FROM table
> GROUP BY B, C, D
> HAVING count(*) > 1;
> 
> Whatever possessed you to consider that you had to add them to get what you 
> need? (assuming you don't want my examples to be considered equal) Besides 
> the fact it will give you WRONG RESULTS any decent set of GROUP BY examples 
> will show you that you can list/use multiple fields in a grouping query.
> 
> Now, if you truly want the two samples above to evaluate to equal then you do 
> need to do some form of consolidation (like the adding in your example).  
> That, however, would be very unusual.
> 
> David J.
> 
> 

So, in a sub-select you would do:

... WHERE ROW(B, C, D) IN (SELECT B, C, D FROM ... GROUP BY B, C, D)

Note that the word ROW is optional ( but not the parentheses )
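Putting the two pieces together for the original question (an untested sketch; it returns the key A of every row whose (B, C, D) combination occurs more than once — rows where any of B, C, D is NULL will not match, per SQL's IN semantics):

```sql
SELECT A
FROM table_name
WHERE (B, C, D) IN (SELECT B, C, D
                    FROM table_name
                    GROUP BY B, C, D
                    HAVING count(*) > 1);
```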

David J.





Re: [GENERAL] successive select statements

2011-11-21 Thread Tom Lane
 writes:
> In postgresql.org/docs/9.1/static/transaction-iso.html I read
> 13.2.1. Read Committed Isolation Level
> . . . two successive SELECT commands can see different data, even though they 
> are within a single transaction . . .

> Please consider this code being executed by postgres:
> = = = = = = = = = =
> select some_int from tableX
> union all
> select another_int from tableY
> ;
> = = = = = = = = = =
> Are these two select statements considered "successive" when isolation == 
> Read Committed ?

No, the UNION is a single command.  The fact that the word "SELECT"
occurs twice within it is a SQL syntactic artifact --- it doesn't
convert it into two commands.

regards, tom lane



Re: [GENERAL] how could duplicate pkey exist in psql?

2011-11-21 Thread Adrian Klaver
On Monday, November 21, 2011 4:53:21 pm Yan Chunlu wrote:
> and database will stop receiving the following data after detected an
> error?
> that means while using pg_restore, no error allowed to happen, otherwise
> the database will stop receiving data and the import will fail.
> 
> I found only one record in psql's log:
> 
>  duplicate key value violates unique constraint "account_pkey"
> 
> does that means one duplicate record will prevent all other records to
> import?

For that table yes. Though if that table is the parent in FK relationships with 
other tables, those tables will fail to import also because the keys they refer 
to do not exist. 

To get around this you have several options:
1) Find the duplicate entry(s) in the original table and eliminate them before 
dumping.
2) Dump the table by itself to a plain text format and eliminate the 
duplicate(s) in the plain text file before restoring.
3) By default pg_dump uses COPY to load data into tables. As you have found out, 
that runs as a single transaction and rolls back if there is an error. You can 
specify --inserts on the pg_dump command line to get it to output an INSERT for 
each row. The upside is that each INSERT is a separate transaction, so one bad 
row does not abort the rest. The downside is that if there is a lot of data it 
will take a long time to load, for the same reason.
4) Use pgloader (http://pgfoundry.org/projects/pgloader/). It is a Python 
program that 'manages' COPY. It will kick out bad rows and keep loading data.


-- 
Adrian Klaver
adrian.kla...@gmail.com



[GENERAL] Explicitly inserting NULL values into NOT NULL DEFAULT 0 columns

2011-11-21 Thread Tanmay Patel
If I insert a NULL value explicitly into a column declared to be NOT NULL
DEFAULT 0 in postgreSQL 8.4 the column ends up with the default value. If I
do the same in postgreSQL 9.0 I get an error about how I am inserting a
null value into a NOT NULL column.

i.e.: insert into table1 (column1, column2) values (0, NULL); where column2
is of type integer with attributes NOT NULL DEFAULT 0

In both cases if I just don't mention the column with these attributes the
value stored is the default value.

i.e.: insert into table1(column1) values (0); where column2 is of type
integer with attributes NOT NULL DEFAULT 0

I looked through all the release notes between the versions in question and
can find nothing mentioning this change. When did this change occur, and
can I choose to keep the behavior as it was in postgreSQL 8.4?

Thanks,

Tanmay 


Re: [GENERAL] stored function data structures - difficulty

2011-11-21 Thread Pavel Stehule
Hello

2011/11/22 J.V. :
>
> I cannot find a way to programatically:
>    1.  Given a table name, find all foreign key fields in the given table by
> field name (column name)
>    2.  Given a single foreign key field name, programatically look up the
> corresponding reference table name and the reference primary key field
>
> so have thought of simply hard coding this (for 100 tables).
>
> What Kind of data structure could I use that is available to me that would
> hold as the key the table name and from there be able to iterate through and
> get all foreign keys by field name and from there another inner loop that
> would give me another key/value pair of the table name (key) and the primary
> key (value) that corresponds to that foreign key?
>
> I want to hard code all of this information into a data structure and
> iterate through at some point in another function.
>
> Instead of discovering this programatically, I can manually look at each
> table / schema diagram and hard code it, but I really need one super
> structure that will hold as keys every table in the schema and be able to
> drill down that that tables foreign keys and from there further drill down
> to get the table name, primary key field in that table.
>
> I have seen a number of structures that might work, but cannot find an
> example on how to actually use for what I need to do.  If you do have an
> idea of a structure, it would be great and awesome if I could be pointed to
> an actual working example that I could test in a sandbox first to understand
> how it works.
>

psql has a nice feature that can help with orientation in the system catalogs.

If I need a query that describes some database object, I need to know the
adequate psql meta-command. Run psql with the -E option, and psql will show
the queries that were necessary to process each meta-command:

[pavel@nemesis ~]$ psql -E postgres
psql (9.2devel)
Type "help" for help.

postgres=# \d a1
* QUERY **
SELECT c.oid,
  n.nspname,
  c.relname
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname ~ '^(a1)$'
  AND pg_catalog.pg_table_is_visible(c.oid)
ORDER BY 2, 3;
**

* QUERY **
SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules,
c.relhastriggers, c.relhasoids, '', c.reltablespace, CASE WHEN
c.reloftype = 0 THEN '' ELSE
c.reloftype::pg_catalog.regtype::pg_catalog.text END, c.relpersistence
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid)
WHERE c.oid = '146989';
**

* QUERY **
SELECT a.attname,
  pg_catalog.format_type(a.atttypid, a.atttypmod),
  (SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128)
   FROM pg_catalog.pg_attrdef d
   WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef),
  a.attnotnull, a.attnum,
  (SELECT c.collname FROM pg_catalog.pg_collation c, pg_catalog.pg_type t
   WHERE c.oid = a.attcollation AND t.oid = a.atttypid AND
a.attcollation <> t.typcollation) AS attcollation,
  NULL AS indexdef,
  NULL AS attfdwoptions
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = '146989' AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;
**

* QUERY **
SELECT c2.relname, i.indisprimary, i.indisunique, i.indisclustered,
i.indisvalid, pg_catalog.pg_get_indexdef(i.indexrelid, 0, true),
  pg_catalog.pg_get_constraintdef(con.oid, true), contype,
condeferrable, condeferred, c2.reltablespace
FROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i
  LEFT JOIN pg_catalog.pg_constraint con ON (conrelid = i.indrelid AND
conindid = i.indexrelid AND contype IN ('p','u','x'))
WHERE c.oid = '146989' AND c.oid = i.indrelid AND i.indexrelid = c2.oid
ORDER BY i.indisprimary DESC, i.indisunique DESC, c2.relname;
**

* QUERY **
SELECT conname,
  pg_catalog.pg_get_constraintdef(r.oid, true) as condef
FROM pg_catalog.pg_constraint r
WHERE r.conrelid = '146989' AND r.contype = 'f' ORDER BY 1;
**

* QUERY **
SELECT conname, conrelid::pg_catalog.regclass,
  pg_catalog.pg_get_constraintdef(c.oid, true) as condef
FROM pg_catalog.pg_constraint c
WHERE c.confrelid = '146989' AND c.contype = 'f' ORDER BY 1;
**

* QUERY **
SELECT t.tgname, pg_catalog.pg_get_triggerdef(t.oid, true), t.tgenabled
FROM pg_catalog.pg_trigger t
WHERE t.tgrelid = '146989' AND NOT t.tgisinternal
ORDER BY 1;
**

* QUERY **
SELECT c.oid::pg_catalog.regclass FROM pg_catalog.pg_class c,
pg_catalog.pg_inherits i WHERE c.oid=i.inhparent AND i.inhrelid =
'146989' ORDER BY inhseqno;
**

* QUERY **
SELECT c.oid::pg_catalog.regclass FROM pg_catalog.pg_class c,
pg_catalog.pg_inherits i WHERE c.oid=i.inhrelid AND i.inhparent =
'146989' ORDER BY c.oid::pg

Re: [GENERAL] Explicitly inserting NULL values into NOT NULL DEFAULT 0 columns

2011-11-21 Thread Tom Lane
Tanmay Patel  writes:
> If I insert a NULL value explicitly into a column declared to be NOT NULL
> DEFAULT 0 in postgreSQL 8.4 the column ends up with the default value. If I
> do the same in postgreSQL 9.0 I get an error about how I am inserting a
> null value into a NOT NULL column.

I'm sorry, but you're quite mistaken about the behavior of 8.4.  Every
version of Postgres would reject this; no version has ever considered an
explicit specification of NULL to be an invitation to insert the
column's default value instead.  (I have heard that mysql acts that way,
though.)

regards, tom lane



Re: [GENERAL] Explicitly inserting NULL values into NOT NULL DEFAULT 0 columns

2011-11-21 Thread Scott Marlowe
On Mon, Nov 21, 2011 at 5:27 PM, Tanmay Patel  wrote:
> If I insert a NULL value explicitly into a column declared to be NOT NULL
> DEFAULT 0 in postgreSQL 8.4 the column ends up with the default value. If I
> do the same in postgreSQL 9.0 I get an error about how I am inserting a null
> value into a NOT NULL column.

As Tom pointed out you are mistaken.  That's a MySQLism.  If you want
to insert defaults, use the DEFAULT keyword in place of where you're
trying to put NULL.
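Using the table from the original post, that looks like:

```sql
-- DEFAULT asks for the column's default value; NULL is taken literally.
INSERT INTO table1 (column1, column2) VALUES (0, DEFAULT);

-- Equivalent: omit the column entirely and let the default apply.
INSERT INTO table1 (column1) VALUES (0);
```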



Re: [GENERAL] Installing postgresql on windows 7

2011-11-21 Thread Ashesh Vashi
On Tue, Nov 22, 2011 at 3:36 AM, Twaha Daudi  wrote:

> Hello all,
> I have tried to install windows 7 on 64bit Acer machine.Both postgresql
> 8.4 and 9.0 giving error;
> "An error cocured executing the Microsoft VC++ runtime  installer "error
> for 8.4.while for 9.0 giving me error "Unable to write inside TEMP
> enveronment variable?
>
Looks like your TEMP environment variable is not properly set.
Can you please tell me what is the output of this command: "echo %TEMP%"?

If it is not properly set, please set it to
"%USERPROFILE%\AppData\Local\Temp".
Then try to run the installer.

--
Thanks & Regards,

Ashesh Vashi
EnterpriseDB INDIA: Enterprise PostgreSQL Company


> I have tried to check for WSH which is enabled,run as administrator
> without success.
> What could be the problem?
> thanks in advance
> huu
>


[GENERAL] Using KNN for objects that have more than 2 dimensions?

2011-11-21 Thread Benjamin Arai, Ph.D.
Hello,

I have a data sets where each of the objects is represented in a metric
space with 32 dimensions (i.e., each object is represented by 32 numbers).
Is there a way to represent this object in Postgresql so that I can perform
KNN?

Thanks,
Benjamin


[GENERAL] [general] rsync'd database requires reindex - why ?

2011-11-21 Thread marcin kowalski
Hi list.

I'm migrating a bunch of old suse 9.3 systems with postgresql 8.2 databases
to opensuse 11.4 systems with 8.2 databases (the exact same version -
8.2.14). From there, the databases will be migrated to postgresql 9.x with
custom process.

Let's assume that 9.3 machine is machine A, and new one is machine B.

For now, i'm simply stopping postgresql on both machines, and rsync-ing the
entire data directory from A to B. And then starting postgresql on the new
one.

I do it with rsync -azvr --delete parameters.

New machine has only bare minimum databases running prior to migration -
template0, template1, postgres. Which means that any extra user database
has its directory copied over from scratch. Postgresql starts up without
any issues after transfer is finished.

However, when running the new instance, it seems indexes are quite botched,
e.g.

"select * from table where primary_key_column='somevalue';" returns
nothing.

when doing plain "select * from table;" i can see the row that should have
been returned by the previous query.

Reindexing all databases seems to resolve the problem.

This is very reproducible, as I have lots of machines to migrate and I've
hit this problem at least three times now. For now, I'm just reindexing
databases after rsyncing them, but is that behavior expected?

Both machines have postgresql-8.2.14 built by hand, with the same
configuration and with contrib installed.


Re: [GENERAL] Installing postgresql on windows 7

2011-11-21 Thread Ashesh Vashi
On Tue, Nov 22, 2011 at 1:05 PM, Twaha Daudi  wrote:

> Hello Ashesh,
> here is the output of the command:
> C:\>echo %TEMP%
> C:\Users\User\AppData\Local\Temp
>
> It looks like the variable is set properly and still there is problem
> Any help?
>
>
> On Tue, Nov 22, 2011 at 7:13 AM, Ashesh Vashi <
> ashesh.va...@enterprisedb.com> wrote:
>
>>
>> On Tue, Nov 22, 2011 at 3:36 AM, Twaha Daudi  wrote:
>>
>>> Hello all,
>>> I have tried to install windows 7 on 64bit Acer machine.Both postgresql
>>> 8.4 and 9.0 giving error;
>>> "An error cocured executing the Microsoft VC++ runtime  installer "error
>>> for 8.4.while for 9.0 giving me error "Unable to write inside TEMP
>>> enveronment variable?
>>>
>> Looks like your TEMP environment variable is not properly set.
>> Can you please tell me what is the output of this command: "echo %TEMP%"?
>>
>> If it is not properly set, please set it to
>> "%USERPROFILE%\AppData\Local\Temp".
>> Then try to run the installer.
>>
> Do you have read/write permission in that directory?

--

Thanks & Regards,

Ashesh Vashi
EnterpriseDB INDIA: Enterprise PostgreSQL Company



http://www.linkedin.com/in/asheshvashi


>
>> --
>> Thanks & Regards,
>>
>> Ashesh Vashi
>> EnterpriseDB INDIA: Enterprise PostgreSQL 
>> Company
>>
>>
>>> I have tried to check for WSH which is enabled,run as administrator
>>> without success.
>>> What could be the problem?
>>> thanks in advance
>>> huu
>>>
>>
>>
>


Re: [GENERAL] Using KNN for objects that have more than 2 dimensions?

2011-11-21 Thread John R Pierce

On 11/21/11 11:20 PM, Benjamin Arai, Ph.D. wrote:
I have a data sets where each of the objects is represented in a 
metric space with 32 dimensions (i.e., each object is represented by 
32 numbers). Is there a way to represent this object in Postgresql so 
that I can perform KNN?




Would an array of 32 integers or doubles be suitable for this?

What's KNN?



--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast
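For the archives: one representation worth evaluating (hedged — check the docs for your server version) is the contrib cube type, which holds up to 100 coordinates and ships a cube_distance() function; PostgreSQL releases newer than this thread (9.6+) also add distance operators that a GiST index can use for index-assisted nearest-neighbour ordering. Table and column names below are made up:

```sql
-- Sketch, assuming contrib/cube is installed.
CREATE EXTENSION cube;
CREATE TABLE objects (id serial PRIMARY KEY, v cube);
CREATE INDEX objects_v_gist ON objects USING gist (v);

-- 10 nearest neighbours by Euclidean distance (operator <-> needs 9.6+;
-- on older servers, ORDER BY cube_distance(v, ...) works, without the index).
SELECT id
FROM objects
ORDER BY v <-> cube(ARRAY[0.1, 0.2 /* ... 32 values ... */])
LIMIT 10;
```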

