Re: [GENERAL] Date data type

2008-11-02 Thread A. Kretschmer
am  Mon, dem 03.11.2008, um 16:03:33 +0930 mailte Mike Hall folgendes:
> Gday,
> 
> I'm currently converting an MS Access database to PostgreSQL (version 8.1, as 
> it is the version that ships with CentOS 5).
> 
> I'm having trouble with an INSERT statement attempting to insert an empty 
> value ('') into a field with data type DATE. This produces the following 
> error message:
> 
> ERROR: invalid input syntax for type date: ""

An empty string isn't a valid date. Use NULL instead.


> 
> It does the same whether the field has a NOT NULL constraint or not.

Doesn't matter.


Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] Date data type

2008-11-02 Thread Klint Gore

Mike Hall wrote:

Gday,

I'm currently converting an MS Access database to PostgreSQL (version 8.1, as it 
is the version that ships with CentOS 5).

I'm having trouble with an INSERT statement attempting to insert an empty value 
('') into a field with data type DATE. This produces the following error 
message:

ERROR: invalid input syntax for type date: ""

It does the same whether the field has a NOT NULL constraint or not.

I can't find anything relevant in the documentation.

What am I doing wrong?
  


You're trying to put the empty string into a date field:
insert into atable (date_col) values ('');    -- fails: invalid input syntax for type date
insert into atable (date_col) values (null);  -- works
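
If the Access export produces empty strings where dates are missing, one
way to handle them during the load (just a sketch; the staging table,
column name and date format here are made up) is to turn the empty
strings into NULL explicitly while converting:

  -- load the raw text into a staging table first, then convert,
  -- mapping '' to NULL with NULLIF
  INSERT INTO atable (date_col)
  SELECT to_date(NULLIF(raw_date, ''), 'YYYY-MM-DD')
  FROM staging_table;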

klint.

--
Klint Gore
Database Manager
Sheep CRC
A.G.B.U.
University of New England
Armidale NSW 2350

Ph: 02 6773 3789  
Fax: 02 6773 3266

EMail: [EMAIL PROTECTED]




[GENERAL] Date data type

2008-11-02 Thread Mike Hall
Gday,

I'm currently converting an MS Access database to PostgreSQL (version 8.1, as it 
is the version that ships with CentOS 5).

I'm having trouble with an INSERT statement attempting to insert an empty value 
('') into a field with data type DATE. This produces the following error 
message:

ERROR: invalid input syntax for type date: ""

It does the same whether the field has a NOT NULL constraint or not.

I can't find anything relevant in the documentation.

What am I doing wrong?

Thanks


Michael Hall
IT Communications Officer
Alice Springs Town Council
[EMAIL PROTECTED]
(08) 8950 0561



Re: [GENERAL] Performance of views

2008-11-02 Thread Scott Marlowe
On Sun, Nov 2, 2008 at 7:40 PM, Martin Gainty <[EMAIL PROTECTED]> wrote:
> does anyone know if postgres supports 'refresh' of the applicable index(es) of a
> materialized view on refresh?

PostgreSQL has no built-in support for materialized views.  If you
follow the excellent tutorial on how to make your own, you wind up
creating real tables to hold the data, and any indexes you create on
those tables will be dynamically updated when the materialized view is
updated.
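
For reference, a minimal sketch of that hand-rolled approach (the table
and index names here are made up): the "materialized view" is just an
ordinary table, so any index on it is maintained automatically when the
refresh repopulates it.

  -- snapshot table acting as the materialized view
  CREATE TABLE order_totals_mv AS
    SELECT customer_id, sum(amount) AS total
    FROM orders
    GROUP BY customer_id;

  CREATE INDEX order_totals_mv_cust_idx ON order_totals_mv (customer_id);

  -- "refresh": repopulate the table; the index is updated as rows go in
  BEGIN;
  DELETE FROM order_totals_mv;
  INSERT INTO order_totals_mv
    SELECT customer_id, sum(amount) AS total
    FROM orders
    GROUP BY customer_id;
  COMMIT;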



Re: [GENERAL] Performance of views

2008-11-02 Thread Martin Gainty

does anyone know if postgres supports 'refresh' of the applicable index(es) of a 
materialized view on refresh?

Thanks,
Martin 
__ 
Disclaimer and confidentiality note 
Everything in this e-mail and any attachments relates to the official business 
of Sender. This transmission is of a confidential nature and Sender does not 
endorse distribution to any party other than intended recipient. Sender does 
not necessarily endorse content contained within this transmission. 


> To: [EMAIL PROTECTED]
> CC: [EMAIL PROTECTED]; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Performance of views 
> Date: Sun, 2 Nov 2008 21:22:24 -0500
> From: [EMAIL PROTECTED]
> 
> "Scott Marlowe" <[EMAIL PROTECTED]> writes:
> > Note that, at least in older versions, MySQL completely materialized a
> > temporary table from a view, then used that for the view.  This is
> > horribly inefficient, and results in a lot of people thinking views
> > are slow.  Not sure if this has been addressed in MySQL yet, don't
> > really care anymore, since I rarely use mysql for anything anymore.
> 
> Some simple experiments with mysql 5.0.67 suggest that this meme is
> obsolete there too.  I found some cases where it looks like we optimize
> a bit better than they do, but for simple views you seem to get the
> same plan as if you'd written out the equivalent query in-line.
> 
>   regards, tom lane
> 

_
When your life is on the go—take your life with you.
http://clk.atdmt.com/MRT/go/115298558/direct/01/

Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-11-02 Thread Scott Marlowe
On Sun, Nov 2, 2008 at 7:19 PM, Sam Mason <[EMAIL PROTECTED]> wrote:
> On Mon, Nov 03, 2008 at 10:01:31AM +0900, Craig Ringer wrote:
>> So - it's potentially even worth compressing the wire protocol for use
>> on a 100 megabit LAN if a lightweight scheme like LZO can be used.
>
> The problem is that you're then dedicating most of a processor to
> doing the compression, one that would otherwise be engaged in doing
> useful work for other clients.

Considering the low cost of gigabit networks nowadays (even my old T42
ThinkPad that's 4 years old has gigabit in it), it would be cheaper to
buy gigabit NICs and cheap switches than to worry about the network
component most of the time.  On WANs it's another story, of course.



Re: [GENERAL] Performance of views

2008-11-02 Thread Nikolas Everett
We've been toying around with reworking our years old database schema and
replacing the old tables with updatable views into the new schema.  The only
real problem we've had with it is that queries to one of our views seem to
be joining on unnecessary tables because the view does the join.  We don't
need the columns provided by the join and the join is kind of costly, but
performance has been great otherwise.
On Sun, Nov 2, 2008 at 8:59 PM, Scott Marlowe <[EMAIL PROTECTED]>wrote:

> On Sun, Nov 2, 2008 at 6:39 PM, Stephen Frost <[EMAIL PROTECTED]> wrote:
> > Simon,
> >
> >>   * Higher overhead mapping to original tables and indexes
> >
> > This just plain isn't true in PG, at least, and I'd think most other
> > sensible databases..
>
> Note that, at least in older versions, MySQL completely materialized a
> temporary table from a view, then used that for the view.  This is
> horribly inefficient, and results in a lot of people thinking views
> are slow.  Not sure if this has been addressed in MySQL yet, don't
> really care anymore, since I rarely use mysql for anything anymore.
>
>


Re: [GENERAL] Performance of views

2008-11-02 Thread Tom Lane
"Scott Marlowe" <[EMAIL PROTECTED]> writes:
> Note that, at least in older versions, MySQL completely materialized a
> temporary table from a view, then used that for the view.  This is
> horribly inefficient, and results in a lot of people thinking views
> are slow.  Not sure if this has been addressed in MySQL yet, don't
> really care anymore, since I rarely use mysql for anything anymore.

Some simple experiments with mysql 5.0.67 suggest that this meme is
obsolete there too.  I found some cases where it looks like we optimize
a bit better than they do, but for simple views you seem to get the
same plan as if you'd written out the equivalent query in-line.

regards, tom lane



Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-11-02 Thread Sam Mason
On Mon, Nov 03, 2008 at 10:01:31AM +0900, Craig Ringer wrote:
> Sam Mason wrote:
> >Your lzop numbers look *very* low; the paper suggests
> >compression going up to ~0.3GB/s on a 2GHz Opteron.
> 
> Er ... ENOCOFFEE? s/Mb(it)?/MB/g. And I'm normally *so* careful about 
> Mb/MB etc; this was just a complete thinko at some level. My apologies, 
> and thanks for catching that stupid error.

Nice to know we're all human here :)

> The paragraph should've read:
> 
> I get 19 MB/s (152 Mb/s) from gzip (deflate) on my 2.4GHz Core 2 Duo 
> laptop. With lzop (LZO) the machine achieves 45 MB/s (360 Mb/s). In both 
> cases only a single core is used. With 7zip (LZMA) it only manages 3.1 
> MB/s (24.8 Mb/s) using BOTH cores together.

Hum, I've just had a look and found that Debian has a version of the lzop
compression program.  I uncompressed a copy of the Postgres source for
a test and I'm getting around 120MB/s when compressing on a 2.1GHz Core2
processor (72MB in 0.60 seconds, fast mode).  If I save the output
and recompress it I get about 40MB/s (22MB in 0.67 seconds), so the
compression rate seems to be very dependent on the type of data.  As
a test, I've just written some code that writes out (what I guess is the
"LINENUMBER" test in the X100 paper) a file consisting of small
integers (less than 2 decimal digits, i.e. lots of zero bytes) and
now get up to 0.4GB/s (200MB in 0.5 seconds), which nicely matches my
eyeballing of the figure in the paper.

It does point out that compression rates seem to be very data dependent!

> So - it's potentially even worth compressing the wire protocol for use 
> on a 100 megabit LAN if a lightweight scheme like LZO can be used.

The problem is that you're then dedicating most of a processor to
doing the compression, one that would otherwise be engaged in doing
useful work for other clients.


BTW, the X100 work was about trying to become less IO bound; they had
a 350MB/s RAID array and were highly IO bound.  If I'm reading the
paper right, with their PFOR algorithm they got the final query (i.e.
decompressing and doing useful work) running at 500MB/s.


  Sam



Re: [GENERAL] Performance of views

2008-11-02 Thread Scott Marlowe
On Sun, Nov 2, 2008 at 6:39 PM, Stephen Frost <[EMAIL PROTECTED]> wrote:
> Simon,
>
>>   * Higher overhead mapping to original tables and indexes
>
> This just plain isn't true in PG, at least, and I'd think most other
> sensible databases..

Note that, at least in older versions, MySQL completely materialized a
temporary table from a view, then used that for the view.  This is
horribly inefficient, and results in a lot of people thinking views
are slow.  Not sure if this has been addressed in MySQL yet, don't
really care anymore, since I rarely use mysql for anything anymore.



Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-11-02 Thread Craig Ringer

Tom Lane wrote:


Wire protocol compression support in PostgreSQL would probably still be
extremely useful for Internet or WAN based clients, though,


Use an ssh tunnel ... get compression *and* encryption, which you surely
should want on a WAN link.


An ssh tunnel, while very useful, is only suitable for more capable 
users and is far from transparent. It requires an additional setup step 
before connecting to the database that's going to cause support problems 
and confuse users. It's also somewhat painful on Windows machines. 
Additionally, use of an SSH tunnel makes recovery after a broken 
connection much, MUCH more difficult for an application to handle 
transparently and automatically.


As you know, PostgreSQL supports SSL/TLS for encryption of wire 
communications, and you can use client certificates as an additional 
layer of authentication much as you can use an ssh key. It's clean and 
to the end user it's basically transparent. All the major clients, like 
the ODBC and JDBC drivers, already support it. Adding optional 
compression within that would be wonderful - and since the client and 
server are already designed to communicate through filters (for 
encryption) it shouldn't be that hard to stack another filter layer on top.


It's something I'm going to have to look at myself, actually, though I 
have some work on the qemu LSI SCSI driver that I *really* have to 
finish first.


--
Craig Ringer



Re: [GENERAL] Performance of views

2008-11-02 Thread Stephen Frost
Simon,

* Simon Windsor ([EMAIL PROTECTED]) wrote:
> Generally, I have avoided using VIEWS within application code and only  
> used them for client interfaces, the sole reason being the performance  
> of views against tables.

Views really shouldn't have a large impact on overall performance.  In
PostgreSQL, that's even more true.  In general, I would strongly
recommend moving complex queries from your application into views in the
database.  The performance difference really should be minimal, while
the maintainability is improved.
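
For example (the table and view names here are hypothetical), a join the
application used to assemble by hand can live in the database instead,
and the planner expands it exactly as if the query had been written
inline:

  CREATE VIEW customer_orders AS
    SELECT c.id, c.name, o.order_date, o.amount
    FROM customers c
    JOIN orders o ON o.customer_id = c.id;

  -- application code now issues the simpler query
  SELECT * FROM customer_orders WHERE id = 42;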

>   * Data within a view is not necessarily sequential, unlike a table

I've got no idea what you're talking about here, to be honest.  Pulling
data out of a table has no guaranteed ordering to it unless you
explicitly ask for one, and you can do that in a view too.

>   * Higher overhead mapping to original tables and indexes

This just plain isn't true in PG, at least, and I'd think most other
sensible databases..

>   * Danger of linking views and tables and not utilising
> underlying tables properly.

If you push your complicated queries into your views and keep the
application code simpler, identifying and fixing performance or other
issues often becomes easier.

> Am I right to avoid VIEWS within application code?

No.

Thanks,

Stephen




Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-11-02 Thread Tom Lane
Craig Ringer <[EMAIL PROTECTED]> writes:
> I get 19 Mbit/s from gzip (deflate) on my 2.4GHz Core 2 Duo laptop. With
> lzop (LZO) the machine achieves 45 Mbit/s. In both cases only a single
> core is used. With 7zip (LZMA) it only manages 3.1 Mb/s using BOTH cores
> together.

It'd be interesting to know where pg_lzcompress fits in.

> Wire protocol compression support in PostgreSQL would probably still be
> extremely useful for Internet or WAN based clients, though,

Use an ssh tunnel ... get compression *and* encryption, which you surely
should want on a WAN link.

regards, tom lane



Re: [GENERAL] Question, re: Running enormous batch file from the command line

2008-11-02 Thread Craig Ringer

Milarsky, Jeremy F. wrote:


I have, for the record, attempted to get all 10 million rows in with a
single COPY command. This method has not been successful, apparently due
to client encoding (despite executing several different "set
client_encoding" in each attempt).


This is probably something you should investigate, rather than work 
around, as it may indicate that you're loading incorrectly encoded data 
into your DB.


What is your database encoding? What encoding does the CSV dump from the 
DBF files use? What is your system's default text encoding? What 
encoding were you telling psql to use?



Personally, to fix this I'd start by using the wonderful `iconv' tool to 
convert the CSV data from its original encoding (probably one of the 
WIN- codepages, but you need to find out) into UTF-8. If you encounter 
any encoding errors in this process, you need to look at what's going on 
there and determine if your source data is in a different encoding to 
what you thought it was (and use the right one). If it turns out that 
different parts of the data are in different encodings, that's something 
you will need to clean up.


You have two options for cleaning up such mis-encoded data. One way is 
to do the cleanup on the CSV data - say, using Python and the `csv' 
module to load it, test it, and make whatever repairs you deem necessary 
on a record-by-record basis before writing out a fixed CSV file.


The other way is to load the data into PostgreSQL as `bytea' or using 
the SQL-ASCII encoding, then do the cleanup in PostgreSQL. Personally I 
suspect a little Python cleanup script is easiest. The "dumb" way would 
be to try str.decode(...) and if it throws an exception catch it and try 
the next encoding in your list of suspects. Of course, you can do much 
smarter things than that with a little bit more work and some 
examination of the faulty data.


Once you have CSV input data of a known and correct encoding (say, 
UTF-8) you can just "set client_encoding" as appropriate  and \copy from 
psql.
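
Something along these lines from psql, for instance (the table and file
names are placeholders):

  SET client_encoding = 'UTF8';
  \copy mytable from 'cleaned_data.csv' with csv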


--
Craig Ringer



Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-11-02 Thread Craig Ringer

Sam Mason wrote:

On Mon, Nov 03, 2008 at 08:18:54AM +0900, Craig Ringer wrote:

Joris Dobbelsteen wrote:

Also I still have to see a compression algorithm that can sustain over
(or even anything close to, for that matter) 100MB/s on today's COTS
hardware. As TOAST provides compression, maybe that data can be
transmitted in a compressed manner (without recompression).



I get 19 Mbit/s from gzip (deflate) on my 2.4GHz Core 2 Duo laptop. With
lzop (LZO) the machine achieves 45 Mbit/s. In both cases only a single
core is used. With 7zip (LZMA) it only manages 3.1 Mb/s using BOTH cores
together.


Your lzop numbers look *very* low; the paper suggests
compression going up to ~0.3GB/s on a 2GHz Opteron.


Er ... ENOCOFFEE? s/Mb(it)?/MB/g. And I'm normally *so* careful about 
Mb/MB etc; this was just a complete thinko at some level. My apologies, 
and thanks for catching that stupid error.


The paragraph should've read:

I get 19 MB/s (152 Mb/s) from gzip (deflate) on my 2.4GHz Core 2 Duo 
laptop. With lzop (LZO) the machine achieves 45 MB/s (360 Mb/s). In both 
cases only a single core is used. With 7zip (LZMA) it only manages 3.1 
MB/s (24.8 Mb/s) using BOTH cores together.


So - it's potentially even worth compressing the wire protocol for use 
on a 100 megabit LAN if a lightweight scheme like LZO can be used.


--
Craig Ringer



Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-11-02 Thread Sam Mason
On Mon, Nov 03, 2008 at 08:18:54AM +0900, Craig Ringer wrote:
> Joris Dobbelsteen wrote:
> > Also I still have to see a compression algorithm that can sustain over
> > (or even anything close to, for that matter) 100MB/s on today's COTS
> > hardware. As TOAST provides compression, maybe that data can be
> > transmitted in a compressed manner (without recompression).

> I get 19 Mbit/s from gzip (deflate) on my 2.4GHz Core 2 Duo laptop. With
> lzop (LZO) the machine achieves 45 Mbit/s. In both cases only a single
> core is used. With 7zip (LZMA) it only manages 3.1 Mb/s using BOTH cores
> together.

The algorithms in the MonetDB/X100 paper I posted upthread[1] appear
to be designed more for this use.  Their PFOR algorithm gets between
~0.4GB/s and ~1.7GB/s in compression and ~0.9GB/s and ~3GB/s in
decompression.  Your lzop numbers look *very* low; the paper suggests
compression going up to ~0.3GB/s on a 2GHz Opteron.  In fact, on an old
page for lzop[2] they were getting 5MB/s on a Pentium 133, so I don't
think I'm understanding what your numbers are.

I'll see if I can write some code that implements their algorithms and
send another mail.  If PFOR really is this fast then it may be good for
TOAST compression, though judging by the comments in pg_lzcompress.c it
may not be worth it as the time spent on compression gets lost in the
noise.


  Sam

  [1] http://old-www.cwi.nl/themes/ins1/publications/docs/ZuHeNeBo:ICDE:06.pdf
  [2] http://www.oberhumer.com/opensource/lzo/#speed



Re: [GENERAL] Performance of views

2008-11-02 Thread Webb Sprague
>> Am I right to avoid VIEWS within application code?

How one uses views is more a matter of taste and best practices than
a matter of rules like this.  Frankly, this "rule" sounds rather
ill-conceived.

My feeling is that views can be difficult to maintain when they are
nested, but otherwise use them whenever they simplify matters.  I also
feel that they should only be defined when they represent, well,
views of the data that make sense in a long-term way; don't use them
for a one-off application.

As for performance -- let me reiterate:  create the most elegant
design, possibly with views, and only worry about performance AFTER
PROFILING.

-W



Re: [GENERAL] Redefining an existing Table Schema for Foreign Key Constraint - Question

2008-11-02 Thread Sam Mason
On Sun, Nov 02, 2008 at 02:30:45PM -0800, Brian714 wrote:
> I would like to convert the column that used to be
> "credit_card_number" from the Customers table and turn it into a "cc_id"
> which is an integer that references the column "id" from the table
> Creditcards.
[...]
> Does anyone know of a script that I can use to do this? Am I supposed to use
> Triggers? How can this be done safely. I would like for the data to be
> consistent. Thank you guys in advance.

If you just want to move the data across all you need is a couple of SQL
statements:

  ALTER TABLE customers
ADD COLUMN cc_id INTEGER REFERENCES creditcards (id);

  UPDATE customers c SET cc_id = d.id
FROM creditcards d
WHERE c.credit_card_number = d.credit_card_number;

The first creates the new "cc_id" column in the customers table and the
second moves the data across.  For this to be valid, you really need
to make sure that there is only one "id" number for each credit card
number; the following is a standard way of checking this:

  SELECT credit_card_number, COUNT(*)
  FROM creditcards
  GROUP BY credit_card_number
  HAVING COUNT(*) > 1;

If you already have a UNIQUE constraint on the "credit_card_number"
column then this is already being enforced by the database and
the check is redundant.  Another check would be that all the customers'
credit cards are already in the "creditcards" table:

  SELECT c.*
  FROM customers c
LEFT JOIN creditcards d ON c.credit_card_number = d.credit_card_number
  WHERE c.credit_card_number IS NOT NULL
AND d.credit_card_number IS NULL;

I.e. give me all the customers where they have a credit card number yet
we can't find a matching entry.


On a slightly tangential note, why not use the "credit_card_number"
as the primary key in the creditcards table?  It looks like the
perfect example of a "natural key" and you wouldn't have to change the
"customers" table at all, except maybe to let the database check that
everything matches automatically:

  ALTER TABLE creditcards
ADD CONSTRAINT creditcards_ccnum_uniq
  UNIQUE (credit_card_number);

  ALTER TABLE customers
ADD FOREIGN KEY (credit_card_number)
  REFERENCES creditcards (credit_card_number);

After you've made sure everything still works, you may want to drop
the existing "id" out of the creditcards table and upgrade the unique
constraint to a full primary key:

  ALTER TABLE creditcards
DROP CONSTRAINT creditcards_pkey,
DROP CONSTRAINT creditcards_ccnum_uniq,
ADD PRIMARY KEY (credit_card_number),
DROP COLUMN id;

I'm a strong believer in natural keys at the moment, but it's somewhat
a religious battle!  A search on natural keys or surrogate keys should
give a reasonable view of the battle field.  Which way you go is a
design decision with trade-offs either way, but as you said you were
somewhat new to databases I thought a couple of examples could be
useful---I also get to think I've done something productive today!

Hum, I seem to have completely forgotten about triggers.  They are
useful if you want to automatically keep the customers table up-to-date
while maintaining the existing columns in the table.  Although, in this
case, it may be easier to use a "view" to present the new table design
in a way compatible with how it was before; if you've got control over
the code that accesses these tables, that may not even be needed.

Hope that helps!


  Sam



Re: [GENERAL] Performance of views

2008-11-02 Thread Scott Marlowe
On Sun, Nov 2, 2008 at 4:11 PM, Simon Windsor
<[EMAIL PROTECTED]> wrote:
> Hi
>
> Generally, I have avoided using VIEWS within application code and only used
> them for client interfaces, the sole reason being the performance of views
> against tables.

Have you confirmed this belief with tests?  Or are you just following
some kind of general knowledge that's filtered down from other DBAs?

> Changes to database over the past few years appear to have improved the
> performance of views, but I am still not comfortable with using VIEWS within
> application code. The main reasons I have are

This is true for some other DBMSs.  It's generally not true for
PostgreSQL.  PostgreSQL views have a very long history of running
exactly the same as if you ran the underlying queries by hand.  This is
due to the rules system, which expands a view into its defining query,
so a view performs the same as the original query on the table.
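
You can verify that yourself on any table (the names below are made up):
the view and the hand-written query should produce identical plans.

  CREATE VIEW ny_adults AS
    SELECT * FROM people WHERE state = 'NY' AND age >= 18;

  EXPLAIN SELECT count(*) FROM ny_adults;
  EXPLAIN SELECT count(*) FROM people WHERE state = 'NY' AND age >= 18;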

>  * Data within a view is not necessarily sequential, unlike a table

Data in a pgsql view is EXACTLY the same as if you ran the query by hand.

>  * Higher overhead mapping to original tables and indexes

The overhead in pgsql is measured in sub-millisecond time.

>  * Danger of linking views and tables and not utilising
>underlying tables properly.

Not sure what you mean there at all.

>
> Am I right to avoid VIEWS within application code?

Yes.  Absolutely.  If you're running MySQL.  OTOH, if you're in pgsql
there's no real reason to avoid them -- unless you'll also be
supporting MySQL.  Then MySQL plays the lowest-common-denominator
trump card it so often does and forces you to dumb down your db layer
to make it happy.



[GENERAL] Performance of views

2008-11-02 Thread Simon Windsor

Hi

Generally, I have avoided using VIEWS within application code and only 
used them for client interfaces, the sole reason being the performance 
of views against tables.


Changes to database over the past few years appear to have improved the 
performance of views, but I am still not comfortable with using VIEWS 
within application code. The main reasons I have are


  * Data within a view is not necessarily sequential, unlike a table
  * Higher overhead mapping to original tables and indexes
  * Danger of linking views and tables and not utilising
underlying tables properly.

Am I right to avoid VIEWS within application code?

Simon

--
Simon

Simon Windsor
Eml: [EMAIL PROTECTED]
Tel: 01454 617689
Mob: 07960 321599




Re: [GENERAL] Redefining an existing Table Schema for Foreign Key Constraint - Question

2008-11-02 Thread brian

Brian714 wrote:

Hello everyone,

I am new to the forum and fairly new to databases (particularly PostgreSQL).
I have done some searching on the internet and can't really get a hold of an
answer to my question. So here it goes:

I am working on a Customer Purchasing-based project where I must use a
database with existing customer data. The database currently follows the
following schema for two tables:

Creditcards Table
id:integer -- primary key
credit_card_number:varchar(16)
name_on_card:varchar(100)
expiration:date

Customers Table
id:integer -- primary key
first_name:varchar(50)
last_name:varchar(50)
credit_card_number:varchar(16)
address:varchar(200)
email:varchar(50)
password:varchar(20)

Currently, the database contains thousands of records in the Customers and
Creditcards tables. I would like to re-define the Customers table to follow
the following schema:

Customers Table
id:integer -- primary key
first_name:varchar(50)
last_name:varchar(50)
cc_id:integer references Creditcards.id
address:varchar(200)
email:varchar(50)
password:varchar(20)

As you can see, I would like to convert the column that used to be
"credit_card_number" from the Customers table and turn it into a "cc_id"
which is an integer that references the column "id" from the table
Creditcards.

I would like for the Customers.cc_id to match with a Creditcards.id that has
the same credit_card_number.


This should do it:


ALTER TABLE Customers ADD COLUMN cc_id INTEGER;

UPDATE Customers AS c SET cc_id = Creditcards.id FROM Creditcards WHERE 
c.credit_card_number = Creditcards.credit_card_number;



(I'm not sure of the best way to use table aliases in the above statement.)


ALTER TABLE Customers DROP COLUMN credit_card_number;

ALTER TABLE Customers ADD CONSTRAINT fk_credit_card_id FOREIGN KEY 
(cc_id) REFERENCES Creditcards (id) ON DELETE CASCADE;


I think those last two can be rolled into one statement but it doesn't 
hurt to separate them.


I'm assuming here that Creditcards.id is a SERIAL type. Also, you might 
want to read up on foreign keys and decide on the best ON DELETE 
scenario for your situation.



I would like for the data to be consistent.


Indeed ;-)



Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-11-02 Thread Craig Ringer
Joris Dobbelsteen wrote:

> Also I still have to see a compression algorithm that can sustain over
> (or even anything close to, for that matter) 100MB/s on today's COTS
> hardware. As TOAST provides compression, maybe that data can be
> transmitted in a compressed manner (without recompression).

I did a few quick tests on compression speed, as I was curious about
just what sort of performance was available. I was under the impression
that modern hardware could easily top 100 Mbit/s with common compression
algorithms, and wanted to test that.

Based on the results I'd have to agree with the quoted claim. I was
apparently thinking of symmetric encryption throughput rather than
compression throughput.

I get 19 Mbit/s from gzip (deflate) on my 2.4GHz Core 2 Duo laptop. With
lzop (LZO) the machine achieves 45 Mbit/s. In both cases only a single
core is used. With 7zip (LZMA) it only manages 3.1 Mb/s using BOTH cores
together.

All tests were done on a 278MB block of data that was precached in RAM.
Output was to /dev/null except for the LZMA case (due to utility
limitations) in which case output was written to a tmpfs.

Perhaps a multi-core and/or SIMD-ized implementation of LZO (if such a
thing is possible or practical) might manage 100 Mbit/s, or you might
pull it off on an absolutely top of the range desktop (or server) CPU
like the 3.3 GHz Core 2 Duo. Maybe, but probably not without
considerable overclocking, which eliminates the "COTS" aspect rather
soundly.

Given that very few people have dedicated gzip (or other algorithm)
acceleration cards in their systems, it looks like it should be faster
to do transfers uncompressed over a network of any respectable speed.
Not entirely surprising, really, or it'd be used a lot more in common
file server protocols.

Wire protocol compression support in PostgreSQL would probably still be
extremely useful for Internet or WAN based clients, though, and there
are probably more than a few of those around. I know it'd benefit me
massively, as I have users using PostgreSQL over 3G cellular radio
(UMTS/HSDPA) where real-world speeds are around 0.1 - 1.5 Mbit/s, data
transfer limits are low and data transfer charges are high.

Compression would clearly need to be a negotiated connection option, though.

Interestingly, the Via thin clients at work, which have AES 256 (among
other things) implemented in hardware can encrypt to AES 256 at over 300
MB/s. Yes, megabytes, not megabits. Given that the laptop used in the
above testing only gets 95 MB/s, it makes you wonder about whether it'd
be worthwhile for CPU designers to offer a common compression algorithm
like LZO, deflate, or LZMA in hardware for server CPUs.

--
Craig Ringer



[GENERAL] Redefining an existing Table Schema for Foreign Key Constraint - Question

2008-11-02 Thread Brian714

Hello everyone,

I am new to the forum and fairly new to databases (particularly PostgreSQL).
I have done some searching on the internet and can't really get a hold of an
answer to my question. So here it goes:

I am working on a Customer Purchasing-based project where I must use a
database with existing customer data. The database currently follows the
following schema for two tables:

Creditcards Table
id:integer -- primary key
credit_card_number:varchar(16)
name_on_card:varchar(100)
expiration:date

Customers Table
id:integer -- primary key
first_name:varchar(50)
last_name:varchar(50)
credit_card_number:varchar(16)
address:varchar(200)
email:varchar(50)
password:varchar(20)

Currently, the database contains thousands of records in the Customers and
Creditcards tables. I would like to re-define the Customers table to follow
the following schema:

Customers Table
id:integer -- primary key
first_name:varchar(50)
last_name:varchar(50)
cc_id:integer references Creditcards.id
address:varchar(200)
email:varchar(50)
password:varchar(20)

As you can see, I would like to convert the column that used to be
"credit_card_number" from the Customers table and turn it into a "cc_id"
which is an integer that references the column "id" from the table
Creditcards.

I would like for the Customers.cc_id to match with a Creditcards.id that has
the same credit_card_number.

Does anyone know of a script that I can use to do this? Am I supposed to use
Triggers? How can this be done safely? I would like for the data to be
consistent. Thank you guys in advance.




Re: [GENERAL] Error in installation...

2008-11-02 Thread Raymond O'Donnell
On 02/11/2008 19:15, Shashank Mujumdar wrote:

> After this nothing happens..The command prompt does not appear..As if
> the cursor is waiting for some input or something like that..
> Please let me know if I missed something in installation process..What
> might be the problem ???

I meant to add that you're not seeing a command prompt coming back
because Windows does not have an "execute in the background" command
like Linux, etc. - so the postgres process will continue to run in the
window from which you started it until you tell it to stop by pressing
Ctrl-C or some such. You'll get your command prompt back then.

If you're going to be running PG a lot on this machine, or running it on
a production server, the better way really is to install it as a
service, so that it can continue to run even when nobody is logged in.
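
On 8.3 that can be done afterwards with pg_ctl from an Administrator
command prompt, something like the following (the service name and data
directory here are only examples):

  C:\Program Files\PostgreSQL\8.3\bin> pg_ctl register -N PostgreSQL8.3 -D C:\path\to\pgsqldata
  C:\Program Files\PostgreSQL\8.3\bin> net start PostgreSQL8.3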

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



[GENERAL] PGDAY2008 (PostgreSQL)

2008-11-02 Thread Emanuel CALVO FRANCO
/*--

Welcome to:
PGDay 2008 Rio de la Plata
Organized by the PostgreSQL community of Buenos Aires (BaPUG)
Web: www.postgres-arg.org


On Saturday, 22 November 2008, the first PGDay of Argentina will be
held. It aims to bring together the Latin American community in order
to organize projects as a community in the Southern Cone.

It goes without saying that the most advanced and powerful open-source
database deserves the kind of event that is held regularly in northern
countries, especially in academic settings.

Admission to the event is free; registration is requested only for
security reasons. The e-mail address to sign up is
"mequieroanotar |at| postgres-arg |dot| org".

The event will have participants from abroad and from around the
country. David Fetter (Core Team) will be present, and we are in talks
to bring Alvaro Herrera from Chile (Major Contributor), depending on
the sponsors that come forward.

We are still receiving requests from companies that would like to
appear as sponsors of the event; they may choose among three possible
tiers: Cluster, Database and Table.

The event is endorsed by www.postgresql.org.


TOPICS TO BE COVERED AT THE EVENT

The event agenda has been approved; the topics to be covered are
documentation, development, advocacy, platforms, and new features of
the in-development 8.4 release.

For more information, see pgday.postgres-arg.org under the
"Charlas y Minutas" (Talks and Agenda) tab.

Talks may be submitted and, after a selection process, may be given
their own slot.

The organization of the ArPUG community and of the regional PUGs will
also be discussed, with the goal of decentralizing events and projects,
as well as possible future events elsewhere in the country.


SPONSORS SO FAR

Cluster: GLOBANT (www.globant.com).
Database: .
Table: .


CLOSING

Finally, thank you for reading this far; we invite you to participate
and to collaborate with the PostgreSQL project.

You can subscribe to the [EMAIL PROTECTED] list to stay up to date on
community topics.

You can also visit www.postgres-arg.org (the ArPUG website).

If you want to organize events or have news to publish, just write to
"admin |at| postgres-arg |dot| org".

Thank you very much, and we hope to see you there.



   BaPUG
(Bs. As. Pgsql User's Group)



Re: [GENERAL] Error in installation...

2008-11-02 Thread Raymond O'Donnell
On 02/11/2008 19:15, Shashank Mujumdar wrote:

> C:\Program Files\PostgreSQL\8.3\bin> postgres -D C:\path\to\pgsqldata
> 
> when I entered the command I got the following notification..
> 
> LOG:  database system was shut down at 2008-11-02 18:17:13 IST
> LOG:  database system is ready to accept connections
> LOG:  autovacuum launcher started
> 
> After this nothing happens..The command prompt does not appear..As if
> the cursor is waiting for some input or something like that..
> Please let me know if I missed something in installation process..What
> might be the problem ???

That's all fine - what you've done is to start the database server
process, which is listening for connections on port 5432 (if you
accepted the default). Nothing else will happen in that window until you
or someone else connects to this process. Open another terminal window,
and connect to it with psql:

  C:\Program Files\PostgreSQL\8.3\bin\psql -U postgres postgres

This will connect you to the database "postgres" as user "postgres".
You'll probably see some more logging stuff appear in the original
window, depending on how high your logging levels are set.

HTH,

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



Re: [GENERAL] Question, re: Running enormous batch file from the command line

2008-11-02 Thread Martijn van Oosterhout
On Sun, Nov 02, 2008 at 02:13:34PM -0600, Milarsky, Jeremy F. wrote:
> I am attempting to move an unusually large legacy database from old DBF
> files into a Postgres database. I've been able to clean up the DBF
> tables, export them to CSV and convert that CSV into a SQL batch file.



At a wild guess, it probably thinks that everything between line 2279459
and 3117024 is one line: you have an uneven number of quotes, and a
string literal can contain a newline, so it treats the whole lot as a
single statement.

Fix the quoting and I imagine this problem will also go away.
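
For example (made-up values, column list abbreviated), an apostrophe
inside a value has to be doubled, otherwise everything up to the next
stray quote is swallowed into the string literal:

  -- broken: the quote inside O'BRIEN ends the string literal early
  INSERT INTO mf1 VALUES ('X', 'O'BRIEN', '', 'JOHN');

  -- fixed: double the single quote inside the literal
  INSERT INTO mf1 VALUES ('X', 'O''BRIEN', '', 'JOHN');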

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while 
> boarding. Thank you for flying nlogn airlines.




[GENERAL] Question, re: Running enormous batch file from the command line

2008-11-02 Thread Milarsky, Jeremy F.
I am attempting to move an unusually large legacy database from old DBF
files into a Postgres database. I've been able to clean up the DBF
tables, export them to CSV and convert that CSV into a SQL batch file.

 

My batch file contains 10 million INSERT queries to populate the
corresponding table in Postgres.

 

When I run it, I see only three errors, all of them having to do with my
admitted failure to escape single quotes in one of my fields when I
parsed the CSV.

 

[my_command_line_prompt]$ psql mydatabase -f mybatchfile.sql > log.txt

Password:

psql:mybatchfile.sql:2279459: ERROR:  syntax error at or near "P" at
character 68

psql:mybatchfile.sql:2279459: LINE 1: ...VALUES
('X','SMITH','','JOHN',''','P','09','3...

psql:mybatchfile.sql:2279459:
^

psql:mybatchfile.sql:3117024: ERROR:  syntax error at or near "','" at
character 64

psql:mybatchfile.sql:3117024: LINE 1: ...mf1 VALUES
('X','SMITH','','JOHN',''',' ','10','28...

psql:mybatchfile.sql:3117024:
^

psql:mybatchfile.sql:6775763: ERROR:  syntax error at or near "P" at
character 67

psql:mybatchfile.sql:6775763: LINE 1: ... VALUES
('XX','SMITH','','JOHN',''','P','01','0...

psql:mybatchfile.sql:6775763:

 

However, when afterwards I go into the psql prompt and do a SELECT
COUNT(*) ... on the table, I see there are only 4.5 million or so
records in the table. Appropriately enough, the log.txt file I'm
outputting to in the aforementioned command has 4.5 million or so lines,
each saying "INSERT 0 1".

 

So is Postgres skipping over some 5+ million lines in my batch file? If
so, why is it spitting out only 3 error messages? How else can I see
what's going wrong with those 5 million INSERT commands? Are there any
other factors which would cause this sort of data loss?

 

I have, for the record, attempted to get all 10 million rows in with a
single COPY command. This method has not been successful, apparently due
to client encoding (despite executing several different "set
client_encoding" in each attempt).

 

If this sort of question has been asked before, I apologize - but I did
search the mailing list archive before posting. 

 

Thanks in advance.

 

==

Jeremy Milarsky



Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-11-02 Thread Joris Dobbelsteen

Gregory Stark wrote, On 01-11-08 14:02:

Ivan Sergio Borgonovo <[EMAIL PROTECTED]> writes:


But sorry I still can't get WHY compression as a whole and data
integrity are mutually exclusive.

...

[snip performance theory]


Postgres *guarantees* that as long as everything else works correctly it
doesn't lose data. Not that it minimizes the chances of losing data. It is
interesting to discuss hardening against unforeseen circumstances as well but
it's of secondary importance to first of all guaranteeing 100% that there is
no data loss in the expected scenarios.

That means Postgres has to guarantee 100% that if the power is lost mid-write
that it can recover all the data correctly. It does this by fsyncing logs of
some changes and depending on filesystems and drives behaving in certain ways
for others -- namely that a partially completed write will leave each byte
with either the new or old value. Compressed filesystems might break that
assumption making Postgres's guarantee void.


The guarantee YOU want from the underlying file system is that, in case 
of, let's say, a power failure:


* Already existing data is not modified.
* Overwritten data might be corrupted, but it is either old or new data.
* If an fsync completes, all written data IS committed to disk.

If a (file) system CAN guarantee that, in any way possible, it is safe 
to use with PostgreSQL (considering my list is complete, of course).


As a side note: I consider the second assumption a bit too strong, but 
there are probably good reasons to do so.



I don't know how these hypothetical compressed filesystems are implemented so
I can't say whether they work or not. When I first wrote the comment I was
picturing a traditional filesystem with each block stored compressed. That
can't guarantee anything like this. 


Instead, the discussion reverts to discussing file systems without even 
a glance at their method of operation. No algorithm used by the 
file system is written down, yet they are being discussed.



However later in the discussion I mentioned that ZFS with an 8k block size
could actually get this right since it never overwrites existing data, it
always writes to a new location and then changes metadata pointers. I expect
ext3 with data=journal might also be ok. These both have to make performance
sacrifices to get there though.


Instead, here we get to the specifics we needed a long time ago: ZFS 
takes 8kB as its optimal point(*), and never overwrites existing data. 
So it should be as safe as any other file system, if he is indeed correct.


Now, does a different block size (of ZFS or PostgreSQL) make any 
difference to that? No, it still guarantees the list above.


Performance is a discussion better left alone, since it is really 
dependent on your workload, installation and other specifics. It could 
be better and it could be worse.


- Joris


(*) Larger block sizes improve compression ratio. However, you pay a 
bigger penalty on writes, as more must be read, processed and written.




[GENERAL] Error in installation...

2008-11-02 Thread Shashank Mujumdar
Hi..

I've this problem in installation...I installed the postgresql-8.3
version without the service...Then to initialize I gave the command
initdb -D path\ in the command prompt...It showed some notifications
and it finally said to start the database server I needed to enter the
command

C:\Program Files\PostgreSQL\8.3\bin> postgres -D C:\path\to\pgsqldata

when I entered the command I got the following notification..

LOG:  database system was shut down at 2008-11-02 18:17:13 IST
LOG:  database system is ready to accept connections
LOG:  autovacuum launcher started

After this nothing happens..The command prompt does not appear..As if
the cursor is waiting for some input or something like that..
Please let me know if I missed something in installation process..What
might be the problem ???



Re: [GENERAL] Slow query performance

2008-11-02 Thread Joris Dobbelsteen

Kevin Galligan wrote, On 29-10-08 23:35:

An example of a slow query is...

select count(*) from bigdatatable where age between 22 and 40 and state 
= 'NY';


explain analyze returned the following...

 Aggregate  (cost=5179639.55..5179639.56 rows=1 width=0) (actual 
time=389529.895..389529.897 rows=1 loops=1)
   ->  Bitmap Heap Scan on bigdatatable  (cost=285410.65..5172649.63 
rows=2795968 width=0) (actual time=6727.848..387159.175 
rows=2553273 loops=1)

 Recheck Cond: ((state)::text = 'NY'::text)
 Filter: ((age >= 22) AND (age <= 40))
 ->  Bitmap Index Scan on idx_jstate  (cost=0.00..284711.66 
rows=15425370 width=0) (actual time=6298.950..6298.950 
rows=16821828 loops=1)

   Index Cond: ((state)::text = 'NY'::text)
 Total runtime: 389544.088 ms

It looks like the index scans are around 6 seconds or so each, but then 
the bitmap heap scan and aggregate jump up to 6 minutes.


Indeed. Its cause is that PostgreSQL must traverse the heap data in order 
to verify that each row is visible to the transaction. This means A LOT 
of data must be retrieved from disk.


The only real thing you can do is reduce the I/O load, by reducing the 
amount of data that must be traversed (or ensuring the data is stored 
closely together, but that's really hard to get right). This requires 
optimizing your database design for that single goal.
This will not make it scale any better than it currently does, however. 
The query will still scale O(N) with the size of your table; you need 
other techniques to do better.
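
One modest option along those lines (just a sketch; how much it helps
depends on the data distribution) is a composite index matching the
whole predicate, so the bitmap covers only the rows that actually
qualify rather than every 'NY' row:

  CREATE INDEX idx_state_age ON bigdatatable (state, age);
  ANALYZE bigdatatable;

The qualifying rows still have to be fetched from the heap for the
visibility check, so the gain is limited to the smaller bitmap.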


Another thing is spending extra money on hardware that can sustain 
higher I/O seek rates (more and/or faster spindles).


- Joris


More detail on the table design and other stuff in a bit...


On Wed, Oct 29, 2008 at 6:18 PM, Scott Marlowe <[EMAIL PROTECTED] 
> wrote:


On Wed, Oct 29, 2008 at 2:18 PM, Kevin Galligan <[EMAIL PROTECTED]
> wrote:
 > I'm approaching the end of my rope here.  I have a large database.
 > 250 million rows (ish).  Each row has potentially about 500 pieces of
 > data, although most of the columns are sparsely populated.

A couple of notes here.  PostgreSQL stores null values as a single bit
in a bit field, making sparsely populated tables quite efficient as
long as you store the non-existent values as null and not '' or some
other real value.

Have you run explain analyze on your queries yet?  Pick a slow one,
run explain analyze on it and post it and we'll see what we can do.





Re: [GENERAL] Are there plans to add data compression feature to postgresql?

2008-11-02 Thread Joris Dobbelsteen

Grzegorz Jaśkiewicz wrote, On 30-10-08 12:13:


it should; every book on encryption says that if you compress your data 
before encryption, it's better.


Those books should also mention that you should leave this subject to 
experts, and they have numerous examples of systems that follow the book 
but are still broken. There are other techniques as well that make 
breaking it harder, such as the CBC and CTS modes.


Using compression consumes processing power and resources, making DoS 
attacks a lot easier.


Also I still have to see a compression algorithm that can sustain over 
(or even anything close to, for that matter) 100MB/s on today's COTS 
hardware. As TOAST provides compression, maybe that data can be 
transmitted in a compressed manner (without recompression).


- Joris



Re: [GENERAL] Error in Adding All Tables

2008-11-02 Thread Andreas Kretschmer
Joshua D. Drake <[EMAIL PROTECTED]> schrieb:

> salman Sheikh wrote:
>>
>>
>> Hi freinds,
>> I wanted to add all my tables at once in an MFC application.
>>
>> I have a database which has 11 tables, and I want to add them all together.
>>
>>  After adding all the tables, it always shows me this error while debugging.
>>
>>
>> ERROR: column reference "ctid" is ambiguous;
>> Error while executing the query
>
> This means you did this:
>
> SELECT ctid,ctid from table1, table2
>
> But it doesn't know which ctid belongs to which table. You need to use  
> aliases or dotted notation.

For the example above, for instance:

select a.ctid, b.ctid from table1 a, table2 b ... (if you want these
columns)


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°
