Re: [GENERAL] recovery from xid wraparound

2006-10-24 Thread Martijn van Oosterhout
On Tue, Oct 24, 2006 at 07:43:15AM +0100, Shane Wright wrote:
 Anyway - not noticed any data loss yet and was hoping it would be such
 that if all tables had been vacuumed recently (including system catalog
 tables), that there would be no remaining rows that would appear to
 have a future xid and so the database should be ok?

Running vacuum is the right solution, but I think you have to let it
finish. In particular, in that version a database-wide vacuum has to
complete before it will update the datfrozenxid (it's not tracked per
table).

 a) is my assumption about the database being ok correct - assuming all
 tables have been vacuumed recently, including catalog tables?

Should be ok, but apparently you missed one or didn't do a database-wide
vacuum.

 b) is it possible to safely abort my whole table vacuum now so I can
 run it at the weekend when there's less traffic?

Aborting vacuum is safe, but you have to do a database-wide vacuum at
some point.

 c) if I have experienced data loss, on the assumption all the table
 structure remains (looks like it does), and I have a working backup
 from before the xid wraparound (I do), can I just reinsert any
 detected-missing data at the application level without needing a
 dump/reload?

A VACUUM will recover any data that slipped beyond the horizon less
than 1 billion transactions ago, which I think covers you completely.
The only issue is that unique indexes may be confused because new
conflicting data may have been inserted while the old data was
invisible. Only you can say if that's going to be an issue.

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.


signature.asc
Description: Digital signature


Re: [GENERAL] recovery from xid wraparound

2006-10-24 Thread Martijn van Oosterhout
On Tue, Oct 24, 2006 at 11:06:01AM +0100, Shane Wright wrote:
 If I was to abort this vacuum, given that all other tables are vacuumed
 (including system catalog tables), what's the worst case scenario? -
 given that more transactions are happening on the database 

Only tables that haven't been vacuumed in the last billion transactions
are at risk. It's possible that if you've vacuumed that large table
recently by itself that all the data is actually safe, just the system
doesn't realise it.

Just make sure you've really covered *all* the system tables. If they
go you get really weird results.

 If I understand correctly, it would be that some rows could disappear
 from this large unvacuumed table if their xid was too old - but no other
 consequence?

The VACUUM would make them reappear. To truly disappear they would
have to be 3 billion transactions old. That leaves the unique index
issue I mentioned.

 (fully aware that a db-wide vacuum is needed, but if it can [safely]
 wait for the weekend that would be preferable)

That's risk-management. For example, all the really old tuples are
possibly near the beginning of the table, thus this current vacuum will
have fixed them already. But to get a handle on that you need to
analyse your tuple turnover and usage ratio.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] recovery from xid wraparound

2006-10-24 Thread Shane Wright
Martijn,

Thanks,

Just make sure you've really covered *all* the system tables. If they go
you get really weird results.

I've been under the impression system tables get done first, then
user(me)-created tables after - which means my previous [aborted]
attempts at vacuuming them would have covered it, unless I'm missing
something?

(db was created by initdb, then pg_restore to load data into it,
database was then vacuumed before production work began)

I've looked at the list of catalog tables in 7.4's docs (URL below),
and all 28 have been processed in this vacuum, so presumably the same
order applied to my previous attempts:
http://www.postgresql.org/docs/7.4/static/catalogs.html

Checked with 'grep vacuuming vacuum.log | grep pg_ | grep toast -v |
sort | uniq'

Does this sound like a fair assumption?

(it is on the first database in the cluster, these aren't coming up from
other databases)

Many thanks for your help!

S


-Original Message-
From: Martijn van Oosterhout [mailto:[EMAIL PROTECTED] 
Sent: 24 October 2006 11:50
To: Shane Wright
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] recovery from xid wraparound


On Tue, Oct 24, 2006 at 11:06:01AM +0100, Shane Wright wrote:
 If I was to abort this vacuum, given that all other tables are 
 vacuumed (including system catalog tables), what's the worst case 
 scenario? - given that more transactions are happening on the database

Only tables that haven't been vacuumed in the last billion transactions
are at risk. It's possible that if you've vacuumed that large table
recently by itself that all the data is actually safe, just the system
doesn't realise it.

Just make sure you've really covered *all* the system tables. If they go
you get really weird results.

 If I understand correctly, it would be that some rows could disappear 
 from this large unvacuumed table if their xid was too old - but no 
 other consequence?

The VACUUM would make them reappear. To truly disappear they would have
to be 3 billion transactions old. That leaves the unique index issue I
mentioned.

 (fully aware that a db-wide vacuum is needed, but if it can [safely] 
 wait for the weekend that would be preferable)

That's risk-management. For example, all the really old tuples are
possibly near the beginning of the table, thus this current vacuum will
have fixed them already. But to get a handle on that you need to analyse
your tuple turnover and usage ratio.

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability 
 to litigate.

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] postgres under Suse linux

2006-10-24 Thread Antonios Katsikadamos
Hi all, I am a new Linux and postgres user and I don't
know how I can configure postgres on SUSE Linux in
order to make it run.

I would be thankful for any tip.

kind regards

Antonios


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] PostgreSQL in article I wrote

2006-10-24 Thread Walter Vaughan
I wrote an article in this month's SEMA News Show Issue that features
PostgreSQL among other open source projects.


I had to write this article at a very high level (99% of the readers are car
nuts, mostly senior-level management), but any positive mention of PostgreSQL
should be a good thing.


http://www.sema.org/main/semaorghome.aspx?id=56095

The article was on four pages in the magazine and reads a little better.

--
Walter

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] postgres under Suse linux

2006-10-24 Thread Richard Broersma Jr
 Hi all I am a new linux and postgres user and i don't
 know how i canconfigure the postgres on suse linux in
 order to make it run.
 I would be thankful for any tip.

The postgresql installation documentation is in the manual.  Also, notice the
user comments at the bottom.  Some comments are from SUSE users.

http://www.postgresql.org/docs/8.1/interactive/installation.html

Regards,

Richard Broersma Jr.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] recovery from xid wraparound

2006-10-24 Thread Shane Wright
Tom,

Thanks

But are there just 28 (the 28 that have been vacuumed), or are there more (in
7.4)?

I'm happy there's no guarantee, but it would help to know any possible danger
in my current situation.

Thanks

S



-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: 24 October 2006 15:23
To: Shane Wright
Cc: Martijn van Oosterhout; pgsql-general@postgresql.org
Subject: Re: [GENERAL] recovery from xid wraparound 


Shane Wright [EMAIL PROTECTED] writes:
 Just make sure you've really covered *all* the system tables.

 I've been under the impression system tables get done first, then 
 user(me)-created tables after -

No, there's no such guarantee.  A database-wide vacuum just does the tables in 
the order it finds 'em in pg_class.

regards, tom lane




[GENERAL] recovery from xid wraparound

2006-10-24 Thread Shane Wright
Hi

I'm running 7.4 on RHAS 4, and I think I've had a transaction id
wraparound issue.  Running the command below gives the suitably
worrying negative number:

emystery=# SELECT datname, age(datfrozenxid) FROM pg_database;
    datname    |     age
---------------+-------------
 [maindbname]  | -2081610471
 [otherdbname] |  1075601025
 [otherdbname] |  1257289757
 [otherdbname] |  1074582099
 [otherdbname] |  1257289757


Which is weird - because I have vacuumed the database quite a lot:
both individual tables, and I thought a vacuum of the whole database a
month or so ago.
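For anyone puzzled by that negative number: it falls out of 32-bit
transaction ID arithmetic once datfrozenxid lags more than 2^31
transactions behind the current xid. A rough sketch (the xid values below
are made up, and real PostgreSQL compares xids with modulo arithmetic
internally rather than exactly this way):

```python
def signed32(n):
    """Interpret n modulo 2**32 as a signed 32-bit integer."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

def xid_age(current_xid, frozen_xid):
    """Approximate age(datfrozenxid): the signed 32-bit distance
    from the frozen xid to the current xid."""
    return signed32(current_xid - frozen_xid)

# A recently frozen database shows a small positive age:
print(xid_age(1_100_000_000, 1_000_000_000))   # 100000000

# Once datfrozenxid lags by more than 2**31 transactions, the signed
# difference wraps negative -- the symptom seen in pg_database above:
print(xid_age(3_300_000_000, 1_000_000_000))   # -1994967296
```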

Anyway - not noticed any data loss yet and was hoping it would be such
that if all tables had been vacuumed recently (including system catalog
tables), that there would be no remaining rows that would appear to
have a future xid and so the database should be ok?

Obviously I'm now doing the right thing with a vacuumdb -a - however
this has been running for 9 hours now and it looks like at least 7 hours
to go just on this one monstrous table.

In the interests of risk reduction I've just knocked up a script to run
ahead and quickly vacuum all the other tables.

But my questions are thus...

a) is my assumption about the database being ok correct - assuming all
tables have been vacuumed recently, including catalog tables?

b) is it possible to safely abort my whole table vacuum now so I can
run it at the weekend when there's less traffic?

c) if I have experienced data loss, on the assumption all the table
structure remains (looks like it does), and I have a working backup
from before the xid wraparound (I do), can I just reinsert any
detected-missing data at the application level without needing a
dump/reload?

Any help appreciated in this really not-fun time,

thanks

S




[GENERAL] What is causing 'canceling statement due to user request' ?

2006-10-24 Thread Csaba Nagy
Hi all,

I know of 2 causes:

 - hit CTRL-C in the psql client;
 - have a non-zero statement timeout and have the statement actually
time out;

But I am seeing this via JDBC which can't cancel a statement AFAIK, and
the statement_timeout is set to 0 (in the config file globally, and
there's no override for the users, nor in our JDBC code).

I wonder if there is any other scenario which can trigger this error? I
found a lot of them in our logs; last night we had hundreds of them, but
now that I've looked, it happened sporadically in the past too...

Could some network problem trigger this?

TIA,
Csaba.



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] recovery from xid wraparound

2006-10-24 Thread Tom Lane
Shane Wright [EMAIL PROTECTED] writes:
 Incidentally, how many passes of a table can vacuum make!

Lots, especially if the table hasn't been vacuumed in a long time...
Perhaps you should be using a higher maintenance_work_mem?
(Um, in 7.4 make that vacuum_mem.)  Larger work memory translates
directly to fewer passes over the indexes.

regards, tom lane



Re: [GENERAL] recovery from xid wraparound

2006-10-24 Thread Martijn van Oosterhout
On Tue, Oct 24, 2006 at 03:47:52PM +0100, Shane Wright wrote:
 
 Incidentally, how many passes of a table can vacuum make!  It's currently
 on its third trip through the 20 GB of indices, meaning another 7 hours
 till completion [of this table]!
 
 Assume it only does three passes?  (it chooses based on the table
 continuing to be updated while vacuum is running)

It depends on how many tuples it needs to process and how much memory
you gave it (the maintainence_work_mem settings). The more memory you
give it, the less passes it needs to do...
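To put rough numbers on that: VACUUM accumulates dead-tuple pointers in
memory and scans every index once per batch, so the pass count is roughly
the dead-tuple count divided by what fits in vacuum_mem. A
back-of-the-envelope sketch (assumptions: about 6 bytes of memory per
dead-tuple pointer, and the tuple counts below are invented):

```python
import math

BYTES_PER_DEAD_TUPLE = 6  # assumption: rough cost of one tuple pointer

def index_passes(dead_tuples, vacuum_mem_kb):
    """Estimate how many times VACUUM must scan each index: one pass
    per batch of dead-tuple pointers that fits in vacuum_mem."""
    capacity = (vacuum_mem_kb * 1024) // BYTES_PER_DEAD_TUPLE
    return max(1, math.ceil(dead_tuples / capacity))

# 50 million dead tuples with an 8 MB vacuum_mem: dozens of index passes.
print(index_passes(50_000_000, 8192))     # 36
# The same table with 256 MB: only a couple of passes.
print(index_passes(50_000_000, 262144))   # 2
```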

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] recovery from xid wraparound

2006-10-24 Thread Tom Lane
Shane Wright [EMAIL PROTECTED] writes:
 If I change vacuum_mem I'll need to at least 'pg_ctl reload' - will it apply 
 straightaway with the next vacuum query or does it need a full restart?

reload is enough.

 Basically if it's just datfrozenxid that's not updated I can live with 
 delaying the vacuum a few days.  But if things are more serious then 
 obviously I can't wait.

The question is how close to the wraparound horizon is any of your data.
We don't really know that --- the datfrozenxid provides a lower bound
but we don't know where things are in reality.  Also, are you prepared
to tolerate wrong answers (missing rows) for awhile?  As Martijn
mentioned, the vacuum will retrieve rows that have slid past the wrap
horizon, but they'll have been invisible to your queries meanwhile.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Google Co-op

2006-10-24 Thread Weslee Bilodeau
Thought this might be useful for others.

I'm commonly doing searches against the documentation, mailing lists,
pgforge/gborg, etc. almost daily for PostgreSQL specific items.

Google is nice for this, but you have to tweak your site: search to get
just what you want. There is also the helpful pgsql.ru.

Found Google Co-op and created a custom search engine much the same as
pgsql.ru.

http://www.google.com/coop/cse?cx=004235346677000918633%3A3pxavm78ixo

Searches only PostgreSQL sites.

It's also easy to create your own CSE and add/remove sites in the search
as you like (takes all of about 5 minutes).

Weslee



Re: [GENERAL] benchmark suite

2006-10-24 Thread Ray Stell

I find the following comment in the dbt2 users guide: The test kit
currently only supports SAP DB but work is currently being done to
support PostgresSQL.  The source tree of dbt2-0.39 has the file
./README-POSTGRESQL.  Is this the entry-point doc that a postgresql
user should start with to begin to use this product?

pgbench-1.1 seems to work with some tweaks to the Makefile.  Can anyone
attest to its value?



On Fri, Sep 29, 2006 at 09:54:09AM -0400, Andrew Sullivan wrote:
 On Fri, Sep 29, 2006 at 07:27:49PM +0530, km wrote:
  Is there any good benchmark suite for testing postgresql performance?
 
 I suggest looking at the excellent software provided by OSDL.  
 
 http://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_suite/



[GENERAL] 8.2beta1 installation fails

2006-10-24 Thread Andrus
I ran the MSI installation package downloaded from postgresql.org from a
remote desktop.
I selected the Estonian locale and UTF-8 database encoding.

After that I got error

Failed to run initdb: !128
Please see the logfile in 'C:\program
Files\PostgreSQL\8.2beta1\tmp\initdb.log'.
Note! You must read/copy this logfile before you click OK, or it will be
automatically removed.

(This message is handwritten and may contain some typos since Ctrl+C does
not work in the error message box.)

I left the message box on screen and looked into the C:\Program
Files\PostgreSQL\8.2beta1\tmp directory.
This directory was empty.

How do I install 8.2 from the MSI file with cluster initialization?


Andrus.

Environment: Windows 2003 Enterprise server build 3790 (service pack 1)






[GENERAL] Call for Donations

2006-10-24 Thread Joshua D. Drake
Hello,

You can read a lot of stuff below, or you can just donate:

http://www.postgresql.org/about/donate

As some of you know (most of you won't) I am the PostgreSQL SPI Liaison.
What does that mean? Well you can read more about that here:

http://fundraising.postgresql.org/

I am writing today because we need to start a fund raising drive for a
few upcoming expenses. This is not an exhaustive list. It is just what
is on *my* immediate radar. This email is *not* about fund raising
ideas. If you wish to present those, please start a new thread. Ideas
are welcome.

Here are some specific expenses I would like to see us raise some money for:

8.2 CDs: When 8.2 hits, I want to make sure that we have a new batch for
all upcoming shows.

Talks: We have many people starting to do PostgreSQL specific talks. I
would like to sponsor these talks as we are able. One of the conditions
of being sponsored for a talk, is that your presentation material be
made available to the community.

Advocacy material: Currently we don't have any dedicated advocacy
material for PostgreSQL booths, which leaves us at the mercy of donors
for every show. While some of these items have been quite good, we are
not able to present a consistent, high level of quality for our
handouts. I would like to have some of this material created and printed
specifically for the community (without corporate advertisements).

Signs: We need a new banner for the shows. The one that CMD printed up
might make it through USENIX LISA but isn't going to make it much
farther. The one that JoshB has has begun to yellow.

Shirts: I would like to start printing shirts for shows. The idea would
be to take initial donations to print the shirt (high quality) and then
give the shirts to people who donate more than -x- money at a show.

Pins. We need to run a batch of pins.

So if you would like to help PostgreSQL financially in its further
advocacy efforts, please donate:

http://www.postgresql.org/about/donate


Recent Successes:
The PostgreSQL Anniversary Summit.
Robert Treat at PHP/Works
Chris Browne at LinuxFest Ohio
8.1 CDs

Upcoming Successes:
USENIX Lisa
David Fetter, Conisli, São Paulo, Brazil (keynote)
David Fetter, FOSS.IN/2006, Bangalore, India
Linux.Conf.Au/PostgreSQL (Still working on this with Gavin Sherry)
OSBC 07
OSCON 07
LinuxWorld West 07

Sincerely,

Joshua D. Drake


-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




Re: [GENERAL] 8.2beta1 installation fails

2006-10-24 Thread Magnus Hagander
 I ran the MSI installation package downloaded from postgresql.org 
 from a remote desktop.
 I selected the Estonian locale and UTF-8 database encoding.
 
 After that I got error
 
 Failed to run initdb: !128
 Please see the logfile in 'C:\program
 Files\PostgreSQL\8.2beta1\tmp\initdb.log'.
 Note! You must read/copy this logfile before you click OK, or 
 it will be automatically removed.
 
 (This message is handwritten and may contain some typos 
 since Ctrl+C does not work in the error message box.)
 
 I left the message box on screen and looked into the 
 C:\Program Files\PostgreSQL\8.2beta1\tmp directory.
 This directory was empty.
 
 How to install 8.2 from msi file with cluster initialization ?

When this happens, can you run initdb manually? I mean while the error
message is up there? Just to show where the problem is.

Also, does it work if you pick a different locale/encoding? (it really
shouldn't make this kind of error, but worth checking)

//Magnus



Re: [GENERAL] How to get joins to work

2006-10-24 Thread Martijn van Oosterhout
On Tue, Oct 24, 2006 at 02:43:07PM -0700, Bill Ewing wrote:
 I am having trouble getting joins to work.  In a Java app that uses Hibernate 
 3.1, I am able to build queries that join two, three or more tables using 
 combinations of INNER JOIN,  LEFT JOIN or RIGHT JOIN.  But, I need FULL OUTER 
 JOIN to work and have not been able to get them to work in Hibernate.
 
 So I decided to go back to basics and practice trial joins in the PgAdminIII 
 Query tool (v1.4.1, Dec 05).
 
 Just to warm up, I did the following simple queries which all worked:
   select * FROM rack r 
   select * FROM sample s
 
 The above two tables are linked.  But, none of the following SQL worked:
   select * FROM rack r JOIN sample s
   select * FROM rack r INNER JOIN sample s

These statements are incomplete. You need to say what you're joining
on. For example:

select * FROM rack r JOIN sample s USING (joinfield)

or 
select * FROM rack r JOIN sample s ON (r.a = s.b);

If you really don't want any constraints, use a comma, or a cross join
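The same rule holds for the FULL OUTER JOIN the original question was
after; it also needs an ON or USING clause. A sketch (the join columns
r.a and s.b are placeholders, as above):

```sql
-- Matched rows from both tables, plus unmatched rows from either side
-- padded with NULLs:
SELECT * FROM rack r FULL OUTER JOIN sample s ON (r.a = s.b);
```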

Have a nice day,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 From each according to his ability. To each according to his ability to 
 litigate.




Re: [GENERAL] benchmark suite

2006-10-24 Thread Andrew Sullivan
On Tue, Oct 24, 2006 at 12:57:08PM -0400, Ray Stell wrote:
 
 I find the following comment in dbt2 users guide: The test kit
 currently only supports SAP DB but work is currently being done to
 support PostgresSQL.  In the source tree of dbt2-0.39 has the file
 ./README-POSTGRESQL.  Is this the entry point doc that a postgresql 
 user should start with to begin to use this product?

I'm pretty sure that the docs are what's out of date here, if for
no other reason than that this is in the release notes:

v0.38

- Various fixes with all scripts.
- Improved configure.ac to use pg_config for PostgreSQL.
- Updated PostgreSQL scripts to work with PostgreSQL 8.1 and newer.
- Updated PostgreSQL stored functions for 8.1 and newer.

I'd give it a whirl and see what happens; I haven't tried that one in
a while.  People who know way more about it than I do are around
here, though, and can probably give you a better answer.

A
-- 
Andrew Sullivan  | [EMAIL PROTECTED]
I remember when computers were frustrating because they *did* exactly what 
you told them to.  That actually seems sort of quaint now.
--J.D. Baldwin



Re: [GENERAL] [pgsql-es-ayuda] identificar usuario con registro en una tabla

2006-10-24 Thread Alejandro D. Burne
2006/10/24, Milton Galo Patricio [EMAIL PROTECTED]:
 I've searched the documentation (partially) and couldn't find a statement
 or command that returns the name of the user who performed an insert on a
 given table (historically). Let me explain:
 1.- At the moment a row is inserted, I know I can tell who is doing it
 (through a trigger, or simply by looking at the active session).
 2.- But when I run a SELECT on a table, how can I tell which users
 inserted each of the rows?
 I want to know this in order to run historical queries. What I've been
 thinking is to write a trigger that fires after the row has been inserted
 and stores the user name in that same row, but I'd like to know whether it
 can be done the way I describe in point two: something like Linux files,
 which are created carrying their owner's identification.
 I hope I've explained myself. Regards,
 --
 Milton Inostroza Aguilera

One alternative is to add a char or varchar column with a default value of
USER for inserts, plus a trigger for updates; optionally, add another
datetime column to record the date and time of modifications.

Alejandro
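That alternative could be sketched like this (table, column, and function
names are all hypothetical; the string-quoted function body is the
pre-8.0 plpgsql style):

```sql
-- Record the inserting user and time via column defaults ...
ALTER TABLE mytable ADD COLUMN modified_by text DEFAULT current_user;
ALTER TABLE mytable ADD COLUMN modified_at timestamp DEFAULT now();

-- ... and re-stamp both columns on every update via a trigger:
CREATE FUNCTION stamp_modification() RETURNS trigger AS '
BEGIN
    NEW.modified_by := current_user;
    NEW.modified_at := now();
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER mytable_stamp BEFORE UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE stamp_modification();
```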


Re: [GENERAL] [pgsql-advocacy] Call for Donations

2006-10-24 Thread Josh Berkus
Community,

Please note that we are also planning on having a donor listing somewhere 
on the postgresql.org web site Real Soon Now.  We're just being held up at 
this point by the necessity of developing the technology and donor 
accounting.

-- 
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco



Re: [GENERAL] [pgsql-advocacy] Call for Donations

2006-10-24 Thread Joshua D. Drake
Josh Berkus wrote:
 Community,
 
 Please note that we are also planning on having a donor listing somewhere 
 on the postgresql.org web site Real Soon Now.  We're just being held up at 
 this point by the necessity of developing the technology and donor 
 accounting.

Good point Josh! :). Yes we will be making all appropriate public thanks
as soon as we are able.

Sincerely,

Joshua D. Drake




-- 

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate




Re: [GENERAL] How to get joins to work

2006-10-24 Thread Michael Glaesemann


On Oct 25, 2006, at 6:43, Bill Ewing wrote:

 The above two tables are linked.  But, none of the following SQL worked:

   select * FROM rack r JOIN sample s
   select * FROM rack r INNER JOIN sample s

 In each case I get a message ERROR: syntax error at end of input
 at character X, where X is the last character in the statement.

 What am I doing wrong?


Unless you're using NATURAL JOIN, you need to specify the join
condition using a USING or ON clause, e.g.,


SELECT *
FROM rack r
JOIN sample s USING (rack_id)

or

SELECT *
FROM rack r
JOIN sample s ON (r.rack_id = s.rack_id)

That should do it.

Michael Glaesemann
grzm seespotcode net





Re: [GENERAL] SQL injection in a ~ or LIKE statement

2006-10-24 Thread Harald Armin Massa
 psycopg2 supports parameters which are escaped properly.

adding: Judging from the mails of Frederico, developer of psycopg2, he was
also in the early notify circle of the 8.1.3/8.1.4 escaping improvement.

So, if done correctly the DB API way, all escaping with psycopg2 is fine.

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Reinsburgstraße 202b
70197 Stuttgart
0173/9409607
-
Python: the only language with more web frameworks than keywords.