Re: [GENERAL] Load Image File From PostgreSQL DB

2008-12-29 Thread Julius Tuskenis

Hi, dbalinglung.

You are probably confusing the oid type with lo or bytea. There's no way
you could put an image into an oid column. I'm using the bytea type without
problems. Also check the "bytea as lo" checkbox in the ODBC dialog (if you
use ODBC). Note also that TDBImage is no good if you want to use jpg files;
I found an alternative - EDBImage.
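A minimal sketch of the bytea approach described above (the table and column names are hypothetical):

```sql
-- Hypothetical table: store the raw image bytes in a bytea column
-- instead of an oid that points at a large object.
CREATE TABLE pictures (
    id    serial PRIMARY KEY,
    name  text,
    data  bytea   -- map this to a blob field on the Delphi side
);
```

With "bytea as lo" enabled in the ODBC driver, the Delphi data-access layer should see the column as an ordinary binary blob it can stream into an image component.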


Hope that helps. Good luck.

Julius Tuskenis



dbalinglung wrote:

Dear Expert,
 
 
I'm sorry for my newbie question. I'm having difficulty loading an image
file from an OID-type field in PostgreSQL with Borland Delphi 7.
 
I would like to show the image file in a TDBImage component on a Delphi
form, but the image won't display. Any suggestions on how I can show the
image file with Borland Delphi 7 and PostgreSQL 8.2.x? The image file is
saved in a PostgreSQL 8.2.x table in an OID-type field.
 
 
Many thanks and Regards,
 
 
*Alam Surya*
 


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Load Image File From PostgreSQL DB

2008-12-29 Thread dbalinglung
Dear Expert,


I'm sorry for my newbie question. I'm having difficulty loading an image file
from an OID-type field in PostgreSQL with Borland Delphi 7.

I would like to show the image file in a TDBImage component on a Delphi form,
but the image won't display. Any suggestions on how I can show the image file
with Borland Delphi 7 and PostgreSQL 8.2.x? The image file is saved in a
PostgreSQL 8.2.x table in an OID-type field.


Many thanks and Regards,


Alam Surya


Re: [GENERAL] Query planner and foreign key constraints

2008-12-29 Thread Filip Rembiałkowski
2008/12/30 Christian Schröder 

> Hi list,
> in our PostgreSQL 8.2.9 database I have these tables:
>
>   create table table1 (
>   key1 char(12),
>   key2 integer,
>   primary key (key1, key2)
>   );
>
>   create table table2 (
>   key1 char(12),
>   key2 integer,
>   key3 varchar(20),
>   primary key (key1, key2, key3),
>   foreign key (key1, key2) references table1 (key1, key2)
>   );
>
> Table1 has 896576 rows. Table2 has 1630788 rows. The statistics target of
> the columns key1 and key2 in both tables has been set to 1000. Both tables
> have been analyzed.
> When I join both tables using key1 and key2 there will be exactly 1630788
> rows because for each row in table2 there *must* exist a row in table1. But
> the query planner doesn't think so:
>
> # explain analyze select * from table1 inner join table2 using (key1,
> key2);
> QUERY PLAN
>
> --
> Merge Join  (cost=0.00..94916.58 rows=39560 width=44) (actual
> time=0.103..7105.960 rows=1630788 loops=1)
>  Merge Cond: ((table1.key1 = table2.key1) AND (table1.key2 = table2.key2))
>  ->  Index Scan using table1_pkey on table1  (cost=0.00..22677.65
> rows=896576 width=20) (actual time=0.038..689.059 rows=896576 loops=1)
>  ->  Index Scan using table2_pkey on table2  (cost=0.00..59213.16
> rows=1630788 width=44) (actual time=0.049..1108.220 rows=1630788 loops=1)
> Total runtime: 7525.492 ms
> (5 rows)
>
> You can also find the query plan at
> http://explain-analyze.info/query_plans/2648-query-plan-1371.
>
> What can I do to make the query planner realize that the join will have
> 1630788 rows? This join is part of a view which I then use in other joins
> and this wrong assumption leads to really bad performance.
>

Just a guess, but did you try declaring NOT NULL on the FK columns?

Your assumption that "for each row in table2 there *must* exist a row in
table1" would then be enforced.

Maybe the planner will make use of this ...
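Concretely, the suggestion amounts to something like this (a sketch against the tables from the quoted post; whether the planner actually exploits it is, as noted, just a guess):

```sql
-- Tell the planner there can be no NULLs in the FK columns, so every
-- table2 row is guaranteed to find its partner row in table1.
ALTER TABLE table2 ALTER COLUMN key1 SET NOT NULL;
ALTER TABLE table2 ALTER COLUMN key2 SET NOT NULL;

-- Refresh statistics afterwards so the planner sees the change.
ANALYZE table2;
```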


-- 
Filip Rembiałkowski


[GENERAL] Query planner and foreign key constraints

2008-12-29 Thread Christian Schröder

Hi list,
in our PostgreSQL 8.2.9 database I have these tables:

   create table table1 (
   key1 char(12),
   key2 integer,
   primary key (key1, key2)
   );

   create table table2 (
   key1 char(12),
   key2 integer,
   key3 varchar(20),
   primary key (key1, key2, key3),
   foreign key (key1, key2) references table1 (key1, key2)
   );

Table1 has 896576 rows. Table2 has 1630788 rows. The statistics target 
of the columns key1 and key2 in both tables has been set to 1000. Both 
tables have been analyzed.
When I join both tables using key1 and key2 there will be exactly 
1630788 rows because for each row in table2 there *must* exist a row in 
table1. But the query planner doesn't think so:


# explain analyze select * from table1 inner join table2 using (key1, key2);
 QUERY PLAN
--
Merge Join  (cost=0.00..94916.58 rows=39560 width=44) (actual 
time=0.103..7105.960 rows=1630788 loops=1)

  Merge Cond: ((table1.key1 = table2.key1) AND (table1.key2 = table2.key2))
  ->  Index Scan using table1_pkey on table1  (cost=0.00..22677.65 
rows=896576 width=20) (actual time=0.038..689.059 rows=896576 loops=1)
  ->  Index Scan using table2_pkey on table2  (cost=0.00..59213.16 
rows=1630788 width=44) (actual time=0.049..1108.220 rows=1630788 loops=1)

Total runtime: 7525.492 ms
(5 rows)

You can also find the query plan at 
http://explain-analyze.info/query_plans/2648-query-plan-1371.


What can I do to make the query planner realize that the join will have 
1630788 rows? This join is part of a view which I then use in other 
joins and this wrong assumption leads to really bad performance.


Regards,
   Christian

--
Deriva GmbH Tel.: +49 551 489500-42
Financial IT and Consulting Fax:  +49 551 489500-91
Hans-Böckler-Straße 2  http://www.deriva.de
D-37079 Göttingen

Deriva CA Certificate: http://www.deriva.de/deriva-ca.cer





Re: [GENERAL] How can the error log be searched?

2008-12-29 Thread Adrian Klaver
On Monday 29 December 2008 11:28:13 am Rob Richardson wrote:
> Greetings!
>
> A customer reported an error generated by our database.  However, many
> of our functions have errors in RAISE EXCEPTION statements, in which the
> % character was omitted.  The error message the customer saw was
> complaining that there were too many arguments to a RAISE statement.  We
> were able to find the function that was throwing the error by using
> PGAdmin, opening the Server Status window, going to the Log File tab,
> and finding approximately when the error occurred.
>
> But it would have been easier to find the problem if we could have
> searched the logs.  There's no search function in the log file window.
> Also, there's no apparent way to export the file.  Also, I can't even
> use copy and paste.  So, how can I export the error messages into a
> searchable text file?
>
> Thanks very much!
>
> RobR

I don't use PgAdmin so I can't help you with that part. If you want to search 
the file, use $PGDATA/postgresql.conf to find the log directory and read the 
file directly with the text editor of your choice.
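For example, once the log directory is known, plain grep does the job; the path and log lines below are made-up samples standing in for a real $PGDATA/pg_log/postgresql-*.log:

```shell
# A tiny sample log, standing in for the server's real log file
cat > /tmp/postgresql-sample.log <<'EOF'
2008-12-29 10:00:01 LOG:  database system is ready
2008-12-29 10:05:42 ERROR:  too many parameters specified for RAISE
2008-12-29 10:06:13 STATEMENT:  SELECT recalc_weight(1234)
EOF

# -n prints the line number along with each matching line
grep -n 'RAISE' /tmp/postgresql-sample.log
```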

-- 
Adrian Klaver
akla...@comcast.net



[GENERAL] How can the error log be searched?

2008-12-29 Thread Rob Richardson
Greetings!

A customer reported an error generated by our database.  However, many
of our functions have errors in RAISE EXCEPTION statements, in which the
% character was omitted.  The error message the customer saw was
complaining that there were too many arguments to a RAISE statement.  We
were able to find the function that was throwing the error by using
PGAdmin, opening the Server Status window, going to the Log File tab,
and finding approximately when the error occurred.  

But it would have been easier to find the problem if we could have
searched the logs.  There's no search function in the log file window.
Also, there's no apparent way to export the file.  Also, I can't even
use copy and paste.  So, how can I export the error messages into a
searchable text file?

Thanks very much!

RobR



Re: [GENERAL] open transaction?

2008-12-29 Thread Bayless Kirtley


- Original Message - 
From: "Terry Lee Tucker" 

To: 
Sent: Monday, December 29, 2008 5:57 AM
Subject: Re: [GENERAL] open transaction?



On Monday 29 December 2008 06:48, blackwater dev wrote:

I just logged into postgres from the command line and did:

begin:

select blah;
select blah;

\q

Without thinking I closed my connection before committing or rolling back
my transaction.  Did postgres handle this for me?  How do I see if the
transaction is still open?

Thanks!


If you don't commit, it is rolled back when you exit.

--
Terry Lee Tucker
Turbo's IT Manager
Turbo, division of OHL
2251 Jesse Jewell Pkwy
Gainesville, GA 30501
tel: (336) 372-6821 cell: (336) 404-6987
te...@turbocorp.com
www.turbocorp.com



But if all you did was SELECT, as you indicated, then no transaction
should have been started and it makes no difference.

Bayless




Re: [GENERAL] [PGSQL 8.3.5] Use of a partial indexes

2008-12-29 Thread Scott Marlowe
On Mon, Dec 29, 2008 at 9:28 AM, Scott Ribe  wrote:
>> voila!  cache dumped.
>
> What about read caches in the disk devices themselves?

Given that most drives have caches in the 16 to 32MB range, I doubt it
makes a big difference.  But you can always just dd a 100MB-or-so file
both ways to flush them out.
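The dd trick can be sketched like this (the file size and path are arbitrary; on a real box you'd want a file comfortably larger than the drive cache):

```shell
# Write a ~100MB scratch file, then read it back twice; dd reports
# throughput on stderr, and the second read, served from the OS page
# cache, will show a much higher rate.
dd if=/dev/zero of=/tmp/cachetest bs=1M count=100 2>/dev/null
dd if=/tmp/cachetest of=/dev/null bs=1M    # colder read
dd if=/tmp/cachetest of=/dev/null bs=1M    # cached read
```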



Re: [GENERAL] [PGSQL 8.3.5] Use of a partial indexes

2008-12-29 Thread Scott Ribe
> voila!  cache dumped.

What about read caches in the disk devices themselves?

-- 
Scott Ribe
scott_r...@killerbytes.com
http://www.killerbytes.com/
(303) 722-0567 voice





Re: [GENERAL] [PGSQL 8.3.5] Use of a partial indexes

2008-12-29 Thread Scott Marlowe
On Mon, Dec 29, 2008 at 8:36 AM, Scott Ribe  wrote:
> Creating the partial index reads rows, and the pages are left in the disk
> cache. The only way to do proper comparisons is to reboot between trials in
> order to compare queries with cold caches, or use the latter of multiple
> runs in order to compare queries with hot caches.

There are two other ways, one is to unmount and remount the partition
on which pgsql is running.  On many db servers this is possible
because pgsql gets its own disk array / mount point.  The other is to
use drop caches:

smarl...@abasin:/home/smarlowe$ cd /proc/sys/vm
smarl...@abasin:/proc/sys/vm$ free
             total       used       free     shared    buffers     cached
Mem:       4016300    2257688    1758612          0     152060    1106400
-/+ buffers/cache:      999228    3017072
Swap:      1068280      45712    1022568
smarl...@abasin:/proc/sys/vm$ echo 1|sudo tee drop_caches
1
smarl...@abasin:/proc/sys/vm$ free
             total       used       free     shared    buffers     cached
Mem:       4016300    1046788    2969512          0        188      94628
-/+ buffers/cache:      951972    3064328
Swap:      1068280      45712    1022568

voila!  cache dumped.



Re: [GENERAL] [PGSQL 8.3.5] Use of a partial indexes

2008-12-29 Thread Scott Ribe
> The WHERE condition can be divided into a "slowly changing" part and in
> a "random" one. The random part is the one I change at every query to avoid
> result caching.

The first query will leave in cache at least many of the index pages needed
by the second query, and likely actual rows needed by the second query.

> Disk cache could explain the thing, but then why I got the high performances
> after the partial index has been created? By chance?

Creating the partial index reads rows, and the pages are left in the disk
cache. The only way to do proper comparisons is to reboot between trials in
order to compare queries with cold caches, or use the latter of multiple
runs in order to compare queries with hot caches.

-- 
Scott Ribe
scott_r...@killerbytes.com
http://www.killerbytes.com/
(303) 722-0567 voice





Re: [GENERAL] [PGSQL 8.3.5] Use of a partial indexes

2008-12-29 Thread Scott Marlowe
On Mon, Dec 29, 2008 at 7:41 AM, Reg Me Please  wrote:
> Hi.
>
> The WHERE condition can be divided into a "slowly changing" part and
> a "random" one. The random part is the one I change for every query to avoid
> result caching.
>
> The planner seems to be smart enough to "learn" while working but then
> I should see a change in the EXPLAIN output, which never happens.
>
> I also tried to restart PostgreSQL in order to force a cache flush, but
> again, once the new performance is in, it doesn't go away!

I'm guessing that what happened is that when you created the partial
index, pgsql read the whole table in, and the OS cached it.
Subsequent accesses hit either pgsql's shared_buffers or the OS cache.

You could try unmounting and remounting the partition in addition to
restarting pgsql and see if that helps, or for sure, reboot.

Also, for the partial index to be chosen, it has to match the where clause
pretty much exactly.
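As an illustration of that last point (a hypothetical table and index, not from the original post): the query's WHERE clause must imply the index predicate for the planner to consider the partial index at all.

```sql
-- Hypothetical partial index: only rows with status = 'open' are indexed
CREATE INDEX orders_open_idx ON orders (customer_id)
    WHERE status = 'open';

-- Can use the partial index (predicate implied by the WHERE clause):
--   SELECT * FROM orders WHERE status = 'open' AND customer_id = 42;
-- Cannot use it (predicate not implied):
--   SELECT * FROM orders WHERE customer_id = 42;
```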



Re: [GENERAL] [PGSQL 8.3.5] Use of a partial indexes

2008-12-29 Thread Reg Me Please
Hi.

The WHERE condition can be divided into a "slowly changing" part and
a "random" one. The random part is the one I change for every query to avoid
result caching.

The planner seems to be smart enough to "learn" while working but then
I should see a change in the EXPLAIN output, which never happens.

I also tried to restart PostgreSQL in order to force a cache flush, but
again, once the new performance is in, it doesn't go away!

Disk cache could explain it, but then why did I get the high performance
right after the partial index was created? By chance?

On Monday December 29 2008 15:24:33 Gauthier, Dave wrote:
> Not sure if this applies to your case, but I've seen cases where an initial
> run of a particular query is a lot slower than subsequent runs even though
> no changes were made between the two.  I suspect that the initial run did
> all the disk IO needed to get the data (slow), and that the subsequent runs
> were just reading the data out of memory (fast) as it was left over in the
> PG data buffer cache, the server's caches, the disk server's cache, etc...
> .
>
> Try the same query only with different search criteria.  IOW, force it to
> go back out to disk. You may find that the slow performance returns.
>
> Good Luck !
>
> -dave
>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Reg Me Please
> Sent: Monday, December 29, 2008 9:09 AM
> To: pgsql-general@postgresql.org
> Subject: [GENERAL] [PGSQL 8.3.5] Use of a partial indexes
>
> Hi all.
>
> I have an 8M+ row table over which I run a query with an AND-only WHERE
> condition.
> The table has been periodically VACUUMed and ANALYZEd.
> In an attempt to speed that up I added a partial index in order to
> limit the size of the index. Of course that index is modeled after a
> "slowly variable" part of the WHERE condition I have in my query.
>
> And timings actually dropped dramatically (I do know the problems with
> caching etc. and I paid attention to that) to about 1/20th (from about
> 800ms to average 40ms, actually).
> So I turned to EXPLAIN to see how the partial index was used.
> Incredibly, the partial index was not used!
> So I tried to drop the new index and, incredibly, the performance was
> still very good.
>
> While I can understand that the planner can decide not to use a partial
> index (although in my mind it'd make a lot of sense), I'd like to understand
> how it is that I get benefits from an enhancement that is not used!
> What'd be the explanation (if any) for this behavior?
>
> Thanks.
>
> --
> Fahrbahn ist ein graues Band
> weisse Streifen, grüner Rand



-- 
Fahrbahn ist ein graues Band
weisse Streifen, grüner Rand



Re: [GENERAL] [PGSQL 8.3.5] Use of a partial indexes

2008-12-29 Thread Gauthier, Dave
Not sure if this applies to your case, but I've seen cases where an initial run 
of a particular query is a lot slower than subsequent runs even though no 
changes were made between the two.  I suspect that the initial run did all the 
disk IO needed to get the data (slow), and that the subsequent runs were just 
reading the data out of memory (fast) as it was left over in the PG data buffer 
cache, the server's caches, the disk server's cache, etc... .

Try the same query only with different search criteria.  IOW, force it to go 
back out to disk. You may find that the slow performance returns.

Good Luck !

-dave  

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Reg Me Please
Sent: Monday, December 29, 2008 9:09 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] [PGSQL 8.3.5] Use of a partial indexes

Hi all.

I have an 8M+ row table over which I run a query with an AND-only WHERE
condition.
The table has been periodically VACUUMed and ANALYZEd.
In an attempt to speed that up I added a partial index in order to limit
the size of the index. Of course that index is modeled after a "slowly
variable" part of the WHERE condition I have in my query.

And timings actually dropped dramatically (I do know the problems with caching
etc. and I paid attention to that) to about 1/20th (from about 800ms to 
average 40ms, actually).
So I turned to EXPLAIN to see how the partial index was used.
Incredibly, the partial index was not used!
So I tried to drop the new index and, incredibly, the performance was still
very good.

While I can understand that the planner can decide not to use a partial index
(although in my mind it'd make a lot of sense), I'd like to understand how it
is that I get benefits from an enhancement that is not used!
What'd be the explanation (if any) for this behavior?

Thanks.

-- 
Fahrbahn ist ein graues Band
weisse Streifen, grüner Rand



[GENERAL] [PGSQL 8.3.5] Use of a partial indexes

2008-12-29 Thread Reg Me Please
Hi all.

I have an 8M+ row table over which I run a query with an AND-only WHERE
condition.
The table has been periodically VACUUMed and ANALYZEd.
In an attempt to speed that up I added a partial index in order to limit
the size of the index. Of course that index is modeled after a "slowly
variable" part of the WHERE condition I have in my query.

And timings actually dropped dramatically (I do know the problems with caching
etc. and I paid attention to that) to about 1/20th (from about 800ms to 
average 40ms, actually).
So I turned to EXPLAIN to see how the partial index was used.
Incredibly, the partial index was not used!
So I tried to drop the new index and, incredibly, the performance was still
very good.

While I can understand that the planner can decide not to use a partial index
(although in my mind it'd make a lot of sense), I'd like to understand how it
is that I get benefits from an enhancement that is not used!
What'd be the explanation (if any) for this behavior?

Thanks.

-- 
Fahrbahn ist ein graues Band
weisse Streifen, grüner Rand



Re: [GENERAL] open transaction?

2008-12-29 Thread Terry Lee Tucker
On Monday 29 December 2008 06:48, blackwater dev wrote:
> I just logged into postgres from the command line and did:
>
> begin:
>
> select blah;
> select blah;
>
> \q
>
> Without thinking I closed my connection before committing or rolling back
> my transaction.  Did postgres handle this for me?  How do I see if the
> transaction is still open?
>
> Thanks!

If you don't commit, it is rolled back when you exit.

-- 
Terry Lee Tucker
Turbo's IT Manager
Turbo, division of OHL
2251 Jesse Jewell Pkwy
Gainesville, GA 30501
tel: (336) 372-6821 cell: (336) 404-6987
te...@turbocorp.com
www.turbocorp.com



Re: [GENERAL] open transaction?

2008-12-29 Thread Pavel Stehule
Hello

maybe you have autocommit on

try

[pa...@localhost ~]$ psql postgres
psql (8.4devel)
Type "help" for help.

postgres=# \set
AUTOCOMMIT = 'on'
PROMPT1 = '%/%R%# '
PROMPT2 = '%/%R%# '

regards
Pavel Stehule

2008/12/29 Raymond O'Donnell :
> On 29/12/2008 11:53, Raymond O'Donnell wrote:
>> On 29/12/2008 11:48, blackwater dev wrote:
>>
>>> Without thinking I closed by connection before committing or rolling
>>> back my transaction.  Did postgres handle this for me?  How do I see if
>>> the transaction is still open?
>>
>> I'm reasonably sure that the transaction will get rolled back if the
>> connection dies.
>
> Just tried it, and that seems to be the case:
>
> postgres=# create database test;
> CREATE DATABASE
> postgres=# \c test
> You are now connected to database "test".
> test=# create table t(f1 integer);
> CREATE TABLE
> test=# begin;
> BEGIN
> test=# insert into t values(1);
> INSERT 0 1
> test=# insert into t values(2);
> INSERT 0 1
> test=# \q
> r...@teladesign:~$ psql -U postgres test
> Password for user postgres:
> Welcome to psql 8.3.5, the PostgreSQL interactive terminal.
>
> Type:  \copyright for distribution terms
>   \h for help with SQL commands
>   \? for help with psql commands
>   \g or terminate with semicolon to execute query
>   \q to quit
>
> test=# select * from t;
>  f1
> ----
> (0 rows)
>
>
>
> Ray.
>
> --
> Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
> r...@iol.ie
> Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
> --
>



Re: [GENERAL] encoding of PostgreSQL messages

2008-12-29 Thread Alvaro Herrera
Karsten Hilbert wrote:
> On Tue, Dec 23, 2008 at 06:45:17PM -0300, Alvaro Herrera wrote:

> > Hmm, isn't client_encoding reported in the startup packet sent by the
> > server, after auth?
> 
> That would not quite be enough -- I am talking about
> messages reported *during* auth, say
> 
>   FATAL: password authentication failed for user "postgres"
> 
> or
> 
>   fe_sendauth: no password supplied
> 
> both of which, in other locales, may contain non-ASCII characters.

Those are sent in the server encoding IIRC (which admittedly you don't
have a way to know, at that point.)

And I'm now wondering if we should delay initializing the translation
stuff until after client_encoding has been reported.

-- 
Alvaro Herrera                               http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] open transaction?

2008-12-29 Thread Raymond O'Donnell
On 29/12/2008 11:53, Raymond O'Donnell wrote:
> On 29/12/2008 11:48, blackwater dev wrote:
> 
>> Without thinking I closed by connection before committing or rolling
>> back my transaction.  Did postgres handle this for me?  How do I see if
>> the transaction is still open?
> 
> I'm reasonably sure that the transaction will get rolled back if the
> connection dies.

Just tried it, and that seems to be the case:

postgres=# create database test;
CREATE DATABASE
postgres=# \c test
You are now connected to database "test".
test=# create table t(f1 integer);
CREATE TABLE
test=# begin;
BEGIN
test=# insert into t values(1);
INSERT 0 1
test=# insert into t values(2);
INSERT 0 1
test=# \q
r...@teladesign:~$ psql -U postgres test
Password for user postgres:
Welcome to psql 8.3.5, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

test=# select * from t;
 f1
----
(0 rows)



Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



Re: [GENERAL] open transaction?

2008-12-29 Thread Raymond O'Donnell
On 29/12/2008 11:48, blackwater dev wrote:

> Without thinking I closed my connection before committing or rolling
> back my transaction.  Did postgres handle this for me?  How do I see if
> the transaction is still open?

I'm reasonably sure that the transaction will get rolled back if the
connection dies.

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--



[GENERAL] open transaction?

2008-12-29 Thread blackwater dev
I just logged into postgres from the command line and did:

begin:

select blah;
select blah;

\q

Without thinking I closed my connection before committing or rolling back my
transaction.  Did postgres handle this for me?  How do I see if the
transaction is still open?

Thanks!


Re: [GENERAL] having two database clusters?

2008-12-29 Thread Grzegorz Jaśkiewicz
and in different PGDATA dirs.
