Re: [GENERAL] Group by bug?

2012-12-27 Thread wd
Oh, I see, thanks for your quick reply.


On Fri, Dec 28, 2012 at 3:47 PM, Jov  wrote:

>
>
> 2012/12/28 wd 
>
>> hi,
>>
>> wd_test=# \d t1
>>  Table "public.t1"
>>  Column |  Type   |Modifiers
>> +-+-
>>  id | integer | not null default nextval('t1_id_seq'::regclass)
>>  tag| text|
>>
>> wd_test=# select * from t1;
>>  id | tag
>> +-
>>   1 | a
>>   2 | a
>>   3 | b
>>   4 | c
>>   5 | b
>> (5 rows)
>>
>> This SQL does not group by the CASE result.
>> wd_test=# select case t1.tag when 'a' then '1' else '0' end as tag,
>> count(*) from t1 group by tag;
>>
>
> Here the GROUP BY key tag is t1.tag, not the tag in the select list.
>
>
>>  tag | count
>> -+---
>>  0   | 1
>>  0   | 2
>>  1   | 2
>> (3 rows)
>>
>> This SQL does group by the CASE result.
>> wd_test=# select case t1.tag when 'a' then '1' else '0' end as ttag,
>> count(*) from t1 group by ttag;
>>
> Here ttag is the ttag from the select list; it is equivalent to GROUP BY 1.
>
>
>>   ttag | count
>> --+---
>>  0| 3
>>  1| 2
>> (2 rows)
>>
>
> http://www.postgresql.org/docs/9.2/static/sql-select.html#SQL-GROUPBY
>
>
>> GROUP BY will condense into a single row all selected rows that share
>> the same values for the grouped expressions. expression can be an input
>> column name, or the name or ordinal number of an output column (SELECT list
>> item), or an arbitrary expression formed from input-column values. *In
>> case of ambiguity, a GROUP BY name will be interpreted as an
>> input-column name rather than an output column name.*
>
>
> so it is not a bug.
>


Re: [GENERAL] Group by bug?

2012-12-27 Thread Jov
2012/12/28 wd 

> hi,
>
> wd_test=# \d t1
>  Table "public.t1"
>  Column |  Type   |Modifiers
> +-+-
>  id | integer | not null default nextval('t1_id_seq'::regclass)
>  tag| text|
>
> wd_test=# select * from t1;
>  id | tag
> +-
>   1 | a
>   2 | a
>   3 | b
>   4 | c
>   5 | b
> (5 rows)
>
> This SQL does not group by the CASE result.
> wd_test=# select case t1.tag when 'a' then '1' else '0' end as tag,
> count(*) from t1 group by tag;
>

Here the GROUP BY key tag is t1.tag, not the tag in the select list.


>  tag | count
> -+---
>  0   | 1
>  0   | 2
>  1   | 2
> (3 rows)
>
> This SQL does group by the CASE result.
> wd_test=# select case t1.tag when 'a' then '1' else '0' end as ttag,
> count(*) from t1 group by ttag;
>
Here ttag is the ttag from the select list; it is equivalent to GROUP BY 1.


>   ttag | count
> --+---
>  0| 3
>  1| 2
> (2 rows)
>

http://www.postgresql.org/docs/9.2/static/sql-select.html#SQL-GROUPBY


> GROUP BY will condense into a single row all selected rows that share the
> same values for the grouped expressions. expression can be an input
> column name, or the name or ordinal number of an output column (SELECT list
> item), or an arbitrary expression formed from input-column values. *In
> case of ambiguity, a GROUP BY name will be interpreted as an input-column
> name rather than an output column name.*


so it is not a bug.
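To see the unambiguous form in action, here is a minimal sketch using Python's stdlib sqlite3 module as a stand-in for psql (the table and data mirror the thread; GROUP BY 1 refers to the first output column in both engines, so it always groups by the CASE result):

```python
# Sketch: the thread's schema in the stdlib sqlite3 module, showing that
# GROUP BY 1 (the ordinal form) groups by the CASE result regardless of
# whether the alias collides with an input column.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY, tag TEXT)")
cur.executemany("INSERT INTO t1 (tag) VALUES (?)",
                [("a",), ("a",), ("b",), ("c",), ("b",)])

rows = cur.execute(
    "SELECT CASE tag WHEN 'a' THEN '1' ELSE '0' END AS tag, count(*) "
    "FROM t1 GROUP BY 1 ORDER BY 1"
).fetchall()
print(rows)  # [('0', 3), ('1', 2)]
```

In Postgres you get the same effect by repeating the CASE expression in the GROUP BY clause, or, as the second query in the thread does, by choosing an alias (ttag) that does not collide with an input column.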


Re: [GENERAL] Group by bug?

2012-12-27 Thread wd
Sorry, I forgot to say:

 PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6
20110731 (Red Hat 4.4.6-3), 64-bit

 psql (9.2.2)


On Fri, Dec 28, 2012 at 3:24 PM, wd  wrote:

> hi,
>
> wd_test=# \d t1
>  Table "public.t1"
>  Column |  Type   |Modifiers
> +-+-
>  id | integer | not null default nextval('t1_id_seq'::regclass)
>  tag| text|
>
> wd_test=# select * from t1;
>  id | tag
> +-
>   1 | a
>   2 | a
>   3 | b
>   4 | c
>   5 | b
> (5 rows)
>
> This SQL does not group by the CASE result.
> wd_test=# select case t1.tag when 'a' then '1' else '0' end as tag,
> count(*) from t1 group by tag;
>  tag | count
> -+---
>  0   | 1
>  0   | 2
>  1   | 2
> (3 rows)
>
> This SQL does group by the CASE result.
> wd_test=# select case t1.tag when 'a' then '1' else '0' end as ttag,
> count(*) from t1 group by ttag;
>  ttag | count
> --+---
>  0| 3
>  1| 2
> (2 rows)
>


[GENERAL] Group by bug?

2012-12-27 Thread wd
hi,

wd_test=# \d t1
 Table "public.t1"
 Column |  Type   |Modifiers
+-+-
 id | integer | not null default nextval('t1_id_seq'::regclass)
 tag| text|

wd_test=# select * from t1;
 id | tag
+-
  1 | a
  2 | a
  3 | b
  4 | c
  5 | b
(5 rows)

This SQL does not group by the CASE result.
wd_test=# select case t1.tag when 'a' then '1' else '0' end as tag,
count(*) from t1 group by tag;
 tag | count
-+---
 0   | 1
 0   | 2
 1   | 2
(3 rows)

This SQL does group by the CASE result.
wd_test=# select case t1.tag when 'a' then '1' else '0' end as ttag,
count(*) from t1 group by ttag;
 ttag | count
--+---
 0| 3
 1| 2
(2 rows)


Re: [GENERAL] update from a csv file?

2012-12-27 Thread François Beausoleil

Le 2012-12-27 à 09:54, Kirk Wythers a écrit :

> I have been using COPY FROM to do a mass import of records from CSV files
> into a new database. I have discovered, however, a small number of records (a
> few thousand) in one of the files that contain new data that needs to be
> added to the database, but on rows that have a primary key and have already
> been inserted (so I can't use COPY FROM, because it violates the primary key).
> 
> If the structure of the table is 
> 
> iddata1   data2   data3
> 
> and the structure of the CSV file is 
> 
> iddata1   data2   data3
> 
> and I need to update data3 in all rows where the CSV id matches the table id.
> 
> Is this a job for the UPDATE command, or is there a better way to pull data
> from a CSV file in order to do a mass update?

You will want to COPY FROM on a new table that has the same structure:

BEGIN;
CREATE TEMPORARY TABLE original_table_name_temp( LIKE original_table_name 
INCLUDING ALL );
COPY original_table_name_temp FROM stdin;
-- If there are many thousands of rows
ANALYZE original_table_name_temp;
UPDATE original_table_name o
  SET data3 = t.data3
  FROM original_table_name_temp t
  WHERE o.id = t.id;
COMMIT;

http://www.postgresql.org/docs/current/static/sql-update.html

You may also want to investigate the ON COMMIT option for CREATE TABLE: 
http://www.postgresql.org/docs/current/static/sql-createtable.html

Good luck!
François
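The whole workflow can be sketched end to end. This is a minimal illustration using Python's stdlib sqlite3 module as a stand-in for PostgreSQL (executemany stands in for COPY, and a portable correlated subquery stands in for the UPDATE ... FROM above; the table and column names are illustrative):

```python
# Sketch of the temp-table merge workflow, with sqlite3 standing in for
# PostgreSQL.  In Postgres you would COPY into the temp table and use
# the UPDATE ... FROM form shown above; the correlated subquery below is
# the portable equivalent.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE original (id INTEGER PRIMARY KEY, data3 TEXT)")
cur.executemany("INSERT INTO original VALUES (?, ?)",
                [(1, "old1"), (2, "old2"), (3, "old3")])

# Stand-in for: CREATE TEMPORARY TABLE ... (LIKE original); COPY ... FROM stdin
cur.execute("CREATE TEMP TABLE original_temp (id INTEGER PRIMARY KEY, data3 TEXT)")
cur.executemany("INSERT INTO original_temp VALUES (?, ?)",
                [(2, "new2"), (3, "new3")])

# Merge: overwrite data3 only for ids present in the temp table.
cur.execute("""
    UPDATE original
       SET data3 = (SELECT t.data3 FROM original_temp t
                     WHERE t.id = original.id)
     WHERE id IN (SELECT id FROM original_temp)
""")
conn.commit()
print(cur.execute("SELECT id, data3 FROM original ORDER BY id").fetchall())
# [(1, 'old1'), (2, 'new2'), (3, 'new3')]
```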

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] update table from csv file

2012-12-27 Thread Craig Ringer
On 12/28/2012 12:31 AM, Kirk Wythers wrote:
> I have been using COPY FROM to do a mass import of records from CSV files
> into a new database. I have discovered, however, a small number of records (a
> few thousand) in one of the files that contain new data that needs to be
> added to the database, but on rows that have a primary key and have already
> been inserted (so I can't use COPY FROM, because it violates the primary key).
The standard solution is:

- COPY to a temporary table; then
- Use UPDATE ... FROM to merge the data from the temp table into the
main table, usually after locking the main table
> I have created a temporary table and used COPY FROM to load the update data
> into the temporary table. I seem to be stuck, however. I thought I should be
> able to use the UPDATE command to update all columns in all rows where
> table.id = tmp_table.id.
>
> Something like:
>
> UPDATE table FROM tmp_table WHERE table.id = tmp_table.id;
That's the usual solution, yes.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services





Re: [GENERAL] Cursor fetch Problem.

2012-12-27 Thread Harry
Below is the output of "ps -ef | grep postgres" on Linux:

501  12163  5473  0 Dec19 ?00:00:00 postgres: enterprisedb
sampledb 192.168.0.231[53991] ?EDB-SPL Procedure successfully completed
501  12167  5473  0 Dec19 ?00:00:00 postgres: enterprisedb
sampledb 192.168.0.231[53995] ?EDB-SPL Procedure successfully completed.

Also, if I try to kill it from the OS, the whole database gets shut down.








[GENERAL] update from a csv file?

2012-12-27 Thread Kirk Wythers
I have been using COPY FROM to do a mass import of records from CSV files into
a new database. I have discovered, however, a small number of records (a few
thousand) in one of the files that contain new data that needs to be added to
the database, but on rows that have a primary key and have already been
inserted (so I can't use COPY FROM, because it violates the primary key).

If the structure of the table is 

id  data1   data2   data3

and the structure of the CSV file is 

id  data1   data2   data3

and I need to update data3 in all rows where the CSV id matches the table id.

Is this a job for the UPDATE command, or is there a better way to pull data
from a CSV file in order to do a mass update?

Thanks



[GENERAL] pg_dirtyread doesnt work

2012-12-27 Thread Alejandro Carrillo
Hi,

After many tries I managed to compile this PostgreSQL C function for Windows
(with VS C++ 2008), but the function gets an error when it tries to read a
deleted row. The example:

CREATE FUNCTION pg_dirtyread(oid)
RETURNS setof record
AS E'$libdir/pg_dirtyread', 'pg_finfo_pg_dirtyread' LANGUAGE C STRICT ; 

Create table hola(
id bigserial,
dato1 varchar(199) not null,
fecha date
);


/*insert 3 rows and delete 1 row*/



select * from pg_dirtyread('hola'::regclass) t (id bigint,dato1 varchar(199), 
fecha date);

ERROR:  invalid memory alloc request size 1850015748

Can anybody help me?

Thanks


Re: [GENERAL] libpq thread safety

2012-12-27 Thread Mark Morgan Lloyd

Tom Lane wrote:

Mark Morgan Lloyd  writes:
Do any special precautions need to be taken when PQNotifies is being 
called, to make sure that nothing else is referencing the handle?


It's pretty much the same as any other operation on a PGconn: if there
could be more than one thread touching the connection object
concurrently, you'd be well advised to add some application-level
locking.

http://www.postgresql.org/docs/9.2/static/libpq-threading.html

The lack of any such locking inside libpq is partly historical, and
partly because in many practical situations you'll need application-side
locks anyway to protect application data structures associated with the
connection.


Thanks, Tom. I'm fairly happy with the ways I've used it so far, but I'm 
just trying to think ahead for the future.


In the case of Delphi/Lazarus, where you can have multiple queries on 
top of the same connection object, my experience so far is that using 
the connection object's handle is safe. But I think that to be 
absolutely confident of that I need to do some tracing, and find out 
under what circumstances calls are being issued directly against that 
handle rather than it just being a placeholder for authentication etc.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]




Re: [GENERAL] progress of long running operation

2012-12-27 Thread Scott Ribe
On Dec 27, 2012, at 12:46 PM, Tom Lane wrote:

> Or you could run contrib/pgstattuple's pgstattuple() function every so
> often --- it will report the uncommitted tuples as "dead", which is
> inaccurate, but you'd be able to see how fast the number is increasing.

That's exactly the kind of thing I was hoping for. I'm actually inserting into 
an empty table, so "dead" tuples would be dead accurate in my case ;-)

Or I could suck it up and do them in batches instead of one giant pass...

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice








Re: [GENERAL] progress of long running operation

2012-12-27 Thread Tom Lane
Scott Ribe  writes:
> Is there any way to get some insight into the progress of:
> insert into foo select distinct on (...) from bar where...

Watching the physical size of the foo table might be close enough.
Or you could run contrib/pgstattuple's pgstattuple() function every so
often --- it will report the uncommitted tuples as "dead", which is
inaccurate, but you'd be able to see how fast the number is increasing.

regards, tom lane




[GENERAL] progress of long running operation

2012-12-27 Thread Scott Ribe
Is there any way to get some insight into the progress of:

insert into foo select distinct on (...) from bar where...

It has to do with importing some legacy data, which has no proper primary key, 
and duplicates, and garbage that won't be accepted. And there's 30,000,000 
rows, and I'm running on a slow disk for testing--so I know this is going to be 
painfully slow. But after a few hours I'd like to know if it's going to finish 
overnight, or if it will take so long that I need to look at alternate 
approaches.

(I upped my shared buffers & work mem, so explain on the select statement shows 
a bit better than 50% reduction in predicted work for that part. And I will go 
ahead and drop all indexes on the target table.)

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice








Re: [GENERAL] libpq thread safety

2012-12-27 Thread Tom Lane
Mark Morgan Lloyd  writes:
> Do any special precautions need to be taken when PQNotifies is being 
> called, to make sure that nothing else is referencing the handle?

It's pretty much the same as any other operation on a PGconn: if there
could be more than one thread touching the connection object
concurrently, you'd be well advised to add some application-level
locking.

http://www.postgresql.org/docs/9.2/static/libpq-threading.html

The lack of any such locking inside libpq is partly historical, and
partly because in many practical situations you'll need application-side
locks anyway to protect application data structures associated with the
connection.

regards, tom lane




[GENERAL] libpq thread safety

2012-12-27 Thread Mark Morgan Lloyd
Do any special precautions need to be taken when PQNotifies is being 
called, to make sure that nothing else is referencing the handle?


The sort of nightmare scenario I'm thinking about is when a background 
thread is periodically pulling data from a table into a buffer, but a 
foreground (GUI) timer is asynchronously polling for notifications.


--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]




Re: [GENERAL] update table from a csv file

2012-12-27 Thread Adrian Klaver

On 12/27/2012 08:50 AM, Kirk Wythers wrote:


On Dec 27, 2012, at 10:39 AM, Adrian Klaver <adrian.kla...@gmail.com> wrote:


No. Some questions though.


Thanks for the reply Adrian.



What version of Postgres?


9.1


Is that the actual UPDATE statement, I see no SET?


I was reading the docs but obviously don't understand the syntax of the
update statement.


Have you tried it?
If so and it failed what was the error?


Yes and I got an error at or near from. Like this:

b4warmed3=# UPDATE sixty_min FROM tmp_60 WHERE sixty_min.rowid =
tmp_60.rowid;
ERROR:  syntax error at or near "FROM"
LINE 1: UPDATE sixty_min FROM tmp_60 WHERE sixty_min.rowid = tmp_60
  ^
b4warmed3=#


It is not enough to match the rows with "sixty_min.rowid = tmp_60.rowid";
you also need to specify which columns to update, using SET.
Per the examples at the bottom of:

http://www.postgresql.org/docs/9.2/interactive/sql-update.html

UPDATE employees SET sales_count = sales_count + 1 FROM accounts
  WHERE accounts.name = 'Acme Corporation'
  AND employees.id = accounts.sales_person;

For many columns it is easier to use the other form of SET. Example from 
docs:


UPDATE weather SET (temp_lo, temp_hi, prcp) = (temp_lo+1, temp_lo+15, 
DEFAULT)








--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] update table from a csv file

2012-12-27 Thread Kirk Wythers

On Dec 27, 2012, at 10:39 AM, Adrian Klaver  wrote:

> No. Some questions though.

Thanks for the reply Adrian. 

> 
> What version of Postgres?

9.1

> Is that the actual UPDATE statement, I see no SET?

I was reading the docs but obviously don't understand the syntax of the update 
statement. 

> Have you tried it?
> If so and it failed what was the error?

Yes and I got an error at or near from. Like this:

b4warmed3=# UPDATE sixty_min FROM tmp_60 WHERE sixty_min.rowid = tmp_60.rowid;
ERROR:  syntax error at or near "FROM"
LINE 1: UPDATE sixty_min FROM tmp_60 WHERE sixty_min.rowid = tmp_60
 ^
b4warmed3=# 



Re: [GENERAL] New Zealand Postgis DBA job vacancy

2012-12-27 Thread Bexley Hall

Hi Martin,

On 12/27/2012 8:31 AM, Martin Gainty wrote:


so...why doesn't Postgres port to embedded systems?


IME, it requires lots of resources (the vast majority of embedded
systems are resource starved -- resources == $$ and when you are
selling things in volume, every penny saved adds up quickly!).
Lots of MIPS, lots of RAM -- even the code footprint is "significant".

OTOH, (again, IME) designing with the "relational table" construct
makes coding a very different experience!  Already being biased
in favor of table-driven algorithms, I took this opportunity to
move all the "const" tables out of my executables and into the
DBMS (which takes a performance hit but keeps the code much more
mutable).  I've gone so far as to hide the filesystem from the
applications -- objects that would have typically resided in
ad hoc files are now stored in structured tables (eliminates
the need to write lots of special parsers to be able to impose
structure on what would otherwise be unstructured "bytes")



so do you reduce CPU or IO when you take a java variable final int
foo=1; and insert foo as a const column in a table?


[For the most part, I don't use Java as most of my projects have
real-time constraints and the lack of determinism in Java blows
that out of the water]

My (original) comment above is meant to address taking tables
of constants out of code and moving them into the DBMS -- *fetching*
them at run-time and using them in their normal role AS IF they
had been hard-coded into the executable.

E.g., given a Julian day, most folks would convert it to a (month,day)
tuple using a table like:

days[] = {
  31,   /* Jan */
  28,   /* Feb */
  31,   /* Mar */
  30,   /* Apr */
...
  31/* Dec */
}

Instead of putting this table in the code as a const datum, I would
store it in the DBMS and retrieve it as/when needed for the conversion.

[This is a silly example but one that is easy to understand]

Among other things, it allows me to change the contents of the
table without having to release a new executable

[Again, this example is silly in this regard as January will *always*
have 31 days so there will never be a need to change that!  OTOH, if
the table is used to determine how far to advance the timing of the
spark in an internal combustion engine with respect to engine RPM's,
then you *may* want to change/tweak that table at some time in the
future to obtain better performance or accommodate changes in the
manufacture of the engine -- without changing all the code that
*references* that table.  If that table is embedded in the actual
executable, this isn't very easy.]
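As a minimal sketch of the idea (Python's stdlib sqlite3 standing in for the DBMS; the table and function names are illustrative), the month-length table can live in the database and be fetched at run time:

```python
# Sketch of the "const table in the DBMS" idea: the month-length table
# lives in a database and is fetched at run time instead of being
# compiled into the executable, so it can be changed without a release.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE month_days (month INTEGER PRIMARY KEY, days INTEGER)")
cur.executemany("INSERT INTO month_days VALUES (?, ?)",
                list(enumerate([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
                               start=1)))

def day_of_year_to_month_day(doy):
    """Convert a 1-based day of a (non-leap) year to a (month, day) tuple,
    walking the table fetched from the DBMS."""
    for month, days in cur.execute("SELECT month, days FROM month_days "
                                   "ORDER BY month"):
        if doy <= days:
            return (month, doy)
        doy -= days
    raise ValueError("day of year out of range")

print(day_of_year_to_month_day(32))   # (2, 1): February 1st
```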

As far as my parsing comment is concerned...

The structure that a table can impart to the data that it represents
allows you to identify that data *once*, stuff it into the appropriate
fields and then never have to parse the raw data object again.

E.g., early email readers maintained the user's "mailbox" as a simple
ASCII text file.  New messages were appended to the end of the file.
When it came time to enumerate the messages or find a particular
message, the program had to parse the ASCII text sequentially knowing
the form that a message took so that it could identify the start
(and end) of each individual message.

If you were handed that ASCII file and asked "How many messages does
this contain", you would have to parse the file sequentially, knowing
the format of mail messages (in detail!) and count them.

If, instead, you have a table called "mail" with fields called
"From", "To", "Date", "Body", etc. then it's easy to determine
how many messages you have (count) or locate the most recent message
from "Bob", etc.  The application can concentrate on the service(s)
that it wants to provide instead of having to be encumbered with
the trivialities of parsing message formats.

[Parsers are rife with opportunity for coding errors -- especially
those coded ad hoc!]

--don




Re: [GENERAL] update table from a csv file

2012-12-27 Thread Adrian Klaver

On 12/27/2012 08:27 AM, Kirk Wythers wrote:

I have been using COPY FROM to do a mass import of records from CSV files into
a new database. I have discovered, however, a small number of records (a few
thousand) in one of the files that contain new data that needs to be added to
the database, but on rows that have a primary key and have already been
inserted (so I can't use COPY FROM, because it violates the primary key).

If the structure of the table is

id  data1   data2   data3

and the structure of the CSV file is

id  data1   data2   data3

and I need to update data3 in all rows where the CSV id matches the table id.

I have created a temporary table and used COPY FROM to load the update data
into the temporary table. I seem to be stuck, however. I thought I should be
able to use the UPDATE command to update all columns in all rows where
table.id = tmp_table.id.

Something like:

UPDATE table FROM tmp_table WHERE table.id = tmp_table.id;

Or am I completely off course?


No. Some questions though.

What version of Postgres?
Is that the actual UPDATE statement, I see no SET?
Have you tried it?
If so and it failed what was the error?






--
Adrian Klaver
adrian.kla...@gmail.com




[GENERAL] update table from csv file

2012-12-27 Thread Kirk Wythers
I have been using COPY FROM to do a mass import of records from CSV files into
a new database. I have discovered, however, a small number of records (a few
thousand) in one of the files that contain new data that needs to be added to
the database, but on rows that have a primary key and have already been
inserted (so I can't use COPY FROM, because it violates the primary key).

If the structure of the table is 

id  data1   data2   data3

and the structure of the CSV file is 

id  data1   data2   data3

and I need to update data3 in all rows where the CSV id matches the table id.

I have created a temporary table and used COPY FROM to load the update data
into the temporary table. I seem to be stuck, however. I thought I should be
able to use the UPDATE command to update all columns in all rows where
table.id = tmp_table.id.

Something like:

UPDATE table FROM tmp_table WHERE table.id = tmp_table.id;

Or am I completely off course?



[GENERAL] update table from a csv file

2012-12-27 Thread Kirk Wythers
I have been using COPY FROM to do a mass import of records from CSV files into
a new database. I have discovered, however, a small number of records (a few
thousand) in one of the files that contain new data that needs to be added to
the database, but on rows that have a primary key and have already been
inserted (so I can't use COPY FROM, because it violates the primary key).

If the structure of the table is 

id  data1   data2   data3

and the structure of the CSV file is 

id  data1   data2   data3

and I need to update data3 in all rows where the CSV id matches the table id.

I have created a temporary table and used COPY FROM to load the update data
into the temporary table. I seem to be stuck, however. I thought I should be
able to use the UPDATE command to update all columns in all rows where
table.id = tmp_table.id.

Something like:

UPDATE table FROM tmp_table WHERE table.id = tmp_table.id;

Or am I completely off course?



Re: [GENERAL] New Zealand Postgis DBA job vacancy

2012-12-27 Thread Martin Gainty

> From: bexley...@yahoo.com
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] New Zealand Postgis DBA job vacancy
> > 
> Thinking (entirely) *in* metric doesn't.  The problem is working
> with *both*, simultaneously, requires some mental agility.
> 
> Nearby, we have one of the few (only?) stretches of roadway that
> is marked in metric units (actually, I haven't driven it in a while
> and vaguely recall something about RE-marking it in "conventional"
> units).  To most folks, it is a disturbing experience as they aren't
> accustomed to thinking in these.  ("No, that's not 100MPH but
> 100kmph... big difference!")
MG> Posted speed limit on the majority of limited-access highways in South
MG> America is 100
MG> unless of course ...you're in a hurry
> 
> 
> > so...why doesn't Postgres port to embedded systems?
> 
> IME, it requires lots of resources (the vast majority of embedded
> systems are resource starved -- resources == $$ and when you are
> selling things in volume, every penny saved adds up quickly!).
> Lots of MIPS, lots of RAM -- even the code footprint is "significant".
> 
> OTOH, (again, IME) designing with the "relational table" construct
> makes coding a very different experience!  Already being biased
> in favor of table-driven algorithms, I took this opportunity to
> move all the "const" tables out of my executables and into the
> DBMS (which takes a performance hit but keeps the code much more
> mutable).  I've gone so far as to hide the filesystem from the
> applications -- objects that would have typically resided in
> ad hoc files are now stored in structured tables (eliminates
> the need to write lots of special parsers to be able to impose
> structure on what would otherwise be unstructured "bytes")
> MG>so do you reduce CPU or IO when you take a java variable final int foo=1; 
> and insert foo as a const column in a table?
MG> BTW Spring IoC has addressed this issue with lazy-init="true" (don't
initialise the variable until referenced at runtime)
> 
> --don
MG> Martin
> 
> 
  

Re: [GENERAL] Cursor fetch Problem.

2012-12-27 Thread Amit Kapila
On Thursday, December 27, 2012 11:51 AM Harry wrote:
> Hi Amit,
> Thanks for Reply.
> Kindly see my below output.
> 
> Also, tried to Kill it Firstly by using Cancel Backend and then
> Terminate
> Backend output showing "True" but still remaining as a process (i.e. in
> pg_stat_activity).

Can you check the server log and see if either of the statements below appears
in the log:

FATAL:  terminating connection due to administrator command
ERROR:  canceling statement due to user request

With Regards,
Amit Kapila.





Re: [GENERAL] Cursor fetch Problem.

2012-12-27 Thread Amit Kapila
On Thursday, December 27, 2012 11:51 AM Harry wrote:
> Hi Amit,
> Thanks for Reply.
> Kindly see my below output.
> 16650;"sampledb";11965;10;"enterprisedb";"";"192.168.0.231";"";53897;"*
> 2012-12-19
> 11:39:48.234799+05:30";"2012-12-19 11:39:53.288441+05:30";"2012-12-19
> 11:39:53.288441+05:30*";f;"DECLARE
> BEGIN
> EXEC
> 16650;"sampledb";12156;10;"enterprisedb";"";"192.168.0.231";"";53983;*"
> 2012-12-19
> 12:18:38.57709+05:30";"2012-12-19 12:18:43.922301+05:30";"2012-12-19
> 12:18:43.922301+05:30"*;f;"DECLARE
> BEGIN
> EXEC
> 16650;"sampledb";13243;10;"enterprisedb";"Postgres Studio -
> Browser";"192.168.0.180";"";3907;"2012-12-26
> 16:35:45.753172+05:30";"";"2012-12-26 16:35:46.577723+05:30";f;""

The above shows that the first two sessions have been running for the last few
days. I am interested to know the transaction state of the first 2 sessions.
In the current version that information is part of pg_stat_activity, but I
don't know how to get it in the version you are using.
If possible, get this information. If you are using a Linux system, try
"ps ax | grep postgres" and show the output.

 
> Also, tried to Kill it Firstly by using Cancel Backend and then
> Terminate
> Backend output showing "True" but still remaining as a process (i.e. in
> pg_stat_activity).

Are you aware whether there is actually such a long query running in the first
2 sessions?
If you are not interested in the first 2 sessions, you can even use the OS
kill command.

With Regards,
Amit Kapila.


