[GENERAL] Call for translations

2013-08-24 Thread Peter Eisentraut
In anticipation of the release of PostgreSQL 9.3, it is once again time
to update the message translations.  We are now in a string freeze, which has
traditionally been associated with the first release candidate, so now is a
good time to do this work.

If you want to help, see http://babel.postgresql.org/ for 
instructions and other information.  If there are already active
translation teams, please communicate with them first.  The mailing list
pgtranslation-translat...@pgfoundry.org is available for general
discussion and coordination of translation activities.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] earthdistance

2013-08-24 Thread Bruno Wolff III

On Tue, Aug 20, 2013 at 20:38:51 +0200,
  Olivier Chaussavoine olivier.chaussavo...@gmail.com wrote:

I also looked at the cube extension, but the built-in type box - a couple of
points - does not require any extension and has a GiST index. It can be
used to represent a rectangle on the domain [-PI/2,+PI/2[*[-PI,PI[. If the
extension provided a function get_rect_from_cap() giving the smallest
rectangle of this domain containing a spherical cap, this rectangle could
be used, as you pointed out, to reduce the set of rows where the earth
distance needs to be computed to know whether a point A belongs to the cap.
The && operator (box overlaps box) could be used if the point A is converted
to box(A,A). Do you think this function get_rect_from_cap() could be useful?


Depending on how the sphere is represented, getting the minimum bounding 
cube is pretty simple and it might not be worth writing a function for 
this.





[GENERAL] batch insertion

2013-08-24 Thread Korisk
Hi!
I want to quickly insert a lot of data (in the form of triplets) into the db. The data
is formed dynamically, so COPY is not suitable.
I tried batch inserts like this:

insert into triplets values (1,1,1);
insert into triplets values (1,1,1), (3,2,5), (4,5,5);
...
insert into triplets values (1,1,1), (3,2,5), (4,5,5)  ;

The more triplets I use the quicker operation is.
With preparation it looks like this:

res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, "
    "$2::bigint, $3::float);", 3, NULL);
...
res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, "
    "$2::bigint, $3::float), ($4::bigint, $5::bigint, $6::float), ($7::bigint, "
    "$8::bigint, $9::float), ($10::bigint, $11::bigint, $12::float);", 12, NULL);
...

The question:
Is there any way to prepare a query with any number of triplets without
constructing such a long string?

Thank you.
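The growing VALUES list above can be generated programmatically instead of being written out by hand. A minimal C sketch; the helper name and struct are made up for illustration, and real code should use parameters or PQescape* functions rather than printing values straight into the SQL:

```c
#include <stdio.h>
#include <stdlib.h>

/* Build a multi-row INSERT for n triplets at runtime, so the
 * statement text never has to be typed out by hand.  Sketch only:
 * values are formatted directly into the SQL, which is safe here
 * only because they are numbers from our own program. */
typedef struct { long a, b; double c; } triplet;

static char *build_insert(const triplet *rows, size_t n)
{
    size_t cap = 64 + n * 64;          /* generous for this sketch */
    char  *sql = malloc(cap);
    size_t len = (size_t) snprintf(sql, cap,
                                   "insert into triplets values ");
    for (size_t i = 0; i < n; i++)
        len += (size_t) snprintf(sql + len, cap - len, "%s(%ld,%ld,%g)",
                                 i ? ", " : "",
                                 rows[i].a, rows[i].b, rows[i].c);
    snprintf(sql + len, cap - len, ";");
    return sql;                        /* caller frees */
}
```

The resulting string can be passed to PQexec(); the trade-off versus PQprepare() is that the statement is re-planned each time, which the COPY-based answers below this message avoid entirely.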




Re: [GENERAL] batch insertion

2013-08-24 Thread Allan Kamau
On Sun, Aug 25, 2013 at 3:15 AM, Korisk kor...@yandex.ru wrote:

 Hi!
 I want to quickly insert a lot of data (in the form of triplets) into the db. The data
 is formed dynamically, so COPY is not suitable.
 I tried batch insert like this:

 insert into triplets values (1,1,1);
 insert into triplets values (1,1,1), (3,2,5), (4,5,5);
 ...
 insert into triplets values (1,1,1), (3,2,5), (4,5,5)  ;

 The more triplets I use the quicker operation is.
 With preparation it looks like this:

 res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, "
     "$2::bigint, $3::float);", 3, NULL);
 ...
 res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, "
     "$2::bigint, $3::float), ($4::bigint, $5::bigint, $6::float), ($7::bigint, "
     "$8::bigint, $9::float), ($10::bigint, $11::bigint, $12::float);", 12, NULL);
 ...

 The question:
 Is there any way to prepare a query with any number of triplets without
 constructing such a long string?

 Thank you.





You may still use COPY, as follows:
Let every thread that is generating data create a file on disk (in some
predetermined directory) to which the thread writes data in a format (such
as CSV or TSV) that COPY can use.
Keep a counter that is updated for each record written to this file.
When the counter reaches (or surpasses) a chosen threshold, have the thread
issue a COPY command and send the contents of this file to the DB via that
command, then delete the contents of the file and write the next record
into it.
You may also write some code to flush the contents of this file to the DB
on demand, for example when the application is told to terminate.
All the above steps are to be performed within each thread of your
application in isolation.

You may also write some cleanup code that looks for these files when your
application starts and writes their contents to the DB (followed by
deletion of the files); this caters for situations where your application
did not shut down gracefully.



Allan.
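The buffering scheme described above can be sketched as a small spool with a flush threshold. The names, the THRESHOLD value, and the in-memory buffer (standing in for the on-disk file) are assumptions of this sketch; the comment marks where the real COPY calls would go:

```c
#include <stdio.h>

/* Per-thread spool: append rows, count them, and flush once the
 * counter reaches a threshold.  In the real application the flush
 * would issue COPY and truncate the spool file. */
#define THRESHOLD 3

typedef struct {
    char   buf[4096];
    size_t len;
    int    count;
    int    flushes;       /* how many times COPY would have run */
} spool;

static void spool_flush(spool *s)
{
    if (s->count == 0)
        return;
    /* Real code: PQexec(conn, "COPY triplets FROM STDIN");
     * PQputCopyData(conn, s->buf, (int) s->len);
     * PQputCopyEnd(conn, NULL);                              */
    s->flushes++;
    s->len = 0;
    s->count = 0;
}

static void spool_add(spool *s, long a, long b, double c)
{
    s->len += (size_t) snprintf(s->buf + s->len, sizeof s->buf - s->len,
                                "%ld\t%ld\t%g\n", a, b, c);
    if (++s->count >= THRESHOLD)
        spool_flush(s);
}
```

Calling spool_flush() directly covers the on-demand path Allan mentions for shutdown, flushing whatever partial batch remains.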


Re: [GENERAL] batch insertion

2013-08-24 Thread Steve Atkins

On Aug 24, 2013, at 5:15 PM, Korisk kor...@yandex.ru wrote:

 Hi!
 I want to quickly insert a lot of data (in the form of triplets) into the db. The data
 is formed dynamically, so COPY is not suitable.

COPY works just fine for dynamically generated data, and it's probably the
right thing to use if you're bulk loading data (it's about as fast as you can
get for a single-threaded load).

Take a look at the PQputCopyData() and PQputCopyEnd() functions.

Cheers,
  Steve
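In COPY's default text format, each row handed to PQputCopyData() is a line of tab-separated columns. A minimal sketch of formatting one triplet; the helper name is an assumption:

```c
#include <stdio.h>

/* Format one triplet as a line of COPY text format (tab-separated
 * columns, newline-terminated), i.e. the payload for
 * PQputCopyData(conn, buf, n).  Returns the number of bytes written. */
static int format_copy_row(char *buf, size_t buflen,
                           long a, long b, double c)
{
    return snprintf(buf, buflen, "%ld\t%ld\t%g\n", a, b, c);
}
```

The surrounding call sequence would be PQexec(conn, "COPY triplets FROM STDIN"), then PQputCopyData() per row (or per block of rows), then PQputCopyEnd(conn, NULL) to finish the load.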

 I tried batch insert like this:
 
 insert into triplets values (1,1,1);
 insert into triplets values (1,1,1), (3,2,5), (4,5,5);
 ...
 insert into triplets values (1,1,1), (3,2,5), (4,5,5)  ;
 
 The more triplets I use the quicker operation is.
 With preparation it looks like this:
 
 res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, "
     "$2::bigint, $3::float);", 3, NULL);
 ...
 res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, "
     "$2::bigint, $3::float), ($4::bigint, $5::bigint, $6::float), ($7::bigint, "
     "$8::bigint, $9::float), ($10::bigint, $11::bigint, $12::float);", 12, NULL);
 ...
 
 The question:
 Is there any way to prepare a query with any number of triplets without
 constructing such a long string?
 
 Thank you.
 
 





Re: [GENERAL] batch insertion

2013-08-24 Thread Adrian Klaver

On 08/24/2013 05:15 PM, Korisk wrote:

Hi!
I want to quickly insert a lot of data (in the form of triplets) into the db. The data is formed dynamically,
so COPY is not suitable.
I tried batch insert like this:

insert into triplets values (1,1,1);
insert into triplets values (1,1,1), (3,2,5), (4,5,5);
...
insert into triplets values (1,1,1), (3,2,5), (4,5,5)  ;

The more triplets I use the quicker operation is.
With preparation it looks like this:

res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, "
    "$2::bigint, $3::float);", 3, NULL);
...
res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, "
    "$2::bigint, $3::float), ($4::bigint, $5::bigint, $6::float), ($7::bigint, "
    "$8::bigint, $9::float), ($10::bigint, $11::bigint, $12::float);", 12, NULL);
...

The question:
Is there any way to prepare a query with any number of triplets without
constructing such a long string?


Others have already posted about using COPY, so I will go another route.
The increased speed you see is probably a result of more data being
included in each transaction. From your example it is not clear if you
are batching your INSERTs. If not, that is another way to go: prepare your
statement, then loop through your data in batches, where a batch is
between a BEGIN and a COMMIT. This is one of the reasons COPY is so
fast: the data is loaded inside a single transaction. Unfortunately I do
not use libpq, so I cannot provide an example.
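The transaction-batching idea can be sketched without a database by counting the calls that would be made. In this sketch exec_begin/exec_insert/exec_commit are stand-ins (assumptions, not libpq API) for PQexec(conn, "BEGIN"), PQexecPrepared(...), and PQexec(conn, "COMMIT"):

```c
#include <stddef.h>

/* Wrap batches of prepared-statement executions in explicit
 * transactions.  The exec_* helpers only count calls here, so the
 * batching logic can be checked without a server. */
typedef struct { int begins, commits, inserts; } stats;

static void exec_begin(stats *st)  { st->begins++;  }  /* PQexec(conn, "BEGIN")  */
static void exec_insert(stats *st) { st->inserts++; }  /* PQexecPrepared(...)    */
static void exec_commit(stats *st) { st->commits++; }  /* PQexec(conn, "COMMIT") */

static void insert_batched(stats *st, size_t nrows, size_t batch)
{
    for (size_t i = 0; i < nrows; i++) {
        if (i % batch == 0)                       /* start of a batch */
            exec_begin(st);
        exec_insert(st);
        if (i % batch == batch - 1 || i == nrows - 1)
            exec_commit(st);                      /* end of a batch */
    }
}
```

With the default of one implicit transaction per INSERT, every row pays a commit; grouping rows per COMMIT amortizes that cost, which is much of the speedup the original poster observed with multi-row VALUES lists.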




Thank you.





--
Adrian Klaver
adrian.kla...@gmail.com

