Hi all,
My WAL archiving script based on pg_receivexlog reported the following
error several days ago (just ignore everything before
'pg_receivexlog', it's a message my script generates).
Thu Aug 15 18:33:09 MSK 2013 ERROR archive_wal.sh: Problem occurred
during WAL archiving: pg_receivexlog:
On Tue, Aug 20, 2013 at 8:17 AM, Sergey Konoplev gray...@gmail.com wrote:
Hi all,
My WAL archiving script based on pg_receivexlog reported the following
error several days ago (just ignore everything before
'pg_receivexlog', it's a message my script generates).
Thu Aug 15 18:33:09 MSK 2013
Hello,
The issue can be shown with this example:
piro=> SET TIMEZONE = 'America/New_York';
SET
piro=> select '1970-01-01T00:00:00+03:00'::timestamp;
      timestamp
---------------------
 1970-01-01 00:00:00
(1 row)
piro=> select
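For what it's worth (not part of the original message): the plain timestamp cast above silently discards the +03:00 offset. Casting to timestamptz instead preserves the offset and renders the value in the session time zone:

```sql
SET TIMEZONE = 'America/New_York';
SELECT '1970-01-01T00:00:00+03:00'::timestamptz;
-- 1969-12-31 16:00:00-05  (00:00 at +03:00 is 21:00 UTC, i.e. 16:00 EST)
```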
Given a system with 32 cores, an SSD SAN with 48x drives, and 2x 8Gbps
paths from the server to the SAN, what would be a good starting point
to set effective_io_concurrency? I currently have it set to 32, but I
kind of feel like the right setting would be 2 since we have two
paths. We don't
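Not an answer from the thread, but one way to explore this empirically: effective_io_concurrency can be changed per session, so candidate values can be benchmarked against a bitmap-heap-scan-heavy query before settling on a postgresql.conf value. The table and query below are hypothetical:

```sql
SET effective_io_concurrency = 32;  -- candidate value; also try 2, 8, 48, ...
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM orders
WHERE order_date BETWEEN '2013-01-01' AND '2013-06-30';
```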
On 08/20/2013 04:00 AM, Daniele Varrazzo wrote:
Hello,
The issue can be shown with this example:
piro=> SET TIMEZONE = 'America/New_York';
SET
piro=> select '1970-01-01T00:00:00+03:00'::timestamp;
      timestamp
---------------------
 1970-01-01 00:00:00
(1
Hello
That is strange. I tried to simulate it without success on 1M rows
where every id is duplicated twice:
processing string_agg .. approx. 30MB
processing array_agg .. approx. 32MB
postgres=# create table foo(a int, b varchar);
CREATE TABLE
postgres=# insert into foo select i, md5(i::text) from
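The INSERT above is cut off in the archive; a reconstruction under Pavel's stated assumptions (1M rows, every id duplicated twice; the generate_series source is my guess) might be:

```sql
-- Reconstruction, not Pavel's exact statement: 1M rows, each id appearing twice.
insert into foo select i % 500000, md5(i::text) from generate_series(1, 1000000) i;

-- The two aggregates being compared:
select a, string_agg(b, ',') from foo group by a;  -- approx. 30MB in Pavel's run
select a, array_agg(b)       from foo group by a;  -- approx. 32MB in Pavel's run
```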
Can you send a EXPLAIN result in both use cases?
Pavel
2013/8/19 Robert Sosinski rsosin...@ticketevolution.com
At the moment all guids are distinct; however, before I zapped the
duplicates there were 280 of them.
Currently there are over 2 million distinct guids.
-Robert
On Mon,
I wrote an extension and marked one of the tables it creates as a config
table using pg_extension_config_dump(). This caused the data to be dumped
and restored correctly, but the sequence attached to the PK column was not
correctly set to its old value.
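For reference, a sketch of the pattern being described (object names are hypothetical): both the config table and its serial sequence get marked in the extension's install script, so pg_dump emits their data and the sequence counter.

```sql
-- In the extension's SQL script; names are hypothetical.
CREATE TABLE my_config (id serial PRIMARY KEY, val text);

-- Mark the table's data for inclusion in pg_dump:
SELECT pg_catalog.pg_extension_config_dump('my_config', '');

-- Mark the PK's sequence too, so its current value is dumped and restored:
SELECT pg_catalog.pg_extension_config_dump('my_config_id_seq', '');
```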
In searching for a solution I found an old
I also looked at the cube extension, but the built-in type box - a couple of
points - does not require any extension and has a GiST index. It can be
used to represent a rectangle on the domain [-PI/2,+PI/2[ * [-PI,PI[. If the
extension were providing a function get_rect_from_cap() giving the smallest
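A minimal sketch of the built-in box + GiST combination mentioned above (table and values are invented for illustration):

```sql
CREATE TABLE caps (id int, rect box);
CREATE INDEX caps_rect_gist ON caps USING gist (rect);

-- '&&' is the box overlap operator, and it is GiST-indexable:
SELECT id FROM caps WHERE rect && box(point(-0.5, -1.0), point(0.5, 1.0));
```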
Moshe Jacobson mo...@neadwerx.com writes:
In searching for a solution I found an old message where Tom suggested
marking the sequence as a config table in the same way. This seems to work
OK when I use pg_dump, but for some reason, pg_dumpall generates the
following error:
It's pretty hard to
I want to create a user defined data type but have flexible input just like,
for example, the boolean data type where TRUE can be entered as any of (TRUE,
true, T, t, YES, yes, Y, y, 1...) and it will be interpreted as the same thing.
So suppose I have days of the week:
CREATE TYPE
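One pure-SQL way to get boolean-style input flexibility (a sketch; the enum and function names are invented, and full input-function flexibility would need a C-level type) is an enum plus a normalizing constructor:

```sql
CREATE TYPE weekday AS ENUM ('mon','tue','wed','thu','fri','sat','sun');

-- Accept flexible spellings: 'Monday', 'MON', 'mon', ... by normalizing
-- to the first three lowercase letters before casting to the enum.
CREATE FUNCTION to_weekday(text) RETURNS weekday AS $$
  SELECT lower(left(btrim($1), 3))::weekday
$$ LANGUAGE sql IMMUTABLE STRICT;

SELECT to_weekday('Monday');  -- mon
```

Anything that doesn't normalize to a valid label raises the usual invalid-enum-value error.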
As I understand it, you need to search strings by similarity (using levenshtein
or any other metric distance). In that case, you can use metric indexes like
FHQT or FQA (the latter is better if you are using a relational database, like
Postgres). But they are not implemented yet in most DBMSs, so you need
On Fri, Aug 2, 2013 at 10:25 AM, Kevin Grittner kgri...@ymail.com wrote:
Janek Sendrowski jane...@web.de wrote:
I also tried pg_trgm module, which works with tri-grams, but it's
also very slow with 100.000+ rows.
Hmm. I found the pg_trgm module very fast for name searches with
millions of
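For what it's worth, the speed Kevin describes usually hinges on having a trigram index; a sketch (table and column names invented):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX people_name_trgm ON people USING gin (name gin_trgm_ops);

-- '%' is pg_trgm's similarity operator; the GIN index makes it fast.
SELECT name, similarity(name, 'smith') AS sim
FROM people
WHERE name % 'smith'
ORDER BY sim DESC;
```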
On Tue, Aug 20, 2013 at 4:03 PM, Tom Lane t...@sss.pgh.pa.us wrote:
It's pretty hard to believe that that would work in pg_dump but not
pg_dumpall. You sure the error references a database that you fixed the
sequence definition in? (You'd probably have to drop and recreate the
extension to
Moshe Jacobson mo...@neadwerx.com writes:
I can pg_dump our prod database and pg_restore it onto our dev server with
zero errors, but once I try to pg_dump the dev copy again, that's when I
receive the errors above.
I didn't drop and recreate the extension to fix it on prod -- I just created
a
On Tue, Aug 20, 2013 at 2:10 AM, Magnus Hagander mag...@hagander.net wrote:
It looks like something outside postgres or pg_receivexlog did
terminate the connection. pg_receivexlog noticed it was closed,
whereas postgres waited for the timeout (but probably would've noticed
if it had actually
There is one bug mentioned where commit hung for days:
http://www.postgresql.org/message-id/1af3044fcab26f4db1ae551f8a33634b3d2...@mail.digital-rapids.com
The interesting thing would be to see the server logs, not the application
logs. Specifically, an issue that could look just like this was fixed
Hi,
I am planning Streaming Replication to a new remote server, can you please
suggest how to set up data encryption by SSL or tunnelling in Postgresql?
regards
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
On 21/08/2013 01:07, ascot.m...@gmail.com wrote:
Hi,
I am planning Streaming Replication to a new remote server, can you please
suggest how to set up data encryption by SSL or tunnelling in Postgresql?
regards
I've implemented streaming replication using OpenVPN as an encrypted
tunnel
On 8/20/2013 9:07 PM, ascot.m...@gmail.com wrote:
I am planning Streaming Replication to a new remote server, can you please
suggest how to set up data encryption by SSL or tunnelling in Postgresql?
see
http://www.postgresql.org/docs/current/static/ssl-tcp.html
--
john r pierce
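Beyond the docs page above, a minimal sketch of the pieces involved (hosts, users, addresses, and paths are placeholders; syntax matches the 9.x era of this thread):

```
# primary postgresql.conf
ssl = on
ssl_cert_file = 'server.crt'   # placeholder paths
ssl_key_file = 'server.key'

# primary pg_hba.conf: require SSL for the replication connection
hostssl  replication  replicator  203.0.113.10/32  md5

# standby recovery.conf: ask for SSL from the client side
primary_conninfo = 'host=primary.example.com user=replicator sslmode=require'
```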
S H wrote:
There is one bug mentioned where commit hung for days:
http://www.postgresql.org/message-id/1af3044fcab26f4db1ae551f8a33634b3d2...@mail.digital-rapids.com
The interesting thing would be to see the server logs, not the
application logs. Specifically, an issue that could look just
Hi,
Why do you connect one table with the other? There aren't any
relations between them, so I don't understand your point; I am somewhat
confused. You can refer to the following site:
http://www.postgresql.org/docs/9.0/static/queries-table-expressions.html