On Tue, Nov 26, 2013 at 12:30:08PM -0800, John R Pierce wrote:
> On 11/26/2013 12:16 PM, Robin wrote:
>
> 1. A self-signed certificate can be issued by anybody, there is no way of
> authenticating the issuer.
> 2. Distributing self-signed certificates becomes a pain - if signed b
I'm not really looking for information on how to speed this query
up. I'm just trying to interpret the output enough to tell me which step
is slow:
Seq Scan on mags (cost=0.00..187700750.56 rows=47476 width=4) (actual
time=3004851.889..3004851.889 rows=0 loops=1)
Filter: ((sign
On Tue, Nov 26, 2013 at 3:20 PM, Raymond O'Donnell wrote:
> On 26/11/2013 20:30, Merlin Moncure wrote:
>> There are not many ways to hand off information outside of the
>> database while a transaction is running. One way is to write a simple
>> trigger in plpgsql that 'raise'es a notice every 'n'
On 26/11/2013 20:30, Merlin Moncure wrote:
> There are not many ways to hand off information outside of the
> database while a transaction is running. One way is to write a simple
> trigger in plpgsql that 'raise'es a notice every 'n' times the trigger
> condition fires. That's essentially the only cl
On Tue, Nov 26, 2013 at 2:38 PM, John R Pierce wrote:
> On 11/26/2013 12:30 PM, Merlin Moncure wrote:
>>
>> There are not many ways to hand off information outside of the
>> database while a transaction is running. One way is to write a simple
>> trigger in plpgsql that 'raise'es a notice every 'n
On 11/26/2013 12:30 PM, Merlin Moncure wrote:
There are not many ways to hand off information outside of the
database while a transaction is running. One way is to write a simple
trigger in plpgsql that 'raise'es a notice every 'n' times the trigger
condition fires. That's essentially the only clean
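The row-counting trigger suggested above could be sketched as follows. This is a hypothetical illustration, not code from the thread: the table, trigger, and sequence names are made up, and a sequence is used as the counter because plain plpgsql variables do not persist across trigger invocations.

```sql
-- Hypothetical sketch: emit a NOTICE every 10,000 updated rows.
-- A sequence serves as the counter because it persists across trigger calls.
CREATE SEQUENCE progress_seq;

CREATE OR REPLACE FUNCTION note_progress() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
    n bigint := nextval('progress_seq');
BEGIN
    IF n % 10000 = 0 THEN
        RAISE NOTICE 'processed % rows so far', n;
    END IF;
    RETURN NEW;
END $$;

CREATE TRIGGER ipv4_progress
    BEFORE UPDATE ON ipv4_table
    FOR EACH ROW EXECUTE PROCEDURE note_progress();
```

The notices appear on the client that issued the updates, so they are visible while the transaction is still running.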
On 11/26/2013 12:16 PM, Robin wrote:
1. A self-signed certificate can be issued by anybody, there is no
way of authenticating the issuer.
2. Distributing self-signed certificates becomes a pain - if signed
by a CA, it's easy to lodge your public key where everybody can
find it, and
On Tue, Nov 26, 2013 at 9:28 AM, Joey Quinn wrote:
> I have a fairly large table (4.3 billion rows) that I am running an update
> script on (a bit over 127 thousand individual update queries). I am using
> the gui. It has been running for about 24 hours now. Is there any good way
> to gauge progress
> So if this problem replicated to our standby servers does that indicate
> that the potential problematic fsync occurred during a pg_xlog write?
Pretty much. You have a couple of issues here, and no easy way to approach them.
Primarily, you got data corruption during a sync operation. This means ei
yeah, unlikely that it is already in the path (I certainly didn't add it
yet).
Thanks for the command (new version).
On Tue, Nov 26, 2013 at 3:13 PM, John R Pierce wrote:
> On 11/26/2013 11:45 AM, Joey Quinn wrote:
>
>> Would that command be from within the psql SQL Shell that came as part of
Nope, that appears to be blocked by the updates...
I tried "select * from ipv4_table where country='gb' limit 1;"
and it just sat there...
On Tue, Nov 26, 2013 at 3:00 PM, Vick Khera wrote:
>
> On Tue, Nov 26, 2013 at 2:48 PM, Joey Quinn wrote:
>
>> The ranges are indeed overlapping, though
There is a downside to self-signed certificates.
1. A self-signed certificate can be issued by anybody, there is no way
of authenticating the issuer.
2. Distributing self-signed certificates becomes a pain - if signed by
a CA, it's easy to lodge your public key where everybody can find it,
On 11/26/2013 11:45 AM, Joey Quinn wrote:
Would that command be from within the psql SQL Shell that came as part
of the install? (I'm living in Windows land).
if you're already in psql, logged onto your database, it would be \i
filename.sql
psql -f filename.sql dbname ... would be at the
On Tue, Nov 26, 2013 at 9:50 AM, wrote:
> Hit send on that one too soon. I see 6.5k access share locks out of 7.5k
> transactions.
>
>
> On Tue, Nov 26, 2013 at 12:41 PM, wrote:
>
>> I have a 9.2 server that occasionally becomes CPU bound. Disk wait is
>> nominal and there's no memory pressure.
On Tue, Nov 26, 2013 at 2:48 PM, Joey Quinn wrote:
> The ranges are indeed overlapping, though the update statements were
> generated alphabetically rather than in IP order... If the command line
> will let me query the table directly without being blocked by the ongoing
> updates, then I could g
On Tue, Nov 26, 2013 at 02:43:42PM -0500, David Rysdam wrote:
> I'm not really looking for information on how to speed this query
> up. I'm just trying to interpret the output enough to tell me which step
> is slow:
>
>Seq Scan on mags (cost=0.00..187700750.56 rows=47476 width=4) (actual
> time=3004851.889..3004851.889 rows=0 loops=1)
The ranges are indeed overlapping, though the update statements were
generated alphabetically rather than in IP order... If the command line
will let me query the table directly without being blocked by the ongoing
updates, then I could get a rough order of magnitude of progress by doing a
null cou
Sounds like I will have to get comfortable with the command line version of
things... sigh... hate that.
Would that command be from within the psql SQL Shell that came as part of
the install? (I'm living in Windows land).
(and thank-you for the command)
On Tue, Nov 26, 2013 at 1:24 PM, John R
I'm not really looking for information on how to speed this query
up. I'm just trying to interpret the output enough to tell me which step
is slow:
Seq Scan on mags (cost=0.00..187700750.56 rows=47476 width=4) (actual
time=3004851.889..3004851.889 rows=0 loops=1)
Filter: ((sign
On Tue, Nov 26, 2013 at 02:18:58PM -0500, Vick Khera wrote:
> Using self-signed certs you can give them longevity of 10+ years, so never
> have to worry about them again :)
Unless of course you turn out to have a weak algorithm and, say, No
Such Agency decides to take up residence on your network.
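For reference, a long-lived self-signed certificate of the sort discussed above can be generated with openssl. This is a generic sketch, not from the thread: the hostname, key size, and lifetime are illustrative choices.

```shell
# Generate a 10-year self-signed certificate and private key (no passphrase).
# The CN should match the hostname clients connect to; this one is made up.
openssl req -new -x509 -nodes -days 3650 \
    -newkey rsa:2048 \
    -subj "/CN=db.example.com" \
    -keyout server.key -out server.crt

# PostgreSQL refuses a server key readable by other users.
chmod 600 server.key
```

The longer the lifetime, the more the caveat above applies: a weak key or algorithm stays in service for the full ten years.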
On Tue, Nov 26, 2013 at 1:31 PM, Bruce Momjian wrote:
> Well, by using a CA you are giving the CA rights to the key, while you
> fully control a self-signed key. Since you probably don't expect
> unknown individuals to be connecting to your database, a self-signed
> key is recommended.
>
You
> On Tue, Nov 26, 2013 at 12:41 PM, wrote:
>
> > I have a 9.2 server that occasionally becomes CPU bound. Disk wait is
> > nominal and there's no memory pressure. The workload is almost all reads
> > and the cache hit rate is high.
Maybe you'd benefit from something like the patch proposed here
On Tue, Nov 26, 2013 at 10:33:47AM -0800, John R Pierce wrote:
> On 11/26/2013 10:24 AM, Jesus Rafael Sanchez Medrano wrote:
>
> Looking to implement SSL. Every tutorial/sample I have found uses self
> signed certificates. Would there be any advantage of using a certificate
> from a CA
On Tue, Nov 26, 2013 at 12:24 PM, Joey Quinn wrote:
> When I ran that command (select * from pg_stat_activity), it returned the
> first six lines of the scripts. I'm fairly sure it has gotten a bit beyond
> that (been running over 24 hours now, and the size has increased about 300
> GB). Am I mi
On 11/26/2013 10:24 AM, Jesus Rafael Sanchez Medrano wrote:
Looking to implement SSL. Every tutorial/sample I have found uses self
signed certificates. Would there be any advantage of using a
certificate from a CA such as digicert or rapidssl?
depends entirely on your use case. Generally,
On Tue, Nov 26, 2013 at 02:24:01PM -0400, Jesus Rafael Sanchez Medrano wrote:
> Looking to implement SSL. Every tutorial/sample I have found uses self signed
> certificates. Would there be any advantage of using a certificate from a CA
> such as digicert or rapidssl?
Well, by using a CA you are giving the CA rights to the key
On 11/26/2013 9:24 AM, Joey Quinn wrote:
When I ran that command (select * from pg_stat_activity), it returned
the first six lines of the scripts. I'm fairly sure it has gotten a
bit beyond that (been running over 24 hours now, and the size has
increased about 300 GB). Am I missing something f
Looking to implement SSL. Every tutorial/sample I have found uses self
signed certificates. Would there be any advantage of using a certificate
from a CA such as digicert or rapidssl?
Regards,
==
Jesus Rafael Sanchez Medrano
"Life is a dream, of which all must wake up"
The restore of a post-crash production backup worked as hoped and the 2nd
replication slave is back into its happy hot standby state.
So if this problem replicated to our standby servers does that indicate
that the potential problematic fsync occurred during a pg_xlog write?
Would breaking replic
Hit send on that one too soon. I see 6.5k access share locks out of 7.5k
transactions.
On Tue, Nov 26, 2013 at 12:41 PM, wrote:
> I have a 9.2 server that occasionally becomes CPU bound. Disk wait is
> nominal and there's no memory pressure. The workload is almost all reads
> and the cache hit
I have a 9.2 server that occasionally becomes CPU bound. Disk wait is
nominal and there's no memory pressure. The workload is almost all reads
and the cache hit rate is high.
For some one minute periods, my monitoring shows around 1.5k to 2k access
share locks out of a total of 7.5 to 8k transacti
The ipv4 column is of type inet. It is the primary key (btree access) and
access times for queries on individual ip addresses have been around 10-15
ms.
On Tue, Nov 26, 2013 at 12:13 PM, Vick Khera wrote:
>
> On Tue, Nov 26, 2013 at 12:11 PM, Joey Quinn wrote:
>
>> update ipv4_table set count
When I ran that command (select * from pg_stat_activity), it returned the
first six lines of the scripts. I'm fairly sure it has gotten a bit beyond
that (been running over 24 hours now, and the size has increased about 300
GB). Am I missing something for it to tell me what the last line processed
On Tue, Nov 26, 2013 at 12:11 PM, Joey Quinn wrote:
> update ipv4_table set country='xx' where ipv4 between 'xxx.xxx.xxx.xxx'
> and 'xxx.xxx.xxx.xxx';
>
> There are 127k lines like that (each with a different range and the
> appropriate country code). Each is terminated with a semi-colon. Does th
The queries themselves are written like so:
update ipv4_table set country='xx' where ipv4 between 'xxx.xxx.xxx.xxx' and
'xxx.xxx.xxx.xxx';
There are 127k lines like that (each with a different range and the
appropriate country code). Each is terminated with a semi-colon. Does that
make them individual transactions?
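Whether each semicolon-terminated statement commits on its own depends on the client: psql in its default autocommit mode runs each statement as its own transaction, while a GUI may wrap a whole script in one. Explicit batching makes the boundaries unambiguous; a sketch with placeholder ranges and country codes (not the real script):

```sql
-- Placeholder ranges; each block commits independently, so completed
-- work is visible to other sessions as soon as COMMIT returns.
BEGIN;
UPDATE ipv4_table SET country='fr'
    WHERE ipv4 BETWEEN '2.0.0.0' AND '2.15.255.255';
COMMIT;

BEGIN;
UPDATE ipv4_table SET country='de'
    WHERE ipv4 BETWEEN '2.16.0.0' AND '2.19.255.255';
COMMIT;
```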
Connect to the DB and run "select * from pg_stat_activity" to see what
specific query your other connection is running. Then find that in your
file to see how far it has progressed.
I hope you profiled your queries to make sure they run fast before you
started. :)
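A minimal sketch of that check, assuming PostgreSQL 9.2 or later, where pg_stat_activity exposes pid, state, and query (older releases call these procpid and current_query):

```sql
-- Show active statements and how long each has been running.
SELECT pid,
       now() - query_start AS runtime,
       state,
       query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY runtime DESC;
```

Matching the reported query text against the script then shows roughly how far it has progressed.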
On Tue, Nov 26, 2013 at 10:28
Now it's too late,
but maybe you could avoid using a single transaction (and instead use
127k transactions).
Then at the end of every transaction you could print something in the gui
(print for pgscript, raise for plpgsql) or execute a command to write to a
file (copy, for instance).
It would also b
I have a fairly large table (4.3 billion rows) that I am running an update
script on (a bit over 127 thousand individual update queries). I am using
the gui. It has been running for about 24 hours now. Is there any good way
to gauge progress (as in, how many of the individual update queries have
finished)?
On Tue, Nov 26, 2013 at 02:48:34PM +, Albe Laurenz wrote:
> I beg your pardon, but Windows-1252 has nothing to do with Unicode
Sorry, you're quite right, I'm having a brain fade (I meant ISO
8859-1, of course).
The point I wanted to make, however, is that the collation often
causes trouble wi
Andrew Sullivan wrote:
> Just guessing, but I bet the collation is what hurts, [...]
> (The background for my guess: on your Linux box UTF-8 is likely the
> normal local encoding, but on Windows that isn't true, and 1252 is
> _almost_ but not quite Unicode. This bites people generally in
> inter
Chris Curvey wrote:
> My vendor took a dump of our "something else" database (which runs on
> Windows), did their conversion
> to Postgres, and then sent me back a postgres dump (custom format) of the
> database for me to load onto
> my servers for testing.
>
>
> I was interested to find that w
On Tue, Nov 26, 2013 at 09:25:17AM -0500, Chris Curvey wrote:
>
> CREATE DATABASE "TestDatabase" WITH TEMPLATE = template0 ENCODING = 'UTF8'
> LC_COLLATE = 'English_United States.1252' LC_CTYPE = 'English_United
> States.1252';
Just guessing, but I bet the collation is what hurts, just because
t
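For comparison, recreating the database with a locale-independent collation would look like the following. This is a sketch: the database name is taken from the quoted command, and the choice of the C collation is an assumption for illustration.

```sql
-- UTF8 encoding with the C collation avoids OS-specific locale behaviour;
-- template0 is required when overriding encoding/collation settings.
CREATE DATABASE "TestDatabase"
    WITH TEMPLATE = template0
    ENCODING = 'UTF8'
    LC_COLLATE = 'C'
    LC_CTYPE = 'C';
```

The C collation sorts by byte value, which differs from linguistic ordering but behaves identically on Windows and Linux.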
I'm working with a vendor who is in the process of converting their system
from "something else" to Postgres. Yay!
My vendor took a dump of our "something else" database (which runs on
Windows), did their conversion to Postgres, and then sent me back a
postgres dump (custom format) of the databas
This is how I solved it:
SELECT
xmlagg(
xmlconcat(
xmlelement(name test_element,
xmlforest(
ff.d AS a
)--xmlforest
)
)--xmlconcat
)--xmlagg
FROM (
SELECT
--xmlagg(
xmlconcat(
xmlelement(name test_element,
xmlattributes(
0 AS m
),
xmlforest(
dh.id AS i
,dh.some_value AS sv
)--xmlforest
)--test_e
> To: pgsql-general@postgresql.org
> Date: Tue, 26 Nov 2013 08:15:45 +0100
>
> Nelson Green, 25.11.2013 23:01:
> > Hello,
> > When inserting a record into the jobs table that references projects by
> > name, do I have to query the projects table twice,
> > once to get the funding source number,
Peter Kroon wrote:
> Is anyone able to reproduce?
> When I run the query below all 5 rows are returned instead of 2.
> Or is this the default behaviour?
> SELECT
> xmlagg(
[...]
> )--xmlagg
> FROM __pg_test_table AS dh
> WHERE dh.__rel=5 LIMIT 2 --OFFSET 10;
According to the documentation, that
Rémi Cura wrote:
>>> somebody knows of a way to autodocument plpgsql functions, in a doxygen
>>> style
>>> (adding tags in comments for instance, or creating doc templates to fill).
>>>
>>> It would really help to write the doc and maintain it.
> Typically in your comments you include special tags
Is anyone able to reproduce this?
When I run the query below all 5 rows are returned instead of 2.
Or is this the default behaviour?
"PostgreSQL 9.2.4 on i686-pc-linux-gnu, compiled by gcc (GCC) 4.1.2
20080704 (Red Hat 4.1.2-52), 32-bit"
DROP TABLE IF EXISTS __pg_test_table CASCADE;
CREATE TABLE __pg
Hey, thanks for the answer.
I'm thinking of way more than that.
Typically in your comments you include special tags, like @input; then
doxygen will parse them and generate HTML documentation.
Cheers,
Rémi-C
2013/11/26 Albe Laurenz
> Rémi Cura wrote:
> > somebody knows of a way to autodocument pl
I suppose the PostgreSQL Public Relations people might be
interested in this misuse of the PostgreSQL name:
http://www.computerweekly.com/blogs/open-source-insider/2013/11/why-elephants-never-forget-big-data.html
Karsten
--
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD
Rémi Cura wrote:
> somebody knows of a way to autodocument plpgsql functions, in a doxygen style
> (adding tags in comments for instance, or creating doc templates to fill).
>
> It would really help to write the doc and maintain it.
I am not sure what you need, but I see two ways to
document a fu
On 26/11/13 07:15, Arun P.L wrote:
Hi all,
We had a wraparound failure in the db and most of the tables and data
were missing. So we did a full vacuum on the db, and after that the
tables reappeared, but now the problem is that all the tables appear
duplicated when listed with \dt. And also aft
Hey all,
Does somebody know of a way to autodocument plpgsql functions, in a
doxygen style
(adding tags in comments, for instance, or creating doc templates to fill)?
It would really help to write the doc and maintain it.
Thanks,
Rémi-C