On Wed, Oct 28, 2015 at 10:42 PM, Tim Landscheidt wrote:
> Hi,
>
> I regularly run into the problem that I want to query a
> PostgreSQL database in a script/program and depending on a
> boolean result do one thing or the other. A typical example
> would be a Puppet Exec that creates a user only
Hi,
I regularly run into the problem that I want to query a
PostgreSQL database in a script/program and depending on a
boolean result do one thing or the other. A typical example
would be a Puppet Exec that creates a user only if it does
not exist yet.
But unfortunately psql always returns with
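One common workaround is to test the query's output rather than psql's exit status, since psql exits 0 whenever the query merely runs. A sketch as a Puppet Exec, with the role name and paths being assumptions:

```puppet
# Sketch only: 'appuser' is a hypothetical role name. The unless check
# inspects the query's output ("1" if the role exists), not psql's
# exit status, which is 0 regardless of the result.
exec { 'create-appuser':
  command => '/usr/bin/psql -c "CREATE ROLE appuser LOGIN"',
  user    => 'postgres',
  unless  => '/usr/bin/psql -tAc "SELECT 1 FROM pg_roles WHERE rolname = \'appuser\'" | /bin/grep -q 1',
}
```

The -t flag suppresses headers and -A unaligned output, so the pipe sees just the bare "1" (or nothing).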
On 2015-10-28 14:09, anj patnaik wrote:
>
> Also, I want to know if anyone has found any handy cron scripts for automated
> backups to run on a daily/weekly basis? i found some on google, but interested
> to know if there are better ones.
It does a lot more but:
https://github.com/wal-e/wal-e
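For a simple nightly dump (as opposed to wal-e's continuous WAL archiving), a minimal crontab entry is often enough. A sketch, with the database name, schedule, and backup path all assumptions:

```cron
# In the postgres user's crontab: dump 'mydb' at 02:00 daily in
# compressed custom format (restorable with pg_restore).
# Note: % must be escaped as \% inside a crontab line.
0 2 * * * pg_dump -Fc mydb > /var/backups/mydb_$(date +\%F).dump
```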
On 10/28/2015 01:09 PM, anj patnaik wrote:
I was trying to achieve smallest file possible so tried the xz. Right
now the db size returned from SELECT
pg_size_pretty(pg_database_size('postgres')) is 1.4 GB and the size of
the dump with xz is 2.2 GB.
Probably not a good idea to put your data in th
On 10/28/2015 02:20 PM, Michael Convey wrote:
Forgive my ignorance, but I'm new to PostgreSQL. Regarding installation,
I'm trying to understand some of the differences between Red Hat,
Ubuntu, and OpenSUSE. My goal is to set up a self-contained lab instance
for learning on each distribution. So,
Mike writes:
> Thanks, with a bit of moving stuff about I think that's sorted it - in
> case anyone ever needs it:
>SELECT
> query,
> trim(regexp_replace(
> regexp_replace(
> regexp_replace(query,'\/\*.+\*\/','','g'),
> '--[^\r\n]*', ' ', 'g')
> ,
Thanks, with a bit of moving stuff about I think that's sorted it - in
case anyone ever needs it:
SELECT
query,
trim(regexp_replace(
regexp_replace(
regexp_replace(query,'\/\*.+\*\/','','g'),
'--[^\r\n]*', ' ', 'g')
, '\s+', ' ', 'g')) as q
FROM publ
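Reassembled in full, the expression looks like this (a sketch; the FROM target in the original is truncated at "publ", so pg_stat_statements, which the thread is about, stands in here):

```sql
SELECT
  query,
  trim(regexp_replace(
         regexp_replace(
           regexp_replace(query, '\/\*.+\*\/', '', 'g'),
           '--[^\r\n]*', ' ', 'g'),
         '\s+', ' ', 'g')) AS q
FROM pg_stat_statements;
```

One caveat: the greedy `.+` spans from the first `/*` to the last `*/` in a query, so a statement containing several C-style comments loses the text between them; the non-greedy `.+?` is safer there.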
>
>From: pgsql-general-ow...@postgresql.org
>[pgsql-general-ow...@postgresql.org]" on behalf of "Mike
>[m...@wolman.co.uk]
>Sent: Wednesday, 28 October 2015 20:04
>To: pgsql-general@postgresql.org
>Subject: [GENERAL] regexp_replace to remove sql comme
On Wednesday, October 28, 2015 4:33 PM, Pavel Suderevsky wrote:
> I am working on getting better insight into the PostgreSQL serialization
> mechanism. The main question is: how to determine serialization
> behaviour at the system level and predict the exception:
> ERROR: could not serialize access due to read/wri
Hi,
I am trying to clean up the query field returned by the
pg_stat_statements extension and remove all comments.
Some of the queries in the query field contain comments like '-- some
comment' and also '/* c style comments */'
I have managed to strip off the '--' comments and also white spa
I was trying to achieve smallest file possible so tried the xz. Right now
the db size returned from SELECT
pg_size_pretty(pg_database_size('postgres')) is 1.4 GB and the size of the
dump with xz is 2.2 GB.
Is there a limit to the size of the database that pg_dump will run on? Will
it work when db i
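On the size question: pg_dump has no limit tied to database size; available disk and time are the constraints. As for compression, the custom format is worth comparing against xz over plain SQL, since a text dump can be larger than the on-disk database (indexes are not dumped as data, but bytea and numeric values serialize verbosely). A sketch, with the database name taken from the thread:

```shell
pg_dump -Fc postgres > postgres.dump        # custom format, compressed internally
pg_dump postgres | xz -9 > postgres.sql.xz  # plain SQL piped through xz
```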
Hi,
I am working on getting better insight into the PostgreSQL serialization
mechanism. The main question is: how to determine serialization behaviour
at the system level and predict the exception:
> ERROR: could not serialize access due to read/write dependencies among
> transactions
> DETAIL: Reason code: Cancele
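A minimal way to reproduce that error is two overlapping serializable transactions whose reads and writes cross, sketched below (the table t(k int, v int) is hypothetical):

```sql
-- Session 1                               -- Session 2
BEGIN ISOLATION LEVEL SERIALIZABLE;        BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT sum(v) FROM t WHERE k = 1;          SELECT sum(v) FROM t WHERE k = 2;
INSERT INTO t VALUES (2, 10);              INSERT INTO t VALUES (1, 10);
COMMIT;                                    COMMIT;  -- one of the two fails
-- The losing transaction reports SQLSTATE 40001 ("could not serialize
-- access due to read/write dependencies among transactions") and
-- should simply be retried by the client.
```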
Forgive my ignorance, but I'm new to PostgreSQL. Regarding installation,
I'm trying to understand some of the differences between Red Hat, Ubuntu,
and OpenSUSE. My goal is to set up a self-contained lab instance for
learning on each distribution. So, I assume I'll need both the client and
server on
Perfect explanation and indeed useful suggestions.
I'll play a bit with a development server.
Thanks,
Best regards,
Edson Carlos Ericksson Richter
On 28/10/2015 12:15, Yves Dorfsman wrote:
On 2015-10-27 20:29, Edson Richter wrote:
Hi!
Using PostgreSQL 9.3.10 x86_64 Oracle EL7 compiled
Thanks, Tom.
I'll play a bit with this in a development server.
Regards,
Best regards,
Edson Carlos Ericksson Richter
On 28/10/2015 12:06, Tom Lane wrote:
Adrian Klaver writes:
No, if the above does not indicate a problem, then the issue is
probably, as Francisco said, in the timezone
On Wed, Oct 28, 2015 at 8:43 AM, Tom Dearman wrote:
> We have a performance problem when our postgres is under high load. The CPU
> usage is very low, we have 48 cores for our postgres and the idle time
> averages at 90%. The problem is we get spikes in our transaction times
> which don’t appear
On Wednesday, October 28, 2015 1:52 PM, Kevin Grittner wrote:
> But if we already have a write
> lock on the tuple (through the xmax column), then an update or
> delete of the row by another transaction would cause a write
> conflict and one of the transactions will surely be rolled back.
> An SI
On Wednesday, October 28, 2015 1:28 PM, Pavel Suderevsky wrote:
> I would like to ask for clarification about the logic of locks acquired
> by update statements within serializable transactions.
> [table has primary key on id column]
> testdb=# begin transaction isolation level serializable;
> BEGIN
> tes
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Tom Dearman
Sent: Wednesday, October 28, 2015 11:44 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Waiting on ExclusiveLock on extension 9.3, 9.4 and 9.5
We have a performance problem when
Hi,
I would like to ask for clarification about the logic of locks acquired by
update statements within serializable transactions.
Tried postgres 9.3.6 and postgres 9.4.4.
*Story 1.*
testdb=# \dS+ t
> Table "public.t"
> Column | Type | Modifiers | Storage | Stats target | Descr
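The scenario in question can be sketched with two sessions on such a table (assuming t has an id primary key and a val column):

```sql
-- Session 1:
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE t SET val = val + 1 WHERE id = 1;   -- takes the row-level write lock (xmax)

-- Session 2 (blocks on that row lock):
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE t SET val = val + 2 WHERE id = 1;

-- When session 1 COMMITs, session 2 aborts with:
-- ERROR: could not serialize access due to concurrent update
```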
On 10/28/15 12:11 PM, Tom Dearman wrote:
It is also interesting that a later attempt to get the exclusive lock by
process 41911 says it is waiting for id 41907 even though according to
the log other processes have already acquired the lock.
Those would be different acquisitions of the same lock
I'm trying to install pgTAP on a FreeBSD machine and running into an odd
problem:
sed -e 's,MODULE_PATHNAME,$libdir/pgtap,g' -e 's,__OS__,freebsd,g' -e
's,__VERSION__,0.95,g' sql/pgtap-core.sql > sql/pgtap-core.tmp
/bin/sh /usr/local/lib/postgresql/pgxs/src/makefiles/../../config/missing perl
The processes involved are always client (Java) processes which finish
(normally) quickly. The actual process can be any of the client processes;
the point is that a relation extension is going on and it seems to acquire
excessive locks. You typically see a whole series of process id c
On 10/28/2015 09:08 AM, Leonardo wrote:
Hello Adrian,
I stumbled onto this a few weeks ago, but this approach relies on the
disk instead of a stream. Since I can't guarantee disk reliability
(clustering; the disk may not be the same), I cannot use this approach.
Thanks for the answer, I'll keep sear
Hello Adrian,
I stumbled onto this a few weeks ago, but this approach relies on the disk
instead of a stream. Since I can't guarantee disk reliability (clustering;
the disk may not be the same), I cannot use this approach.
Thanks for the answer, I'll keep searching.
2015-10-27 10:37 GMT-03:00 Adrian
On Wed, Oct 28, 2015 at 12:12 AM, 657985...@qq.com <657985...@qq.com> wrote:
> Thank you for your reply.
> tshow=> explain (analyze, buffers) select count(t.*) from
> tshow.res_room_weight t,tshow.res_room_info r
> tshow-> where t.subcatlg_id=46
> tshow-> and t.roomid = r.actorid
> tshow->
On 2015-10-27 20:29, Edson Richter wrote:
> Hi!
>
> Using PostgreSQL 9.3.10 x86_64 Oracle EL7 compiled by gcc (GCC) 4.8.3
> 20140911, installed using yum repository.
>
> In postgresql.conf, I do have:
>
> timezone="America/Sao_Paulo"
>
> Since DST is in place in Brazil, it is enough to "systemc
Adrian Klaver writes:
> No, if the above does not indicate a problem, then the issue is
> probably, as Francisco said, in the timezone definitions. The thing is,
> you are on 9.3.10 which has the latest time zone data file:
Since OEL is a naked ripoff of Red Hat, I would assume that they config
On 10/27/2015 10:43 PM, Edson Richter wrote:
Adrian Klaver escreveu
> On 10/27/2015 07:29 PM, Edson Richter wrote:
> > Hi!
> >
> > Using PostgreSQL 9.3.10 x86_64 Oracle EL7 compiled by gcc (GCC) 4.8.3
> > 20140911, installed using yum repository.
> >
> > In postgresql.conf, I do
On 10/28/15 6:57 AM, mephysto wrote:
Hi there,
I need to partition a table by the values of two columns: id_1 and id_2.
They are both bigint.
Does anyone have experience with this? Does anyone know how I can do this
partitioning?
It will work just like regular table partitioning. You just need to
account for
On Wed, Oct 28, 2015 at 7:10 AM, Mark Morgan Lloyd <
markmll.pgsql-gene...@telemetry.co.uk> wrote:
> Tom Lane wrote:
>
>> Michael Convey writes:
>>
>>> Due to policies for Red Hat family distributions, the PostgreSQL
>>> installation will not be enabled for automatic start or have the database
>>
On Wed, Oct 28, 2015 at 6:43 AM, Edson Richter wrote:
> Configured since first setup. Brazil went DST few days ago.
> Today, executing
> Select now()
> Resulted in
>
> 2015-10-28 02:45:37-03:00
>
> I do expect
>
> 2015-10-28 03:45:37-02:00
>
> I suspect that Postgres is ignoring the DST change.
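One way to narrow this down is to ask the server what its timezone database believes about the zone; while Brazilian DST is in effect, is_dst should be true and the offset -02:00. A sketch:

```sql
-- What does the server's tz data say about the zone right now?
SELECT name, abbrev, utc_offset, is_dst
FROM pg_timezone_names
WHERE name = 'America/Sao_Paulo';

-- And what zone is the session actually using?
SELECT now(), current_setting('TimeZone');
```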
Hi there,
I need to partition a table by the values of two columns: id_1 and id_2.
They are both bigint.
Does anyone have experience with this? Does anyone know how I can do this
partitioning?
Thanks in advance.
Meph
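In the PostgreSQL versions of this era, this is done with inheritance, CHECK constraints mentioning both columns, and a routing trigger. A sketch, with table names, ranges, and the two-partition split all hypothetical:

```sql
-- Hypothetical parent table partitioned on (id_1, id_2).
CREATE TABLE measurements (
    id_1    bigint NOT NULL,
    id_2    bigint NOT NULL,
    payload text
);

-- Children carry CHECK constraints on BOTH columns so constraint
-- exclusion can prune partitions on queries filtering either key.
CREATE TABLE measurements_p1 (
    CHECK (id_1 BETWEEN 1 AND 1000 AND id_2 BETWEEN 1 AND 1000)
) INHERITS (measurements);

CREATE TABLE measurements_p2 (
    CHECK (id_1 BETWEEN 1001 AND 2000 AND id_2 BETWEEN 1 AND 1000)
) INHERITS (measurements);

-- Routing trigger: send each insert to the matching child.
CREATE OR REPLACE FUNCTION measurements_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.id_1 <= 1000 THEN
        INSERT INTO measurements_p1 VALUES (NEW.*);
    ELSE
        INSERT INTO measurements_p2 VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- suppress the insert into the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_route
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert();
```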
Tom Lane wrote:
Michael Convey writes:
Due to policies for Red Hat family distributions, the PostgreSQL
installation will not be enabled for automatic start or have the database
initialized automatically.
To which policies are they referring? Licensing, security, or other?
Packaging pol
I can change the SQL; it needs a small business-logic change in my
application. But I don't know why, under high load, the SQL takes
200-300 ms when I execute it, while the log records it taking 2000 ms.
log message:
duration: 2042.493 ms execute : select o_count from
tshow.p_hpart_liveingro
Thanks. I can change the SQL; it needs a small business-logic change in my
application. But I don't know why, under high load, the SQL takes
200-300 ms when I execute it, while the log records it taking 2000 ms.
log message:
duration: 2042.493 ms execute : select o_count from
tshow.p_hpart_l
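When a statement is only slow under load, the gap between hand-timed runs and the server log often comes down to waiting (locks, I/O) rather than a different plan; auto_explain can capture the plan and timings of the actual slow executions. A postgresql.conf sketch (the thresholds are assumptions):

```
shared_preload_libraries = 'auto_explain'   # requires a server restart
auto_explain.log_min_duration = '1s'        # log plans for statements over 1s
auto_explain.log_analyze = on               # include actual row counts/timings
```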
On 10/28/2015 12:12 AM, 657985...@qq.com wrote:
Thank you for your reply.
tshow=> explain (analyze, buffers) select count(t.*) from tshow.res_room_weight t,tshow.res_room_info r
tshow-> where t.subcatlg_id=46
tshow-> and t.roomid = r.actorid
tshow-> and r.levels>=0;
http://explain.de
Thank you for your reply.
tshow=> explain (analyze, buffers) select count(t.*) from
tshow.res_room_weight t,tshow.res_room_info r
tshow-> where t.subcatlg_id=46
tshow-> and t.roomid = r.actorid
tshow-> and r.levels>=0;
QUERY