Dear John,
Thanks for sharing the information.
> this is a logic error in your application.
Ok, I will check.
>you can look those relation numbers up in the pg_catalog to see what they are.
> you can see what the processes are in pg_stat_activity.
Currently the machine is not available. Please suggest i
On 11/13/2016 11:01 PM, Yogesh Sharma wrote:
DETAIL: Process 12345 waits for AccessShareLock on relation 16446 of database
16385; blocked by process 23136.
Process 23136 waits for ShareLock on relation 16482 of database 16385; blocked
by process 12345.
you have two processes that are each waiting for a lock the other one holds
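As a concrete illustration of the lookups suggested above (using the OIDs and PIDs from the DETAIL message, and assuming a superuser or same-user session), something like this could be run:

```sql
-- Map the relation OIDs from the deadlock DETAIL to table names:
SELECT oid, relname
FROM pg_catalog.pg_class
WHERE oid IN (16446, 16482);

-- See what the two processes involved are doing:
SELECT pid, state, query
FROM pg_stat_activity
WHERE pid IN (12345, 23136);
```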
Dear All,
Thanks in advance.
In my Linux system, the error below is found, and after 10~12 hours the WAL
files in the pg_xlog directory keep increasing continuously.
PostgreSQL:ERROR: deadlock detected
DETAIL: Process 12345 waits for AccessShareLock on relation 16446 of database
16385; blocked by process 23136.
Hey guys,
I'm trying to understand the performance impact of "Index Recheck". I
googled for it but didn't find many details about it;
where can I learn more?
And how did you know the performance is being significantly hurt by
inadequate work_mem?
I'm running PG 9.6.1, built f
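One way to see the recheck cost (a sketch; the table and pattern are invented) is EXPLAIN (ANALYZE, BUFFERS): an undersized work_mem makes the bitmap go lossy, which shows up as "Heap Blocks: lossy=..." together with "Rows Removed by Index Recheck" on the Bitmap Heap Scan node:

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM mytable WHERE title LIKE '%x264%';
-- Look for:  Heap Blocks: exact=... lossy=...
--            Rows Removed by Index Recheck: ...

-- Raising work_mem for the session and re-running the EXPLAIN is a quick
-- way to confirm the diagnosis (the lossy count should drop):
SET work_mem = '256MB';
```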
On Mon, Nov 14, 2016 at 1:22 PM, Patrick B wrote:
> Hi guys,
>
> My current scenario is:
>
> master01 - Postgres 9.2 master DB
> slave01 - Postgres 9.2 streaming replication + wal_files slave server for
> read-only queries
> slave02 - Postgres 9.2 streaming replication + wal_files slave server @
Hi guys,
My current scenario is:
master01 - Postgres 9.2 master DB
slave01 - Postgres 9.2 streaming replication + wal_files slave server for
read-only queries
slave02 - Postgres 9.2 streaming replication + wal_files slave server @ AWS
master01 sends wal_files to both slaves via ssh.
*On the ma
OK, looks like this question is answered by the wiki:
https://wiki.postgresql.org/wiki/Parallel_Query
> Even when parallel query is enabled in general, the query planner will
> never generate a parallel plan if any of the following are true:
>
>- The query writes data. If a query contains a dat
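Per the restriction quoted from the wiki, it is the write that disables parallelism, which can be verified with EXPLAIN (a sketch; big_table and agg_copy are placeholders):

```sql
-- The bare SELECT may get a parallel plan (Gather + Parallel Seq Scan):
EXPLAIN SELECT count(*) FROM big_table;

-- Wrapping the same query in a write forces a serial plan on 9.6:
EXPLAIN CREATE TABLE agg_copy AS SELECT count(*) FROM big_table;
```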
Hi,
Yes, same result (non-parallel seq scan)
Cheers,
James Sewell,
PostgreSQL Team Lead / Solutions Architect
Suite 112, Jones Bay Wharf, 26-32 Pirrama Road, Pyrmont NSW 2009
*P* (+61) 2 8099 9000 *W* www.jirotech.com *F* (+61) 2 8099 9099
Have you tried: CREATE TABLE blah AS SELECT ... ?
Sent via the Samsung Galaxy S® 6, an AT&T 4G LTE smartphone
Original message From: James Sewell
Date: 11/13/16 18:04 (GMT-05:00) To:
pgsql-general Subject: [GENERAL] Sequential
parallel scan / agg
Hi all,
I have a customer
Hi all,
I have a customer who is making use of the new parallel features on a 64
core box - this is working really well for them. However one issue (which
I've since realised I've never seen an example of) is getting the data from
these queries into a table.
I can't seem to create a plan which do
On Sat, Nov 12, 2016 at 5:33 PM, otar shavadze wrote:
> Tried
>
> OPERATOR(pg_catalog.@>)
>
>
> as Tom mentioned, but still, don't get fast performance when value does
> not existed in any array.
>
Did you build the correct index?
>
> Also "played" with many different ways, gin, gist inde
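For `@>` on an array column to be indexable at all, the index has to be a GIN index on the array column itself, not an expression over it (a sketch; the table and column names are invented):

```sql
-- GIN over the array column supports the containment operator:
CREATE INDEX idx_tags_gin ON items USING gin(tags);

-- This query can then use the index:
SELECT * FROM items WHERE tags OPERATOR(pg_catalog.@>) ARRAY[42];
```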
thank you.
It listens on the local network, i.e. my two other MacBooks. I reported
the bug to DaVinci. Hopefully they have a fix soon, then I can put it back on md5.
thank you for your script.
will try it.
best
Robert
> On 13 Nov 2016, at 23:27, John R Pierce wrote:
>
> On 11/13/2016 1:01 PM,
On Sun, Nov 13, 2016 at 3:54 AM, Aaron Lewis
wrote:
> I have a simple table with Trigram index,
>
> create table mytable(hash char(40), title text);
> create index title_trgm_idx on mytable using gin(title gin_trgm_ops);
>
> When I run a query with 10m rows, it uses the Trigram index, but takes
>
Thank you, I understand. Nobody else can access the database.
Once the database backup button in DaVinci started to work, I could see that
DaVinci is only making a pg_dump of the database. The restore works fine
from one DaVinci installation to another; I tested this. I still can make a
au
On 11/13/2016 1:01 PM, aws backup wrote:
thank you so much.
With your help I could solve all my problems.
DaVinci has a bug somewhere with the database configuration.
I installed everything new and set all auth method to trust instead of md5
which was default from DaVinci.
trust is not very secure
2016-11-08 6:01 GMT+01:00 amul sul :
> On Tue, Nov 8, 2016 at 5:36 AM, Andreas Joseph Krogh
> wrote:
> >
> >
> > I don't see what you mean. It forces dump of Blobs if we didn't use -B and
> > if we include everything in the dump, which seems good to me. What did you
> > try that didn't work a
On 11/13/2016 01:01 PM, aws backup wrote:
Hi,
thank you so much.
With your help I could solve all my problems.
DaVinci has a bug somewhere with the database configuration.
I installed everything new and set all auth method to trust instead of md5
which was default from DaVinci.
Just remember
Hi,
thank you so much.
With your help I could solve all my problems.
DaVinci has a bug somewhere with the database configuration.
I installed everything new and set all auth method to trust instead of md5
which was default from DaVinci.
Now everything is working as expected. No errors anymore wit
On Sun, Nov 13, 2016 at 5:25 PM, Aaron Lewis
wrote:
> Thanks Oleg.
>
> I've increased work_mem to 128MB, now the query falls down to 1.7s,
> faster but still not good enough.
>
> Is there any other thing I can do about it?
>
your query 'x264' is short in terms of the number of trigrams, so trigr
Oleg Bartunov writes:
> On Sun, Nov 13, 2016 at 6:05 PM, Aaron Lewis
>> It takes 500ms with 10m rows, could it be faster?
> sure. A recheck with a function call is pretty expensive, so I'd not recommend
> creating a functional index; just create a separate column of type tsvector
> (materialize to_tsv
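The separate-column approach being described might look like this (a sketch; keeping the column current on inserts/updates would additionally need a trigger, not shown, since generated columns don't exist on 9.x):

```sql
ALTER TABLE mytable ADD COLUMN title_tsv tsvector;
UPDATE mytable SET title_tsv = to_tsvector('english', title);
CREATE INDEX title_tsv_idx ON mytable USING gin(title_tsv);

-- Queries then hit the plain column, so no function call is needed
-- during the recheck:
SELECT * FROM mytable WHERE title_tsv @@ to_tsquery('english', 'x264');
```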
On Sun, Nov 13, 2016 at 6:05 PM, Aaron Lewis
wrote:
> Sigh, didn't notice that. Thanks for the heads up.
>
> It takes 500ms with 10m rows, could it be faster?
>
sure. A recheck with a function call is pretty expensive, so I'd not recommend
creating a functional index; just create a separate column of
On 11/13/2016 9:13 AM, Adrian Klaver wrote:
For example: How can I restart the SQL server? Oh sorry we can't help
you with this … ?
https://www.postgresql.org/docs/9.5/static/app-pg-ctl.html
the best way of doing this depends on the way postgres was installed,
and varies by OS/distribution
On 11/13/2016 09:04 AM, aws backup wrote:
Hi Adrian,
thank you for the explanation.
I will look into your links.
I am doing this because I want to make backups from the
database. Ideally automatic backups every night.
The Blackmagic support says they can't help me with this. The Backup and
Resto
On 11/13/2016 08:51 AM, aws backup wrote:
Hi Adrian,
I assume that the postgres database password is the one which is shown
in the DaVinci connect database window.
user: postgres
password: DaVinci
But if I work in the Terminal and change the user "sudo su - postgres" I
have to use my admin pas
On 11/13/2016 8:51 AM, aws backup wrote:
I assume that the postgres database password is the one which is shown
in the DaVinci connect database window.
user: postgres
password: DaVinci
There are no database passwords; that's the password for the postgres
database role/user.
But if I work
Hi Adrian,
thank you for the explanation.
I will look into your links.
I am doing this because I want to make backups from the database. Ideally
automatic backups every night.
The Blackmagic support says they can't help me with this. The Backup and
Restore button in DaVinci does not work.
Every
Hi Adrian,
I assume that the postgres database password is the one which is shown in the
DaVinci connect database window.
user: postgres
password: DaVinci
But if I work in the Terminal and change the user "sudo su - postgres" I have
to use my admin password.
From the DaVinci manual:
>
On 11/13/2016 05:51 AM, aws backup wrote:
Hi,
now I have another problem. Sorry I am an absolute beginner.
When I restore the dumpall backup with
"psql -f infile postgres"
I get a lot of "already exists" errors and the database is not restored
to the point of the backup.
Yes, because it seem
On 11/13/2016 05:51 AM, aws backup wrote:
Hi Adrian,
thank you for the answer.
There is one password for the postgres database and one for the postgres user.
How are you determining this?
More to the point are you talking about the application(DaVinci Resolve)
or the database itself?
Both
Sigh, didn't notice that. Thanks for the heads up.
It takes 500ms with 10m rows, could it be faster?
I've increased work_mem to 256MB
test=# explain analyze select * from mytable where
to_tsvector('english', title) @@ 'x264'::tsquery limit 1000 offset 10;
On 13/11/2016 15:26, Aaron Lewis wrote:
> Hi Oleg,
>
> Can you elaborate on the title column? I don't get it.
>
>>> create table mytable(hash char(40), title varchar(500));
>>> create index name_fts on mytable using gin(to_tsvector('english',
>>> 'title'));
You created an index on the literal text 'title', not on the column title.
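The fix is to index the column itself rather than the quoted string:

```sql
-- Note: title without quotes -- the expression now references the column:
DROP INDEX IF EXISTS name_fts;
CREATE INDEX name_fts ON mytable USING gin(to_tsvector('english', title));
```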
Hi Oleg,
Can you elaborate on the title column? I don't get it.
On Sun, Nov 13, 2016 at 10:10 PM, Oleg Bartunov wrote:
>
>
> On Sun, Nov 13, 2016 at 2:50 PM, Aaron Lewis
> wrote:
>>
>> I have a simple table, and a gin index,
>>
>> create table mytable(hash char(40), title varchar(500));
>> crea
Thanks Oleg.
I've increased work_mem to 128MB, now the query falls down to 1.7s,
faster but still not good enough.
Is there any other thing I can do about it?
test=# explain analyze select * from mytable where title ilike 'x264';
QUER
On Sun, Nov 13, 2016 at 2:50 PM, Aaron Lewis
wrote:
> I have a simple table, and a gin index,
>
> create table mytable(hash char(40), title varchar(500));
> create index name_fts on mytable using gin(to_tsvector('english',
> 'title'));
>
^ (note the quotes: 'title' is a string literal here, not the column)
> create unique i
On Sun, Nov 13, 2016 at 2:54 PM, Aaron Lewis
wrote:
> I have a simple table with Trigram index,
>
> create table mytable(hash char(40), title text);
> create index title_trgm_idx on mytable using gin(title gin_trgm_ops);
>
> When I run a query with 10m rows, it uses the Trigram index, but takes
>
Hi,
now I have another problem. Sorry I am an absolute beginner.
When I restore the dumpall backup with
"psql -f infile postgres"
I get a lot of "already exists" errors and the database is not restored to the
point of the backup.
I mean after I made the backup I changed something and I expecte
Hi Adrian,
thank you for the answer.
There is one password for the postgres database and one for the postgres user.
Neither is working, somehow. Is there a way to look up the passwords? I saw in
the documentation that there is a .pgpass file. But I can't find it.
I changed the auth method to tr
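For reference, ~/.pgpass is not created automatically; it can be created by hand, one line per connection, in the format host:port:database:user:password. A hypothetical entry for the setup described here (assuming default host and port):

```
localhost:5432:*:postgres:DaVinci
```

The file must be readable only by its owner (mode 0600), otherwise libpq silently ignores it.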
I have a simple table with Trigram index,
create table mytable(hash char(40), title text);
create index title_trgm_idx on mytable using gin(title gin_trgm_ops);
When I run a query with 10m rows, it uses the Trigram index, but takes
3s to execute, very slow.
(I have 80m rows, but only inserted 10m
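Since the pattern 'x264' contains no wildcards, the lookup is effectively an exact match, and a plain expression index is far cheaper than scanning the trigram GIN index (a sketch; the index name is invented):

```sql
-- Case-insensitive exact match via an expression index:
CREATE INDEX title_lower_idx ON mytable (lower(title));

SELECT * FROM mytable WHERE lower(title) = lower('x264');
```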
I have a simple table, and a gin index,
create table mytable(hash char(40), title varchar(500));
create index name_fts on mytable using gin(to_tsvector('english', 'title'));
create unique index md5_uniq_idx on mytable(hash);
When I execute a query with tsquery, the GIN index was not in use:
test