Re: [GENERAL] Beta testers for database development tool wanted

2016-05-11 Thread Martijn Tonies (Upscene Productions)

Hello Steve,


I’ll get right to it --

We’re developing a database development tool called Database Workbench;
it currently supports MySQL, InterBase, Firebird, Oracle, SQL Server,
NexusDB and SQL Anywhere (see
http://www.upscene.com/database_workbench/ )


Windows only, judging from the screenshots?


Native Windows, but runs fine in Wine (Platinum status, says the AppDB of
WineHQ)

https://appdb.winehq.org/objectManager.php?sClass=version&iId=31080


With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com





Re: [GENERAL] Beta testers for database development tool wanted

2016-05-11 Thread Steve Atkins

> On May 11, 2016, at 11:24 PM, Martijn Tonies (Upscene Productions) 
>  wrote:
> 
> Hello everyone,
>  
> I’ll get right to it --
>  
> We’re developing a database development tool called Database Workbench; it 
> currently supports MySQL, InterBase, Firebird, Oracle, SQL Server, NexusDB 
> and SQL Anywhere (see http://www.upscene.com/database_workbench/ )

Windows only, judging from the screenshots?

Cheers,
  Steve

>  
> We’re adding PostgreSQL support and the first beta is ready for testing.
>  
> We would like to have people who:
> - would use this product on a daily basis, like they're using any other 
> PostgreSQL tool (e.g., pgAdmin) now
> - work with larger databases, both data volume and meta data object count
> - are able to report bugs in a (reasonably) detailed manner
> - are able to discuss new features or enhancements
> - are able to regularly download updates and use them
> - don’t mind being put on a private e-mail list to report issues
> 
> Limitations:
> - stored function overloading currently not supported
> - version 9.1 and up supported
>  
>  
> If any of you are interested in testing this tool, with a free license for 
> the PostgreSQL module and a second module of your choice, drop me an e-mail 
> at m.tonies @ upscene.com
>  
>  
> With regards,
> 
> Martijn Tonies
> Upscene Productions
> http://www.upscene.com





[GENERAL] Beta testers for database development tool wanted

2016-05-11 Thread Martijn Tonies (Upscene Productions)
Hello everyone,

I’ll get right to it --

We’re developing a database development tool called Database Workbench; it 
currently supports MySQL, InterBase, Firebird, Oracle, SQL Server, NexusDB and 
SQL Anywhere (see http://www.upscene.com/database_workbench/ )

We’re adding PostgreSQL support and the first beta is ready for testing. 

We would like to have people who:
- would use this product on a daily basis, like they're using any other 
PostgreSQL tool (e.g., pgAdmin) now
- work with larger databases, both data volume and meta data object count
- are able to report bugs in a (reasonably) detailed manner
- are able to discuss new features or enhancements
- are able to regularly download updates and use them
- don’t mind being put on a private e-mail list to report issues

Limitations:
- stored function overloading currently not supported
- version 9.1 and up supported


If any of you are interested in testing this tool, with a free license for 
the PostgreSQL module and a second module of your choice, drop me an e-mail at 
m.tonies @ upscene.com


With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com


Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-11 Thread Jeff Janes
On Wed, May 11, 2016 at 5:30 PM, Lucas Possamai  wrote:
> Hi there!
>
> I've got a simple but slow query:
>
>>  SELECT DISTINCT title
>> FROM ja_jobs WHERE title ILIKE '%RYAN WER%'
>> and clientid = 31239  AND time_job > 1457826264
>> order BY title
>> limit 10


>
> CREATE INDEX ix_ja_jobs_trgm_gin ON public.ja_jobs USING gin (title
> gin_trgm_ops);
>
> Explain analyze after the index: (Yes, I ran the analyze)
>
>> Limit  (cost=389.91..389.91 rows=1 width=20) (actual
>> time=3720.511..3720.511 rows=0 loops=1)
>>   ->  Unique  (cost=389.91..389.91 rows=1 width=20) (actual
>> time=3720.507..3720.507 rows=0 loops=1)
>> ->  Sort  (cost=389.91..389.91 rows=1 width=20) (actual
>> time=3720.505..3720.505 rows=0 loops=1)
>>   Sort Key: "title"
>>   Sort Method: quicksort  Memory: 25kB
>>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=385.88..389.90
>> rows=1 width=20) (actual time=3720.497..3720.497 rows=0 loops=1)
>> Recheck Cond: (("clientid" = 31239) AND ("time_job" >
>> 1457826264) AND (("title")::"text" ~~ '%RYAN WER%'::"text"))
>> Rows Removed by Index Recheck: 4
>> ->  BitmapAnd  (cost=385.88..385.88 rows=1 width=0)
>> (actual time=3720.469..3720.469 rows=0 loops=1)
>>   ->  Bitmap Index Scan on "ix_jobs_client_times"
>> (cost=0.00..50.00 rows=1644 width=0) (actual time=0.142..0.142 rows=795
>> loops=1)
>> Index Cond: (("clientid" = 31239) AND
>> ("time_job" > 1457826264))
>>   ->  Bitmap Index Scan on "ix_ja_jobs_trgm_gin"
>> (cost=0.00..335.63 rows=484 width=0) (actual time=3720.213..3720.213 rows=32
>> loops=1)
>> Index Cond: (("title")::"text" ~~ '%RYAN
>> WER%'::"text")
>> Total runtime: 3720.653 ms


How big is the table?  The gin index?  shared_buffers?  RAM?  What
kind of IO system do you have, and how many other things were going on
with it?

It would be interesting to see the output of explain (analyze,
buffers) with track_io_timing turned on.
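
For instance, something like this (a sketch; the predicate values are the
ones from the thread, and setting track_io_timing per session may require
superuser):

SET track_io_timing = on;

EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT title
  FROM ja_jobs
 WHERE title ILIKE '%RYAN WER%'
   AND clientid = 31239 AND time_job > 1457826264
 ORDER BY title
 LIMIT 10;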

There have been improvements in this area since 9.2; you should
consider upgrading to at least 9.4.

Cheers,

Jeff




Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-11 Thread Jan de Visser
On Wed, May 11, 2016 at 10:03 PM, Lucas Possamai 
wrote:

>
>>>
>> Try redoing the query with a CTE as below:
>>
>> WITH ja_jobs as
>>   (SELECT DISTINCT title
>>  FROM ja_jobs
>> WHERE clientid = 31239  AND time_job > 1457826264
>>   )
>> SELECT title
>>   FROM ja_jobs
>>  WHERE title ILIKE 'RYAN WER%'
>>  ORDER BY title
>>  LIMIT 10;
>>
>
> hmm.. still slow =(
>
>
> and it's not hitting the index: (I had to change the clientid because the
> previous one was in cache)
>
> Limit  (cost=93790.08..93790.09 rows=1 width=218) (actual
>> time=284.293..284.308 rows=5 loops=1)
>>   Buffers: shared hit=42284
>>   CTE ja_jobs
>> ->  HashAggregate  (cost=93774.31..93779.16 rows=485 width=20)
>> (actual time=207.235..228.141 rows=16320 loops=1)
>>   Buffers: shared hit=42284
>>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=882.98..93697.86
>> rows=30578 width=20) (actual time=21.942..133.380 rows=48472 loops=1)
>> Recheck Cond: (("clientid" = 14635) AND ("time_job" >
>> 1436731799))
>> Buffers: shared hit=42284
>> ->  Bitmap Index Scan on "ix_jobs_client_times"
>>  (cost=0.00..875.34 rows=30578 width=0) (actual time=12.389..12.389
>> rows=48472 loops=1)
>>   Index Cond: (("clientid" = 14635) AND ("time_job" >
>> 1436731799))
>>   Buffers: shared hit=243
>>   ->  Sort  (cost=10.92..10.93 rows=1 width=218) (actual
>> time=284.289..284.293 rows=5 loops=1)
>> Sort Key: "ja_jobs"."title"
>> Sort Method: quicksort  Memory: 25kB
>> Buffers: shared hit=42284
>> ->  CTE Scan on "ja_jobs"  (cost=0.00..10.91 rows=1 width=218)
>> (actual time=236.248..284.263 rows=5 loops=1)
>>   Filter: (("title")::"text" ~~* '%To Electrical%'::"text")
>>   Rows Removed by Filter: 16315
>>   Buffers: shared hit=42284
>> Total runtime: 287.633 ms
>
>
>

I think a GIN index can't be used for sorting.


Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-11 Thread Lucas Possamai
>
>
>>
> Try redoing the query with a CTE as below:
>
> WITH ja_jobs as
>   (SELECT DISTINCT title
>  FROM ja_jobs
> WHERE clientid = 31239  AND time_job > 1457826264
>   )
> SELECT title
>   FROM ja_jobs
>  WHERE title ILIKE 'RYAN WER%'
>  ORDER BY title
>  LIMIT 10;
>

hmm.. still slow =(


and it's not hitting the index: (I had to change the clientid because the
previous one was in cache)

Limit  (cost=93790.08..93790.09 rows=1 width=218) (actual
> time=284.293..284.308 rows=5 loops=1)
>   Buffers: shared hit=42284
>   CTE ja_jobs
> ->  HashAggregate  (cost=93774.31..93779.16 rows=485 width=20) (actual
> time=207.235..228.141 rows=16320 loops=1)
>   Buffers: shared hit=42284
>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=882.98..93697.86
> rows=30578 width=20) (actual time=21.942..133.380 rows=48472 loops=1)
> Recheck Cond: (("clientid" = 14635) AND ("time_job" >
> 1436731799))
> Buffers: shared hit=42284
> ->  Bitmap Index Scan on "ix_jobs_client_times"
>  (cost=0.00..875.34 rows=30578 width=0) (actual time=12.389..12.389
> rows=48472 loops=1)
>   Index Cond: (("clientid" = 14635) AND ("time_job" >
> 1436731799))
>   Buffers: shared hit=243
>   ->  Sort  (cost=10.92..10.93 rows=1 width=218) (actual
> time=284.289..284.293 rows=5 loops=1)
> Sort Key: "ja_jobs"."title"
> Sort Method: quicksort  Memory: 25kB
> Buffers: shared hit=42284
> ->  CTE Scan on "ja_jobs"  (cost=0.00..10.91 rows=1 width=218)
> (actual time=236.248..284.263 rows=5 loops=1)
>   Filter: (("title")::"text" ~~* '%To Electrical%'::"text")
>   Rows Removed by Filter: 16315
>   Buffers: shared hit=42284
> Total runtime: 287.633 ms


Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-11 Thread Melvin Davidson
On Wed, May 11, 2016 at 9:36 PM, Lucas Possamai 
wrote:

>
>>>
>> The main problem is WHERE title ILIKE '%RYAN WER%'.
>> When you put a % on the left of the text, there is no way to optimize
>> that, so yes, it will be slow.
>>
>> If you can eliminate the leading percent and just have trailing, it will
>> be much faster.
>>
>>
>
> Hmm.. yep.. I suppose I can do that.
>
> But, taking the left % off, the query is still slow:
>
> Limit  (cost=418.57..418.58 rows=1 width=20) (actual
>> time=4439.367..4439.381 rows=1 loops=1)
>>   Buffers: shared hit=6847
>>   ->  Unique  (cost=418.57..418.58 rows=1 width=20) (actual
>> time=4439.363..4439.374 rows=1 loops=1)
>> Buffers: shared hit=6847
>> ->  Sort  (cost=418.57..418.58 rows=1 width=20) (actual
>> time=4439.360..4439.365 rows=4 loops=1)
>>   Sort Key: "title"
>>   Sort Method: quicksort  Memory: 25kB
>>   Buffers: shared hit=6847
>>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=414.55..418.56
>> rows=1 width=20) (actual time=4439.312..4439.329 rows=4 loops=1)
>> Recheck Cond: (("clientid" = 31239) AND ("time_job" >
>> 1457826264) AND (("title")::"text" ~~* 'RYAN SHOWER%'::"text"))
>> Buffers: shared hit=6847
>> ->  BitmapAnd  (cost=414.55..414.55 rows=1 width=0)
>> (actual time=4439.280..4439.280 rows=0 loops=1)
>>   Buffers: shared hit=6843
>>   ->  Bitmap Index Scan on "ix_jobs_client_times"
>>  (cost=0.00..50.67 rows=1711 width=0) (actual time=0.142..0.142 rows=795
>> loops=1)
>> Index Cond: (("clientid" = 31239) AND
>> ("time_job" > 1457826264))
>> Buffers: shared hit=8
>>   ->  Bitmap Index Scan on "ix_ja_jobs_trgm_gin"
>>  (cost=0.00..363.62 rows=483 width=0) (actual time=4439.014..4439.014
>> rows=32 loops=1)
>> Index Cond: (("title")::"text" ~~* 'RYAN
>> SHOWER%'::"text")
>> Buffers: shared hit=6835
>> Total runtime: 4439.427 ms
>
>
> Here [1] it appears to be working even with two %.. But it's not for
> me
>
> [1] https://www.depesz.com/2011/02/19/waiting-for-9-1-faster-likeilike/
>
>
> Any idea? lol
>

Try redoing the query with a CTE as below; on 9.2 a CTE acts as an
optimization fence, so the DISTINCT titles for the client/time range are
materialized first and the ILIKE only has to filter that small set:

WITH ja_jobs as
  (SELECT DISTINCT title
 FROM ja_jobs
WHERE clientid = 31239  AND time_job > 1457826264
  )
SELECT title
  FROM ja_jobs
 WHERE title ILIKE 'RYAN WER%'
 ORDER BY title
 LIMIT 10;


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-11 Thread Lucas Possamai
>
>
>>
> The main problem is WHERE title ILIKE '%RYAN WER%'.
> When you put a % on the left of the text, there is no way to optimize
> that, so yes, it will be slow.
>
> If you can eliminate the leading percent and just have trailing, it will
> be much faster.
>
>

Hmm.. yep.. I suppose I can do that.

But, taking the left % off, the query is still slow:

Limit  (cost=418.57..418.58 rows=1 width=20) (actual
> time=4439.367..4439.381 rows=1 loops=1)
>   Buffers: shared hit=6847
>   ->  Unique  (cost=418.57..418.58 rows=1 width=20) (actual
> time=4439.363..4439.374 rows=1 loops=1)
> Buffers: shared hit=6847
> ->  Sort  (cost=418.57..418.58 rows=1 width=20) (actual
> time=4439.360..4439.365 rows=4 loops=1)
>   Sort Key: "title"
>   Sort Method: quicksort  Memory: 25kB
>   Buffers: shared hit=6847
>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=414.55..418.56
> rows=1 width=20) (actual time=4439.312..4439.329 rows=4 loops=1)
> Recheck Cond: (("clientid" = 31239) AND ("time_job" >
> 1457826264) AND (("title")::"text" ~~* 'RYAN SHOWER%'::"text"))
> Buffers: shared hit=6847
> ->  BitmapAnd  (cost=414.55..414.55 rows=1 width=0)
> (actual time=4439.280..4439.280 rows=0 loops=1)
>   Buffers: shared hit=6843
>   ->  Bitmap Index Scan on "ix_jobs_client_times"
>  (cost=0.00..50.67 rows=1711 width=0) (actual time=0.142..0.142 rows=795
> loops=1)
> Index Cond: (("clientid" = 31239) AND
> ("time_job" > 1457826264))
> Buffers: shared hit=8
>   ->  Bitmap Index Scan on "ix_ja_jobs_trgm_gin"
>  (cost=0.00..363.62 rows=483 width=0) (actual time=4439.014..4439.014
> rows=32 loops=1)
> Index Cond: (("title")::"text" ~~* 'RYAN
> SHOWER%'::"text")
> Buffers: shared hit=6835
> Total runtime: 4439.427 ms


Here [1] it appears to be working even with two %.. But it's not for me

[1] https://www.depesz.com/2011/02/19/waiting-for-9-1-faster-likeilike/


Any idea? lol


Re: [GENERAL] Scaling Database for heavy load

2016-05-11 Thread Melvin Davidson
On Wed, May 11, 2016 at 8:52 PM, Scott Marlowe 
wrote:

> On Wed, May 11, 2016 at 4:09 AM, Digit Penguin 
> wrote:
> > Hello,
> >
> >
> > we use PostgreSQL 9.x in conjunction with BIND/DNS for some companies,
> > with about 1,000 queries per second.
> > Now we have to scale the system up to about 100,000 queries per second.
> >
> > BIND/DNS is very light and I don't think it will be our bottleneck.
> > The question is how to dimension the backend database.
> >
> > The queries are almost all SELECTs (only a few INSERTs or UPDATEs); the
> > 100,000 queries per second are SELECT-only.
> >
> > How can I calculate/dimension this?
> > We are thinking of putting more than one BIND server (each with a backend
> > database) behind a router with load-balancing capabilities.
> >
> > The problem is knowing what requirements and limits a 64-bit PostgreSQL
> > 9.x installation has.
> > Furthermore, we tried Rubyrep (it is quite old!); can you suggest other
> > replication solutions that keep working even if the connection link to
> > the database server goes down?
>
> Definitely looks like multiple read slaves is the answer. How many
> depends on a few things.
>
> How big is your data set? How many clients need to have an open
> connection at a time? How many updates/inserts per second are we
> talking when you say "a few"? One per second? Ten, a hundred, a thousand?
>
> How often and for how long will your connection link be going down?
> Slony is quite robust. PostgreSQL's built-in streaming replication
> works well enough if you use something like WAL-E or OmniPITR to
> archive xlogs and make them available in case of loss of connection.
>
> --
> To understand recursion, one must first understand recursion.
>

You might also want to consider using pgbouncer to help balance the
connections/queries among the slaves.
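
For illustration, a minimal, hypothetical pgbouncer.ini for one such node
(host names, credentials and pool sizes are placeholders, not
recommendations):

[databases]
dns = host=replica1.example.com port=5432 dbname=dns

[pgbouncer]
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 50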

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-11 Thread Melvin Davidson
On Wed, May 11, 2016 at 8:30 PM, Lucas Possamai 
wrote:

> Hi there!
>
> I've got a simple but slow query:
>
>  SELECT DISTINCT title
>> FROM ja_jobs WHERE title ILIKE '%RYAN WER%'
>> and clientid = 31239  AND time_job > 1457826264
>> order BY title
>> limit 10
>
>
> Explain analyze:
>
> Limit  (cost=5946.40..5946.41 rows=1 width=19) (actual
>> time=2746.759..2746.772 rows=1 loops=1)
>>   ->  Unique  (cost=5946.40..5946.41 rows=1 width=19) (actual
>> time=2746.753..2746.763 rows=1 loops=1)
>> ->  Sort  (cost=5946.40..5946.41 rows=1 width=19) (actual
>> time=2746.750..2746.754 rows=4 loops=1)
>>   Sort Key: "title"
>>   Sort Method: quicksort  Memory: 25kB
>>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=49.02..5946.39
>> rows=1 width=19) (actual time=576.275..2746.609 rows=4 loops=1)
>> Recheck Cond: (("clientid" = 31239) AND ("time_job" >
>> 1457826264))
>> Filter: (("title")::"text" ~~* '%RYAN WER%'::"text")
>> Rows Removed by Filter: 791
>> ->  Bitmap Index Scan on "ix_jobs_client_times"
>>  (cost=0.00..49.02 rows=1546 width=0) (actual time=100.870..100.870
>> rows=795 loops=1)
>>   Index Cond: (("clientid" = 31239) AND
>> ("time_job" > 1457826264))
>> Total runtime: 2746.879 ms
>
>
> Then, I created a trgm index:
>
> CREATE INDEX ix_ja_jobs_trgm_gin ON public.ja_jobs USING gin (title
> gin_trgm_ops);
>
> Explain analyze after the index: (Yes, I ran the analyze)
>
> Limit  (cost=389.91..389.91 rows=1 width=20) (actual
>> time=3720.511..3720.511 rows=0 loops=1)
>>   ->  Unique  (cost=389.91..389.91 rows=1 width=20) (actual
>> time=3720.507..3720.507 rows=0 loops=1)
>> ->  Sort  (cost=389.91..389.91 rows=1 width=20) (actual
>> time=3720.505..3720.505 rows=0 loops=1)
>>   Sort Key: "title"
>>   Sort Method: quicksort  Memory: 25kB
>>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=385.88..389.90
>> rows=1 width=20) (actual time=3720.497..3720.497 rows=0 loops=1)
>> Recheck Cond: (("clientid" = 31239) AND ("time_job" >
>> 1457826264) AND (("title")::"text" ~~ '%RYAN WER%'::"text"))
>> Rows Removed by Index Recheck: 4
>> ->  BitmapAnd  (cost=385.88..385.88 rows=1 width=0)
>> (actual time=3720.469..3720.469 rows=0 loops=1)
>>   ->  Bitmap Index Scan on "ix_jobs_client_times"
>>  (cost=0.00..50.00 rows=1644 width=0) (actual time=0.142..0.142 rows=795
>> loops=1)
>> Index Cond: (("clientid" = 31239) AND
>> ("time_job" > 1457826264))
>>   ->  Bitmap Index Scan on "ix_ja_jobs_trgm_gin"
>>  (cost=0.00..335.63 rows=484 width=0) (actual time=3720.213..3720.213
>> rows=32 loops=1)
>> Index Cond: (("title")::"text" ~~ '%RYAN
>> WER%'::"text")
>> Total runtime: 3720.653 ms
>
>
>
> so... the query is still slow.
> Do you guys know what can be done, related to the ILIKE?
>
> cheers
> Lucas
>

The main problem is WHERE title ILIKE '%RYAN WER%'.
When you put a % on the left of the text, there is no way to optimize that,
so yes, it will be slow.

If you can eliminate the leading percent and just have trailing, it will be
much faster.
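
For trailing-only patterns, one option (a sketch, untested against this
schema) is a btree expression index with text_pattern_ops, which can serve
prefix matches once the case-insensitivity is handled with lower():

CREATE INDEX ix_ja_jobs_title_prefix
    ON public.ja_jobs (lower(title) text_pattern_ops);

-- The planner can extract the prefix from a constant pattern:
SELECT DISTINCT title
  FROM ja_jobs
 WHERE lower(title) LIKE 'ryan wer%'
   AND clientid = 31239 AND time_job > 1457826264
 ORDER BY title
 LIMIT 10;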


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: [GENERAL] Scaling Database for heavy load

2016-05-11 Thread Scott Marlowe
On Wed, May 11, 2016 at 4:09 AM, Digit Penguin  wrote:
> Hello,
>
>
> we use PostgreSQL 9.x in conjunction with BIND/DNS for some companies,
> with about 1,000 queries per second.
> Now we have to scale the system up to about 100,000 queries per second.
>
> BIND/DNS is very light and I don't think it will be our bottleneck.
> The question is how to dimension the backend database.
>
> The queries are almost all SELECTs (only a few INSERTs or UPDATEs); the
> 100,000 queries per second are SELECT-only.
>
> How can I calculate/dimension this?
> We are thinking of putting more than one BIND server (each with a backend
> database) behind a router with load-balancing capabilities.
>
> The problem is knowing what requirements and limits a 64-bit PostgreSQL
> 9.x installation has.
> Furthermore, we tried Rubyrep (it is quite old!); can you suggest other
> replication solutions that keep working even if the connection link to
> the database server goes down?

Definitely looks like multiple read slaves is the answer. How many
depends on a few things.

How big is your data set? How many clients need to have an open
connection at a time? How many updates/inserts per second are we
talking when you say "a few"? One per second? Ten, a hundred, a thousand?

How often and for how long will your connection link be going down?
Slony is quite robust. PostgreSQL's built-in streaming replication
works well enough if you use something like WAL-E or OmniPITR to
archive xlogs and make them available in case of loss of connection.
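
The archiving side of that is a couple of lines in postgresql.conf on the
master; the wal-e invocation below is illustrative only, and the envdir
path is a placeholder:

wal_level = hot_standby
archive_mode = on                 # changing this requires a restart
archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'
max_wal_senders = 5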

-- 
To understand recursion, one must first understand recursion.




[GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-11 Thread Lucas Possamai
Hi there!

I've got a simple but slow query:

 SELECT DISTINCT title
> FROM ja_jobs WHERE title ILIKE '%RYAN WER%'
> and clientid = 31239  AND time_job > 1457826264
> order BY title
> limit 10


Explain analyze:

Limit  (cost=5946.40..5946.41 rows=1 width=19) (actual
> time=2746.759..2746.772 rows=1 loops=1)
>   ->  Unique  (cost=5946.40..5946.41 rows=1 width=19) (actual
> time=2746.753..2746.763 rows=1 loops=1)
> ->  Sort  (cost=5946.40..5946.41 rows=1 width=19) (actual
> time=2746.750..2746.754 rows=4 loops=1)
>   Sort Key: "title"
>   Sort Method: quicksort  Memory: 25kB
>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=49.02..5946.39
> rows=1 width=19) (actual time=576.275..2746.609 rows=4 loops=1)
> Recheck Cond: (("clientid" = 31239) AND ("time_job" >
> 1457826264))
> Filter: (("title")::"text" ~~* '%RYAN WER%'::"text")
> Rows Removed by Filter: 791
> ->  Bitmap Index Scan on "ix_jobs_client_times"
>  (cost=0.00..49.02 rows=1546 width=0) (actual time=100.870..100.870
> rows=795 loops=1)
>   Index Cond: (("clientid" = 31239) AND
> ("time_job" > 1457826264))
> Total runtime: 2746.879 ms


Then, I created a trgm index:

CREATE INDEX ix_ja_jobs_trgm_gin ON public.ja_jobs USING gin (title
gin_trgm_ops);

Explain analyze after the index: (Yes, I ran the analyze)

Limit  (cost=389.91..389.91 rows=1 width=20) (actual
> time=3720.511..3720.511 rows=0 loops=1)
>   ->  Unique  (cost=389.91..389.91 rows=1 width=20) (actual
> time=3720.507..3720.507 rows=0 loops=1)
> ->  Sort  (cost=389.91..389.91 rows=1 width=20) (actual
> time=3720.505..3720.505 rows=0 loops=1)
>   Sort Key: "title"
>   Sort Method: quicksort  Memory: 25kB
>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=385.88..389.90
> rows=1 width=20) (actual time=3720.497..3720.497 rows=0 loops=1)
> Recheck Cond: (("clientid" = 31239) AND ("time_job" >
> 1457826264) AND (("title")::"text" ~~ '%RYAN WER%'::"text"))
> Rows Removed by Index Recheck: 4
> ->  BitmapAnd  (cost=385.88..385.88 rows=1 width=0)
> (actual time=3720.469..3720.469 rows=0 loops=1)
>   ->  Bitmap Index Scan on "ix_jobs_client_times"
>  (cost=0.00..50.00 rows=1644 width=0) (actual time=0.142..0.142 rows=795
> loops=1)
> Index Cond: (("clientid" = 31239) AND
> ("time_job" > 1457826264))
>   ->  Bitmap Index Scan on "ix_ja_jobs_trgm_gin"
>  (cost=0.00..335.63 rows=484 width=0) (actual time=3720.213..3720.213
> rows=32 loops=1)
> Index Cond: (("title")::"text" ~~ '%RYAN
> WER%'::"text")
> Total runtime: 3720.653 ms



so... the query is still slow.
Do you guys know what can be done, related to the ILIKE?

cheers
Lucas


[GENERAL] Meetup in Boston city?

2016-05-11 Thread Josh berkus
Boston folks:

I'll be in Boston on the 24th/25th for ContainerDays.  It would be nice
to meet up with other Postgres folks there ... except that it looks like
the Boston PUG is located in Bedford?

Is there a meetup in Boston or Cambridge?

-- 
--
Josh Berkus
Red Hat OSAS
(any opinions are my own)




Re: [GENERAL] Using both ident and password in pg_hba.conf

2016-05-11 Thread Bruno Wolff III

On Mon, May 09, 2016 at 22:43:53 -0400,
 "D'Arcy J.M. Cain"  wrote:


Of course PHP scripts have to run as nobody, so I have no choice other
than to have them store passwords in various config.php files, but PHP
users are used to that.  I would like to fix that, but that's a war for
another day.


You can use peer authentication if the php scripts run on the same machine 
as the database, though you'd probably want to use a different local user 
than 'nobody' to run under.
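
A hypothetical pg_hba.conf line for that arrangement ("appdb" and
"phpuser" stand in for your database and for the dedicated local user the
PHP scripts would run as):

# TYPE  DATABASE  USER     METHOD
local   appdb     phpuser  peer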





Re: [GENERAL] Invalid data read from synchronously replicated hot standby

2016-05-11 Thread Kevin Grittner
On Wed, May 11, 2016 at 5:44 AM,
 wrote:

>> We are getting invalid data when reading from a synchronously
>> replicated hot standby node in a 2-node setup. To better understand
>> the situation, we have created a document that provides an overview.
>> We are hoping that someone might be able to confirm whether or not
>> the setup makes sense, i.e., whether we are using PostgreSQL
>> correctly and experiencing a bug, or if we are using PostgreSQL
>> incorrectly.
>>
>> Link to document that contains a step-by-step description of the
>> situation:
>> https://docs.google.com/document/d/1MuX8rq1gKw_WZ-HVflqxFslvXNTRGKa77A4NHto4ue0/edit?usp=sharing

Please include such information in your post or as an attachment.
Who knows whether that link will still be usable and unchanged 20
years from now?

>> If the setup is sane (and expected to work),

I didn't see anywhere that you correctly handled WAL in setting up
your standby.  I am not surprised by there being corruption,
including duplicate keys in a unique index.  You might try -x or -X
when you run pg_basebackup, or use archiving.  Whatever you do, do
NOT delete the backup_label file!
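
For example (a sketch; host, user and data directory are placeholders):

# -X stream (9.2+) streams the needed WAL while the backup runs;
# on 9.1, use -x to fetch the required WAL at the end instead.
pg_basebackup -h master.example.com -U replicator \
    -D /var/lib/postgresql/data -X stream -P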

> In the mean time, we are preparing a new platform on 9.5.2 where
> I have not been able to reproduce the issue (however, we have
> introduced a lot of changes besides upgrading PostgreSQL).

We would need a lot more detail to be able to even guess at whether
you have actually solved the flaws in your process or have just
been lucky so far.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] Lightest way of checking if postgresql is running at the other end of an ssh tunnel?

2016-05-11 Thread Karsten Hilbert
On Wed, May 11, 2016 at 02:28:47PM +0200, Vik Fearing wrote:

> >> We have an ssh connection running from one server to our
> >> postgresql database on another server. Sometimes we
> >> experience that the ssh tunnel does not work anymore and
> >> needs to be restarted, even though we use the autossh
> >> package. I would like to write a script that “pings”
> >> postgresql on the specified port, to check if the connection
> >> goes through. I have tried with netcat, but it does not
> >> really check if postgresql is at the other end of the tunnel;
> >> it only checks if there is a service (the tunnel) listening on
> >> the port on the local machine. Is there another way of
> >> pinging the port, to see if postgresql is alive at the other
> >> end? If possible, I would like NOT to actually establish a
> >> connection to postgresql as if I used psql -c “select 1;”,
> >> to avoid connection overhead.
> > 
> > This
> > 
> > http://www.postgresql.org/docs/devel/static/libpq-connect.html
> > 
> > talks about ping functionality. Maybe you can use a tiny
> > custom piece of code?
> 
> That tiny custom piece of code would be this:
> 
> http://www.postgresql.org/docs/current/static/app-pg-isready.html

That's what I had in mind :-)

Thanks,
Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] Lightest way of checking if postgresql is running at the other end of an ssh tunnel?

2016-05-11 Thread Vik Fearing
On 05/11/2016 11:41 AM, Karsten Hilbert wrote:
> On Wed, May 11, 2016 at 11:17:54AM +0200, Niels Kristian Schjødt wrote:
> 
>> We have an ssh connection running from one server to our
>> postgresql database on another server. Sometimes we
>> experience that the ssh tunnel does not work anymore and
>> needs to be restarted, even though we use the autossh
>> package. I would like to write a script that “pings”
>> postgresql on the specified port, to check if the connection
>> goes through. I have tried with netcat, but it does not
>> really check if postgresql is at the other end of the tunnel;
>> it only checks if there is a service (the tunnel) listening on
>> the port on the local machine. Is there another way of
>> pinging the port, to see if postgresql is alive at the other
>> end? If possible, I would like NOT to actually establish a
>> connection to postgresql as if I used psql -c “select 1;”,
>> to avoid connection overhead.
> 
> This
> 
>   http://www.postgresql.org/docs/devel/static/libpq-connect.html
> 
> talks about ping functionality. Maybe you can use a tiny
> custom piece of code?

That tiny custom piece of code would be this:

http://www.postgresql.org/docs/current/static/app-pg-isready.html
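
Pointed at the tunnel's local end, that would look something like this
(pg_isready ships with the 9.3+ client tools; the port below is a
placeholder for the tunnel's local port):

pg_isready -h localhost -p 5433 -t 3

Exit status 0 means the server is accepting connections, 1 means it is
rejecting them (e.g. during startup), and 2 means no response.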

-- 
Vik Fearing  +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




Re: [GENERAL] Streaming replication, master recycling

2016-05-11 Thread Sameer Kumar
On Wed, May 11, 2016 at 4:35 PM  wrote:

> I apologise for the missing data.
>
> we are running 9.1.15 on debian servers.
>
>
I think there was a patch in v9.3 which makes sure that if the master has
been shut down properly (smart or fast mode), pending WAL is replicated
before it shuts down. Also, timeline switches are written to the WAL files
since v9.3.

So I don't see a reason why a proper switchover, with a fast shutdown of
the master and promotion of the standby, would cause trouble with v9.3 or
greater.

Of course I can be wrong (and naive!), and this may not apply to your case.


> when we promote the old slave, it seems to go fine. Are you saying that it
> will cause issues down the line if the previous master is not shut down
> before promoting?
>

You might want to share your recovery.conf on standby node and the
recovery.conf which you add on the lost node (old master) while adding it
as a standby.
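
Something along these lines, for reference (a sketch only; connection
details and paths are placeholders):

# recovery.conf on the node being re-attached as a standby
standby_mode = 'on'
primary_conninfo = 'host=new-master port=5432 user=replicator'
recovery_target_timeline = 'latest'
trigger_file = '/var/lib/postgresql/9.1/main/failover.trigger'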


>
> I was actually more concerned with the fact that we (sometimes) recycle
> the old master without doing a full basebackup.
>

I have done this with v9.2 and v9.3 and it seems to work fine, as long as
you have not missed any transactions from the master (a controlled
switchover). If the master went down before it could replicate the last
committed transaction, I don't think the lost node (old master) will be
able to join the standby's new timeline, so your replication would not
work (even though the node has been started up).


> Again, this seems to work, but this presentation seems to indicate that
> this can cause problems (while seeming to work):
> http://hlinnaka.iki.fi/presentations/NordicPGDay2015-pg_rewind.pdf
>
> The note is on page 14, under the headline: "Naive approach".
>
>
> thank you for your support,
> Fredrik
>
> On 11 May 2016 at 12:47:13 +02:00, Venkata Balaji N 
> wrote:
>
>
> On Wed, May 11, 2016 at 2:31 PM,  wrote:
>
> Hi All,
>
> we are currently using streaming replication on multiple node pairs. We
> are seeing some issues, but I am mainly interested in clarification.
>
> When a failover occurs, we touch the trigger file, promoting the previous
> slave to master. That works perfectly.
>
> For recycling the previous master, we create a recovery.conf (with
> recovery_target_timeline = 'latest') and *try* to start up. If postgresql
> starts up, we accept it as a new slave. If it does not, we proceed with a
> full basebackup.
>
>
> Which version of PostgreSQL are you using?
>
> You need to shut down the master first and then promote the slave (and
> then the other way round), but this can be clarified only if you let us
> know the PostgreSQL version. This is quite tricky in 9.2.x and from 9.3.x.
>
> Regards,
> Venkata B N
>
> Fujitsu Australia
>
>
> --
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


Re: [GENERAL] Streaming replication, master recycling

2016-05-11 Thread fredrik
I apologise for the missing data.

we are running 9.1.15 on Debian servers.

when we promote the old slave, it seems to go fine. Are you saying that it will 
cause issues down the line if the previous master is not shut down before 
promoting?

I was actually more concerned with the fact that we (sometimes) recycle the 
old master without doing a full basebackup. Again, this seems to work, but this 
presentation seems to indicate that this can cause problems (while seeming to 
work): http://hlinnaka.iki.fi/presentations/NordicPGDay2015-pg_rewind.pdf

The note is on page 14, under the headline: "Naive approach".


thank you for your support,

Fredrik
On 11 May 2016 at 12:47:13 +02:00, Venkata Balaji N  wrote:

> 
> 
> On Wed, May 11, 2016 at 2:31 PM, <> wrote:
> 
> > Hi All,
> > 
> > we are currently using streaming replication on multiple node pairs. We are 
> > seeing some issues, but I am mainly interested in clarification.
> > 
> > When a failover occurs, we touch the trigger file, promoting the previous 
> > slave to master. That works perfectly.
> > 
> > For recycling the previous master, we create a recovery.conf (with 
> > recovery_target_timeline = 'latest') and *try* to start up. If postgresql 
> > starts up, we accept it as a new slave. If it does not, we proceed with a 
> > full basebackup.
> > 
> Which version of PostgreSQL are you using?
> 
> You need to shut down the master first and then promote the slave (and 
> then the other way round), but this can be clarified only if you let us 
> know the PostgreSQL version. This is quite tricky in 9.2.x and from 9.3.x.
> 
> Regards,
> Venkata B N
> 
> Fujitsu Australia
>



Re: [GENERAL] Streaming replication, master recycling

2016-05-11 Thread Venkata Balaji N
On Wed, May 11, 2016 at 2:31 PM,  wrote:

> Hi All,
>
> we are currently using streaming replication on multiple node pairs. We
> are seeing some issues, but I am mainly interested in clarification.
>
> When a failover occurs, we touch the trigger file, promoting the previous
> slave to master. That works perfectly.
>
> For recycling the previous master, we create a recovery.conf (with
> recovery_target_timeline = 'latest') and *try* to start up. If postgresql
> starts up, we accept it as a new slave. If it does not, we proceed with a
> full basebackup.
>

Which version of PostgreSQL are you using?

You need to shut down the master first and then promote the slave (and
then the other way round), but this can be clarified only if you let us
know the PostgreSQL version. This is quite tricky in 9.2.x and from 9.3.x.

Regards,
Venkata B N

Fujitsu Australia


Re: [GENERAL] Invalid data read from synchronously replicated hot standby

2016-05-11 Thread martin . kamp . jensen
Sameer Kumar  wrote on 04/21/2016 13:56:52:

> On Thu, 21 Apr 2016 04:05,  wrote:
> Hi,
>
> We are getting invalid data when reading from a synchronously
> replicated hot standby node in a 2-node setup. To better understand
> the situation, we have created a document that provides an overview.
> We are hoping that someone might be able to confirm whether or not
> the setup makes sense, i.e., whether we are using PostgreSQL
> correctly and experiencing a bug, or if we are using PostgreSQL
> incorrectly.
>
> Link to document that contains a step-by-step description of the
> situation:
> https://docs.google.com/document/d/1MuX8rq1gKw_WZ-HVflqxFslvXNTRGKa77A4NHto4ue0/edit?usp=sharing
>
> If the setup is sane (and expected to work), we will work on setting
> up a minimal reproduction that avoids our complete system. We are
> thinking that a scripted Ansible/Vagrant setup makes sense.
> 
> I am not sure if it is because of that but you are on an old patch. 
> Upgrade to latest (I guess 9.1.21).

I have reproduced the issue on 9.1.20, which is the latest version for 
Debian 6 (yes, I know, old stuff).

In the meantime, we are preparing a new platform on 9.5.2 where I have 
not been able to reproduce the issue (however, we have introduced a lot of 
changes besides upgrading PostgreSQL). I would have liked to come up with 
a minimal reproduction to be able to reason about the issue, but I guess 
we will not pursue that for now.

> 
> Once you have upgraded, re-create the standby from scratch using a 
> basebackup and then see if the error is still there.
> 
> 
> Best regards, 
> Martin 
> -- 
> --
> Best Regards
> Sameer Kumar | DB Solution Architect 
> ASHNIK PTE. LTD.
> 101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533
> T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com
> 

Re: [GENERAL] Scaling Database for heavy load

2016-05-11 Thread Chris Travers
On Wed, May 11, 2016 at 12:09 PM, Digit Penguin 
wrote:

> Hello,
>
>
> we use PostgreSQL 9.x in conjunction with BIND/DNS for some companies,
> with about 1,000 queries per second.
> Now we have to scale the system up to about 100,000 queries per second.
>
> BIND/DNS is very light and I don't think it will be our bottleneck.
> The question is how to dimension the backend database.
>
> The queries are almost all SELECTs (only a few INSERTs or UPDATEs); the
> 100,000 queries per second are SELECT-only.
>
> How can I calculate/dimension this?
> We are thinking of putting more than one BIND server (each with a backend
> database) behind a router with load-balancing capabilities.
>
> The problem is knowing what requirements and limits a 64-bit PostgreSQL
> 9.x installation has.
> Furthermore, we tried Rubyrep (it is quite old!); can you suggest other
> replication solutions that keep working even if the connection link to
> the database server goes down?
>

If they are almost all SELECT queries and a little lag between write and
read visibility is OK, I would recommend streaming replication, Slony, or
Bucardo, and querying against your replicas.  A specific architecture using
one or more of these replication technologies would need to be designed
based on your specific needs, of course.
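
When querying against replicas, a quick sanity check such as this (a
sketch; both functions exist in 9.1+) tells you whether a node is a
replica and roughly how far behind it is:

SELECT pg_is_in_recovery() AS is_replica,
       now() - pg_last_xact_replay_timestamp() AS approx_lag;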

>
> Thank you!
> Francesco
>



-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


[GENERAL] Scaling Database for heavy load

2016-05-11 Thread Digit Penguin
Hello,


we use PostgreSQL 9.x in conjunction with BIND/DNS for some companies,
with about 1,000 queries per second.
Now we have to scale the system up to about 100,000 queries per second.

BIND/DNS is very light and I don't think it will be our bottleneck.
The question is how to dimension the backend database.

The queries are almost all SELECTs (only a few INSERTs or UPDATEs); the
100,000 queries per second are SELECT-only.

How can I calculate/dimension this?
We are thinking of putting more than one BIND server (each with a backend
database) behind a router with load-balancing capabilities.

The problem is knowing what requirements and limits a 64-bit PostgreSQL
9.x installation has.
Furthermore, we tried Rubyrep (it is quite old!); can you suggest other
replication solutions that keep working even if the connection link to
the database server goes down?

Thank you!
Francesco


Re: [GENERAL] Lightest way of checking if postgresql is running at the other end of an ssh tunnel?

2016-05-11 Thread Karsten Hilbert
On Wed, May 11, 2016 at 11:17:54AM +0200, Niels Kristian Schjødt wrote:

> We have an ssh connection running from one server to our
> postgresql database on another server. Sometimes we
> experience that the ssh tunnel does not work anymore and
> needs to be restarted, even though we use the autossh
> package. I would like to write a script that “pings”
> postgresql on the specified port, to check if the connection
> goes through. I have tried with netcat, but it does not
> really check if postgresql is at the other end of the tunnel;
> it only checks if there is a service (the tunnel) listening on
> the port on the local machine. Is there another way of
> pinging the port, to see if postgresql is alive at the other
> end? If possible, I would like NOT to actually establish a
> connection to postgresql as if I used psql -c “select 1;”,
> to avoid connection overhead.

This

http://www.postgresql.org/docs/devel/static/libpq-connect.html

talks about ping functionality. Maybe you can use a tiny
custom piece of code?
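
A minimal sketch of such a piece of code, using libpq's PQping(): it asks
whether the server is accepting connections without authenticating or
starting a real session. The conninfo string is a placeholder for the
tunnel's local endpoint.

/* pgping.c
 * build with: cc pgping.c -I$(pg_config --includedir) \
 *                         -L$(pg_config --libdir) -lpq
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* PQping only probes server liveness; no credentials are needed. */
    PGPing r = PQping("host=localhost port=5432 connect_timeout=3");
    if (r == PQPING_OK)
        printf("postgresql is alive at the other end\n");
    else
        printf("no usable response\n");
    return (r == PQPING_OK) ? 0 : 1;
}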

Karsten
-- 
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




[GENERAL] Lightest way of checking if postgresql is running at the other end of an ssh tunnel?

2016-05-11 Thread Niels Kristian Schjødt
Hi,

We have an ssh connection running from one server to our postgresql database on 
another server. Sometimes we experience that the ssh tunnel does not work 
anymore and needs to be restarted, even though we use the autossh package. I 
would like to write a script that “pings” postgresql on the specified port, to 
check if the connection goes through. I have tried with netcat, but it does not 
really check if postgresql is at the other end of the tunnel; it only checks if 
there is a service (the tunnel) listening on the port on the local machine. Is 
there another way of pinging the port, to see if postgresql is alive at the 
other end? If possible, I would like NOT to actually establish a connection 
to postgresql as if I used psql -c “select 1;”, to avoid connection overhead.

Any ideas?
