On Fri, May 15, 2015 at 10:51:15AM -0400, Bruce Momjian wrote:
> On Fri, May 15, 2015 at 10:49:43AM -0400, Stephen Frost wrote:
> > Bruce,
> >
> > * Bruce Momjian (br...@momjian.us) wrote:
> > > On Mon, Mar 9, 2015 at 12:43:05PM -0400, Bruce Momjian wrote:
> > > > On Fri, Mar 6, 2015 at 06:10:15
On Fri, May 15, 2015 at 9:18 AM, Job wrote:
> Hello,
>
> I have a table of about 10 million records, with an index on a string
> field.
> Currently it is alphabetical; since queries run at about 100-200 per second, I was
> looking for a better way to improve performance and reduce workload.
>
> T
Hello Francesco,
You should probably turn \timing on, run EXPLAIN ANALYZE, and use pgBadger
to diagnose your performance issue.
While it may be the case that comparison in the index might be slightly
faster because of the modulo arithmetic, those in-memory operations are
extremely fast and it is
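The diagnostics suggested above might look like this in psql (table and column names are invented for illustration):

```sql
-- report per-statement execution time in psql
\timing on

-- show the actual plan, row counts, and buffer usage for a slow query
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM items WHERE category = 'books';
```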
Thanks for that comprehensive response!
And you are right about practicing restore, I never had to :-)
However, I use pg_dump on a regular basis (custom format) but I did not know
the difference between database/database cluster (and pg_dumpall) until I had
to move everything because the PGDATA
Hello,
I recently installed a PostgreSQL server to this spec:
server v9.3.6
EnterpriseDB mongo_fdw vREL-4_0_0
libbson v1.1.5
mongo C driver v1.1.5
and Mongo is at 2.7.1. Mapping fields in Mongo documents, including _id,
has been successful, with the exception of nested fields. Assuming my Mongo
Yes, that's my suggestion. btree_gin deals with lots of repeated values much
better than a plain btree index, as repeated keys are only stored once.
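As a sketch (extension, table, and column names are assumed, not from the thread), switching such a low-cardinality column to btree_gin could look like:

```sql
-- btree_gin supplies GIN operator classes for plain scalar types
CREATE EXTENSION IF NOT EXISTS btree_gin;

-- GIN stores each distinct key once with a compact posting list of rows,
-- which suits a column holding only ~50 distinct values
CREATE INDEX items_category_gin_idx ON items USING gin (category);
```

Note that GIN indexes cannot be declared UNIQUE, so uniqueness enforcement remains a btree feature.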
On 15/05/2015 12:38, "Job" wrote:
> Hello Arthur!
>
> So, I read that btree-gin has "the ability to enforce uniqueness".
>
> If in this 10.millio
Hi Daniel:
On Fri, May 15, 2015 at 5:35 PM, Daniel Begin wrote:
> Bonjour Francisco.
Buenos dias.
> Skimming the documentation sequentially is clever advice, especially since
> the doc is most of the time well done and exhaustive. Unfortunately, even if
> I actually did it about 1 year ago,
Are you saying your indexed field has only 50 distinct values? That seems a horrible
candidate for an index. It might be good to partition on those fifty values, but
ten million records probably don't warrant that.
Sent from my iPhone
> On May 15, 2015, at 9:34 AM, Job wrote:
>
> Hello Arthur!
>
>
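For reference, the partitioning idea in the 9.x series would use inheritance with CHECK constraints (declarative partitioning did not exist yet); a minimal sketch with invented names:

```sql
CREATE TABLE items (id bigint, category text);

-- one child table per frequent value; the CHECK constraint lets
-- constraint_exclusion prune irrelevant children at plan time
CREATE TABLE items_books  (CHECK (category = 'books'))  INHERITS (items);
CREATE TABLE items_movies (CHECK (category = 'movies')) INHERITS (items);

-- queries against the parent only scan the matching children
SELECT count(*) FROM items WHERE category = 'books';
```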
Hello Arthur!
So, I read that btree-gin has "the ability to enforce uniqueness".
If, in this table of 10 million rows, the index holds about 50 recurring values, can I
keep the alphabetical field and change its index to btree-gin?!
Thank you!
Francesco
Bonjour Francisco.
Skimming the documentation sequentially is clever advice, especially since
the doc is most of the time well done and exhaustive. Unfortunately, even if I
actually did it about 1 year ago, it seems this specific item slipped out of my
mind :-(
About dump/restore operation,
You should probably experiment with a btree-gin index on those.
On 15/05/2015 12:22, "Job" wrote:
> Hello,
>
> I have a table of about 10 million records, with an index on a string
> field.
> Currently it is alphabetical; since queries run at about 100-200 per second, I
> was looking for a bett
Hello,
I have a table of about 10 million records, with an index on a string
field.
Currently it is alphabetical; since queries run at about 100-200 per second, I was
looking for a better way to improve performance and reduce workload.
The unique values of that field number about 50 (categor
On Fri, May 15, 2015 at 10:49:43AM -0400, Stephen Frost wrote:
> Bruce,
>
> * Bruce Momjian (br...@momjian.us) wrote:
> > On Mon, Mar 9, 2015 at 12:43:05PM -0400, Bruce Momjian wrote:
> > > On Fri, Mar 6, 2015 at 06:10:15PM -0500, Stephen Frost wrote:
> > > > The first is required or anyone who
Bruce,
* Bruce Momjian (br...@momjian.us) wrote:
> On Mon, Mar 9, 2015 at 12:43:05PM -0400, Bruce Momjian wrote:
> > On Fri, Mar 6, 2015 at 06:10:15PM -0500, Stephen Frost wrote:
> > > The first is required or anyone who has done that will get the funny
> > > error that started this thread and t
On Fri, May 15, 2015 at 01:10:27PM +0200, Michael Meskes wrote:
> On 14.05.2015 19:35, Bruce Momjian wrote:
> > On Fri, May 31, 2013 at 02:26:08PM +0200, Leif Jensen wrote:
> >>Hi guys.
> >>
> >>In the ECPG manual (including latest 9.1.9) about ECPG SQL SET
> >> CONNECTION ; it is stated t
I think I spotted the problem today: I am missing a role on node2.
Is there a pointer to, or can you provide, a list of steps to take for the manual
cleanup mentioned in the log file? I am assuming I just need to remove the
relevant entries in the bdr tables on node2 only, in my case. Is that correct
On node1:
apimgtdb=# SELECT * FROM bdr.bdr_nodes
apimgtdb-# ;
 node_sysid | node_timeline | node_dboid | node_status | node_name | node_local_dsn | node_init_from_dsn
------------+---------------+------------+-------------+-----------+----------------+--------------------
Sachin Srivastava wrote:
> How can I speed up my daily pg_dump backup? Can I use the parallel option (which was
> introduced in Postgres 9.3)
> with Postgres 9.1? Is there any way I can use this for a 9.1 database?
You cannot do that.
Switch to file system backup, that is much faster.
Yours,
Laurenz Albe
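A file-system-level backup, as suggested, can be taken with pg_basebackup (host, user, and path here are invented); unlike pg_dump it copies the whole cluster at the file level:

```shell
# requires a replication connection permitted in pg_hba.conf;
# -Ft -z writes compressed tar archives, -P shows progress
pg_basebackup -h localhost -U postgres -D /backups/base_$(date +%F) -Ft -z -P
```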
On Fri, May 15, 2015 at 8:54 PM, Mihamina Rakotomandimby
wrote:
> On 05/15/2015 02:46 PM, Sachin Srivastava wrote:
>> How can I speed up my daily pg_dump backup? Can I use the parallel option (which was
>> introduced in Postgres 9.3) with Postgres 9.1? Is there any way I can use
>> this for a 9.1 database?
On 05/15/2015 02:46 PM, Sachin Srivastava wrote:
Hi,
How can I speed up my daily pg_dump backup? Can I use the parallel
option (which was introduced in Postgres 9.3) with Postgres 9.1? Is there
any way I can use this for a 9.1 database?
IMHO, if it has been introduced in 9.3, it is not in 9.1, unless
Hi,
How can I speed up my daily pg_dump backup? Can I use the parallel option (which was
introduced in Postgres 9.3) with Postgres 9.1? Is there any way I can use
this for a 9.1 database?
My database size is 820 GB and it’s taking 7 hours to complete.
*Postgres Version: 9.1.2*
*PostGIS: 1.5*
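For context, the parallel dump referred to looks like this with a 9.3-or-newer pg_dump client; it requires the directory output format (database name and path are invented for illustration):

```shell
# -Fd: directory output format (required for parallel dumps)
# -j 4: four parallel worker connections
pg_dump -Fd -j 4 -f /backups/mydb.dir mydb
```

The -j option only exists in 9.3+ client tools, which is why a 9.1 installation's own pg_dump cannot use it.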
On 14.05.2015 19:35, Bruce Momjian wrote:
> On Fri, May 31, 2013 at 02:26:08PM +0200, Leif Jensen wrote:
>>Hi guys.
>>
>>In the ECPG manual (including latest 9.1.9) about ECPG SQL SET CONNECTION
>> ; it is stated that "This is not thread-aware".
>>
>>When looking in the ecpg library co
Hi Sachin,
2015-05-15 11:35 GMT+02:00 Sachin Srivastava :
> Dear Concern,
>
> When I install PostgreSQL 9.1 with PostGIS 1.5, it creates a
> "template_postgis" database by default.
>
> But when I install the PostgreSQL 9.3 listed below, with PostGIS 2.1.7,
>
> postgresql-9.3.6
I use this script, run nightly via crontab, on my small pgsql servers.
It runs as the postgres user.
#!/bin/bash
/usr/pgsql-9.3/bin/pg_dumpall --globals-only | gzip > /home2/backups/pgsql/pgdumpall.globals.`date +\%a`.sql.gz
for db in $(psql -tc "select datname from pg_database where not
dati
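The loop above is cut off by the digest; a minimal sketch of how such a per-database loop is typically finished (the datistemplate filter, pg_dump flags, and file names are assumptions, not the author's exact code):

```shell
for db in $(psql -tAc "select datname from pg_database where not datistemplate"); do
    # custom-format dump per database, rotated by day of week
    /usr/pgsql-9.3/bin/pg_dump -Fc "$db" > "/home2/backups/pgsql/${db}.$(date +%a).dump"
done
```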
Dear Concern,
When I install PostgreSQL 9.1 with PostGIS 1.5, it creates a
"template_postgis" database by default.
But when I install the PostgreSQL 9.3 listed below, with PostGIS 2.1.7,
postgresql-9.3.6-2-windows-x64
postgis-bundle-pg93x64-setup-2.1.7-1
And PostgreSQL versi
Hi Daniel:
On Wed, May 13, 2015 at 8:06 PM, Daniel Begin wrote:
...
> - I still have a lot to learn on database management (it was simpler on
> user's side!-)
Yep, we all do, even if we've been using it since it was called Postgres.
> Fortunately, I have found that pg_dumpall could do the job
On 15 May 2015 at 04:26, Dennis wrote:
>
> What am I missing? How are the steps different from setting database
> replication?
>
>
Please show the log output from both nodes, and the contents of "SELECT *
FROM bdr.bdr_nodes" and "SELECT * FROM bdr.bdr_connections" on each node.
--
Craig Ringer