ERR ORR writes:
>> Queries which use "WHERE "TST_PAYLOAD" LIKE 'SEAT%'" go to the btree
>> index as they should.
>> Queries which use "WHERE "TST_PAYLOAD" LIKE '%EAT%'" *should* use the
>> GIST index but do a full table scan instead.
Are you sure it "should" use the index for that? That query does
On 22/01/2013 18:04, Tom Lane wrote:
Edson Richter writes:
Almost all indexed columns of kind varchar in my database require
"varchar_pattern_ops" op class in order to make my LIKE queries more
optimized.
Is there any way to define that this operator class is the default for
my database for varchar columns?
Edson Richter writes:
> Almost all indexed columns of kind varchar in my database require
> "varchar_pattern_ops" op class in order to make my LIKE queries more
> optimized.
> Is there any way to define that this operator class is the default for
> my database for varchar columns?
No. (Well,
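For reference, the standard per-index workaround the thread circles around looks like this (table and column names are illustrative, not from the messages):

```sql
-- There is no database-wide default, so the opclass must be named in each
-- CREATE INDEX explicitly:
CREATE INDEX customer_name_pattern_idx
    ON customer (name varchar_pattern_ops);
-- This lets the planner use the index for left-anchored LIKE patterns
-- even in a non-C locale:
--   SELECT * FROM customer WHERE name LIKE 'SEAT%';
```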
On 01/22/2013 10:59 AM, Alvaro Herrera wrote:
Steve Crawford wrote:
On 01/21/2013 05:02 PM, Tim Uckun wrote:
I already posted the schema earlier. It's a handful of integer fields
with one hstore field.
Oh well. I can't find it but maybe it got lost in shipping or eaten
by a spam filter.
Th
On 22/01/2013 15:57, Edson Richter wrote:
I'm rephrasing the question I posted yesterday, because I have used
the wrong terminology and caused confusion (and, as a result, got no
response to my question).
Almost all indexed columns of kind varchar in my database require
"varchar_pattern_ops" op class in order to make my LIKE queries more optimized.
On 01/22/2013 09:57 AM, Rich Shepard wrote:
I neglected to dump a single table before adding additional rows to
it via
psql. Naturally, I messed up the table. I have a full pg_dumpall of all
three databases and all their tables in a single .sql file from 2 days
ago.
The file is 386M in size and emacs is taking a very long time to move around in it.
On Tuesday, 22 January 2013 at 18:08 +0800, Roela B Montecillo wrote:
> > Hi and good afternoon.
> >
> > I would like to inquire if you can help me search about a third party
> > software or any solution that can handle PCI-DSS requirements on PostgreSQL
> > database access and privileges. I am free to discuss options in case you
> > already have a known solution that we can implement.
Steve Crawford wrote:
> On 01/21/2013 05:02 PM, Tim Uckun wrote:
> >I already posted the schema earlier. It's a handful of integer fields
> >with one hstore field.
>
> Oh well. I can't find it but maybe it got lost in shipping or eaten
> by a spam filter.
This is what we have the archives for.
On 01/21/2013 06:21 PM, Kevin Grittner wrote:
Kevin Grittner wrote:
...
shared_buffers | 32MB | configuration file
...
I did absolutely no tuning from the default configuration.
But Tim has his shared_buffers set to 1600kB
("shared_buffers";"1600kB";"configuration file")
On 01/21/2013 05:02 PM, Tim Uckun wrote:
I already posted the schema earlier. It's a handful of integer fields
with one hstore field.
Oh well. I can't find it but maybe it got lost in shipping or eaten by a
spam filter.
On Tue, Jan 22, 2013 at 1:23 PM, Steve Crawford
wrote:
select * fro
Rich Shepard wrote:
> Is there a way I can extract a single table's schema and data from the
> full backup? If so, I can then drop the fubar'd table and do it correctly
> this time.
If you have a server with enough free space, you could restore
the whole cluster and then selectively dump what you
Cliff de Carteret wrote:
> I have now deleted the copy on the remote wal_archive folder and the
> archiving is now functioning and sending the logs from the local to the
> remote folder. The remote database does not startup and the following is in
> the log:
>
> LOG: database system was shut down
On Tue, 22 Jan 2013, Joshua D. Drake wrote:
Rich, the main problem is using pg_dumpall. Unfortunately pg_dumpall has
not kept up with all the other advances Postgres has had in the last
decade. To set up dump based backups properly I suggest reviewing:
http://www.commandprompt.com/blogs/joshua_
On 01/22/2013 10:07 AM, Rich Shepard wrote:
On Tue, 22 Jan 2013, Rich Shepard wrote:
Is there a way I can extract a single table's schema and data from the
full backup? If so, I can then drop the fubar'd table and do it correctly
this time.
My solution: view the file in the pager I use (less), then copy relevant
lines to another file.
On Tue, 22 Jan 2013, Rich Shepard wrote:
Is there a way I can extract a single table's schema and data from the
full backup? If so, I can then drop the fubar'd table and do it correctly
this time.
My solution: view the file in the pager I use (less), then copy relevant
lines to another file.
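A non-interactive alternative to paging through the dump is to let sed pull out just the one table's COPY block (the terminating line of a COPY block in a plain-format dump is `\.`). The file and table names below are made up for illustration, not from the thread; the script builds a tiny stand-in for the real 386M file:

```shell
#!/bin/sh
# Build a tiny stand-in for the real pg_dumpall output:
cat > full_backup.sql <<'EOF'
COPY other_table (id) FROM stdin;
1
\.
COPY fubar_table (id, name) FROM stdin;
1	alpha
2	beta
\.
EOF

# Print everything from the table's COPY line through its terminating "\." line:
sed -n '/^COPY fubar_table /,/^\\\.$/p' full_backup.sql > fubar_table_data.sql
cat fubar_table_data.sql
```

Note this extracts only the data section; the table's CREATE TABLE statement has to be located separately (e.g. with grep).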
I'm rephrasing the question I posted yesterday, because I have used the
wrong terminology and caused confusion (and, as a result, got no
response to my question).
Almost all indexed columns of kind varchar in my database require
"varchar_pattern_ops" op class in order to make my LIKE queries more optimized.
I neglected to dump a single table before adding additional rows to it via
psql. Naturally, I messed up the table. I have a full pg_dumpall of all
three databases and all their tables in a single .sql file from 2 days ago.
The file is 386M in size and emacs is taking a very long time to move around in it.
On 01/22/2013 09:37 AM, Gavin Flower wrote:
On 23/01/13 06:30, Gavan Schneider wrote:
On 01/21/2013 07:40 PM, Steve Crawford wrote:
[...]
(While I suppose some politician somewhere could decide that
"fall-back" could cross date boundaries, I am unaware of any place
that has ever done something so pathological as to have the same date
On 23/01/13 06:30, Gavan Schneider wrote:
On 01/21/2013 07:40 PM, Gavan Schneider wrote:
[...]
(While I suppose some politician somewhere could decide that
"fall-back" could cross date boundaries, I am unaware of any place
that has ever done something so pathological as to have the same date
On 01/21/2013 07:40 PM, Gavan Schneider wrote:
...
The points raised by Adrain have prompted some more research on my
part and I am intrigued to learn that on one day of the year in many
countries (e.g., Brazil) where daylight conversion happens over
midnight, the local-time version of midnight does not exist.
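The Brazil case can be reproduced directly; the zone and date below are examples chosen by me (21 October 2012 was a DST-start day in São Paulo), not taken from the thread:

```sql
SET timezone = 'America/Sao_Paulo';
-- Local midnight does not exist on this date (clocks jump 00:00 -> 01:00),
-- so the cast typically resolves to a timestamp one hour later:
SELECT '2012-10-21'::date::timestamptz;
```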
On 22 January 2013 16:43, Kevin Grittner wrote:
> [Please keep the list copied, and put your reply in-line instead
> of at the top.]
>
> Cliff de Carteret wrote:
> > On 22 January 2013 16:07, Kevin Grittner wrote:
> >
> >> Cliff de Carteret wrote:
> >>
> >>> The current setup has been working successfully for several years
> >>> until the recent database crash
On 01/21/2013 08:56 PM, Gavan Schneider wrote:
On Monday, January 21, 2013 at 10:53, Steve Crawford wrote:
On 01/21/2013 02:48 PM, Gavan Schneider wrote:
Taking another tangent I would much prefer the default time to be
12:00:00 for the conversion of a date to timestamp(+/-timezone).
P
[Please keep the list copied, and put your reply in-line instead
of at the top.]
Cliff de Carteret wrote:
> On 22 January 2013 16:07, Kevin Grittner wrote:
>
>> Cliff de Carteret wrote:
>>
>>> The current setup has been working successfully for several years
>>> until the recent database crash
>
On 01/22/2013 01:40 AM, Srinath Reddy wrote:
Hi Team,
Please provide me step-by-step commands for installing pgcrypto in
PostgreSQL 9.2 on a Linux box.
It depends on how you installed Postgres.
Did you install from source?
Did you use a package?
What distribution?
Regards,
Srinath
--
Adrian Klaver
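Whatever the answers to Adrian's questions turn out to be, the database-side step is the same once pgcrypto's files are on disk (they ship with the "contrib" package on most distributions, e.g. postgresql-contrib-9.2 on Debian/Ubuntu; package names here are assumptions). A minimal sketch, run in the target database:

```sql
CREATE EXTENSION pgcrypto;
-- Quick check that the extension's functions are available:
SELECT digest('hello', 'sha256');
```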
The local wal_archive directory is a directory named "wal_archive" which is
at the same file system level as the data directory. The time stamp states
that it has not been used for several years so it looks to be redundant.
To get the database running again I moved an old postgres installation to
Hi Team,
Please provide me step-by-step commands for installing pgcrypto in
PostgreSQL 9.2 on a Linux box.
Regards,
Srinath
Dear Tom,
Thank you very much for your prompt reply...
I deleted this oid from pg_depend from both objid and refobjid
The problem disappeared
I also increased the following specs to avoid having this problem again:
- kernel.shmall
- kernel.shmmax
- shared_buffers
- max_fsm_pages
- max_fsm_relations
Hi and good afternoon.
I would like to inquire if you can help me search for third-party
software or any solution that can handle PCI-DSS requirements on PostgreSQL
database access and privileges. I am free to discuss options in case you
already have a known solution that we can implement.
T
On 01/22/2013 07:57 AM, Cliff de Carteret wrote:
There are no files in the pg_xlog directory from before the database
crash except one file which ends in .backup, there are 759 files. In the
pg_xlog/archive_status folder there are 755 files.
The local ../wal_archive folder has no files in it at all whereas the
remote wal_archive folder has 147 files.
Cliff de Carteret wrote:
> The current setup has been working successfully for several years
> until the recent database crash
What file does the server log say it is trying to archive? What
error are you getting? Does that filename already exist on the
archive (or some intermediate location used
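For context, the archiving behaviour Kevin is asking about is driven by two postgresql.conf settings; a common sketch (paths illustrative) that deliberately fails, rather than overwriting, when the segment already exists on the archive:

```
archive_mode = on
archive_command = 'test ! -f /mnt/server/wal_archive/%f && cp %p /mnt/server/wal_archive/%f'
```

A failing archive_command causes WAL segments to accumulate in pg_xlog, which matches the disk-filling symptom described in this thread.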
There are no files in the pg_xlog directory from before the database crash
except one file which ends in .backup, there are 759 files. In the
pg_xlog/archive_status folder there are 755 files.
The local ../wal_archive folder has no files in it at all whereas the
remote wal_archive folder has 147 files.
On 01/22/2013 06:59 AM, Cliff de Carteret wrote:
Hello, thanks for your response.
The xlogs that are filling up are in the original data directory and the
wal_keep_segments is commented out on my master and slave postgresql.conf
It is expected that there may be a certain increase in WAL files.
I
Hi Marc,
Thanks a lot. That works fine. The names 'NEW' and 'OLD' work fine.
Leif
- "Marc Schablewski" wrote:
> Hi Leif,
>
On 22.01.2013 14:34, Leif Jensen wrote:
>
>
> CREATE update_rule_func( old record, new record ) AS ... I am told,
> that I cannot use record for the parameter type.
Hello, thanks for your response.
The xlogs that are filling up are in the original data directory and the
wal_keep_segments is commented out on my master and slave postgresql.conf
Cliff
On 22 January 2013 14:48, Adrian Klaver wrote:
> On 01/22/2013 03:12 AM, Cliff de Carteret wrote:
>
>> My database crashed a couple of days ago during an upgrade several
>> seconds after committing a large transaction to the database.
On 01/22/2013 03:12 AM, Cliff de Carteret wrote:
My database crashed a couple of days ago during an upgrade several
seconds after committing a large transaction to the database. Eventually
we found out that this was due to the disk being full as the transaction
had created several gigs of data. A day or so later the disk is full again.
Hey Marc,
2013/1/22 Marc Schablewski
>
> On 22.01.2013 14:59, Dmitriy Igrishin wrote:
>
> Hey all,
>
> Is there a way to turn off printing of the CONTEXT field of the error report
>
> I think this might help:
> http://www.depesz.com/2008/07/12/suppressing-context-lines-in-psql/
>
> Marc
>
> I
On Tue, Jan 22, 2013 at 8:07 AM, Merlin Moncure wrote:
> On Sat, Jan 5, 2013 at 12:20 PM, ERR ORR wrote:
>>
>> @Moderators: I am reposting this because the original from 22 December
>> apparently didn't arrive on the list.
>>
> >> I was trying to make Postgresql use a trigram gist index on a varchar
> >> field, but to no avail.
On Sat, Jan 5, 2013 at 12:20 PM, ERR ORR wrote:
>
> @Moderators: I am reposting this because the original from 22 December
> apparently didn't arrive on the list.
>
> I was trying to make Postgresql use a trigram gist index on a varchar field,
> but to no avail.
>
> Specifically, I was trying to r
On 22.01.2013 14:59, Dmitriy Igrishin wrote:
> Hey all,
>
> Is there a way to turn off printing of the CONTEXT field of the error report
>
I think this might help:
http://www.depesz.com/2008/07/12/suppressing-context-lines-in-psql/
Marc
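In short, the approach in the linked post is a client-side psql setting; it hides the DETAIL, HINT, and CONTEXT lines in psql's output only, while the server still logs them in full:

```sql
\set VERBOSITY terse
```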
Hey all,
Is there a way to turn off printing of the CONTEXT field of the error report
in the following case:
create or replace function foo() returns void language plpgsql as $$
begin
raise notice 'notice from foo()';
end;
$$;
create or replace function bar() returns void language plpgsql as $$
begin
Hi Leif,
On 22.01.2013 14:34, Leif Jensen wrote:
> CREATE update_rule_func( old record, new record ) AS ... I am told, that I
> cannot use record for the parameter type. Could you please
You should use your view instead of 'record' as the parameter type, i.e. CREATE
update_rule_func( old V_YOUR_VIEW, new V_YOUR_VIEW )
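A fuller sketch of Marc's suggestion (the view name v_your_view and the column are placeholders, not from the thread): declaring the parameters with the view's row type makes the fields resolvable, where a bare "record" parameter is rejected.

```sql
CREATE OR REPLACE FUNCTION update_rule_func(old v_your_view, new v_your_view)
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
  -- fields of the row-typed parameters can be referenced directly:
  RAISE NOTICE 'id changed from % to %', old.id, new.id;
END;
$$;
```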
Hi Jasen.
Thank you for your response (also thank you to Tom).
I have now tried your suggestion, but I'm not sure how you have implemented
the plpgsql function. When I create the function: CREATE update_rule_func( old
record, new record ) AS ... I am told that I cannot use record for the
parameter type.
My database crashed a couple of days ago during an upgrade several seconds
after committing a large transaction to the database. Eventually we found
out that this was due to the disk being full as the transaction had created
several gigs of data. A day or so later the disk is full again and
PostgreSQL
On 5 January 2013 20:20, ERR ORR wrote:
>
>
>
> Queries which use "WHERE "TST_PAYLOAD" LIKE 'SEAT%'" go to the btree
> index as they should.
> Queries which use "WHERE "TST_PAYLOAD" LIKE '%EAT%'" *should* use the
> GIST index but do a full table scan instead.
> (I am looking for names like 'SEATTLE
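A minimal sketch of the pg_trgm setup this thread is about; the column name comes from the quoted messages, but the table name "tst" is my assumption:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
-- GiST trigram index on the quoted column:
CREATE INDEX tst_payload_trgm_idx
    ON tst USING gist ("TST_PAYLOAD" gist_trgm_ops);
-- Unlike a btree, this index can also serve non-left-anchored patterns:
SELECT * FROM tst WHERE "TST_PAYLOAD" LIKE '%EAT%';
```

Whether the planner actually picks the index still depends on row estimates; on a small table a sequential scan may legitimately win, which is one plausible explanation for the full table scan reported above.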