* greigw...@comcast.net (greigw...@comcast.net) wrote:
> 2) Setup a new account in AD and used ktpass to create a keytab file for the
> SPN.
Did you make sure to use the right service name when creating the
keytab? Can you do a klist -k on the keytab file and send the output?
Does hostname --fq
Robert Gravsjö wrote:
> >> I am for #1, not so much for #2, mainly on the grounds of size. But
> >> given #1 it would be possible for packagers to make their own choices
> >> about whether to include plain-text docs.
> >
> > Wouldn't it suffice to make it downloadable, like the pdf doc?
>
> And/
Michael Leib writes:
> I'm using v8.4.4 and have an application written using libpq
> in Asynchronous Command Mode and primarily dealing with the
> COPY related apis. I have been successful in getting my application
> working, but have come across an issue that I would like to determine
> if I hav
Hi -
I'm using v8.4.4 and have an application written using libpq
in Asynchronous Command Mode and primarily dealing with the
COPY related apis. I have been successful in getting my application
working, but have come across an issue that I would like to determine
if I have done something wrong (pr
Yea this is a valid point. It's very possible my design won't work
for the long term, and at some point I'll have to store the email name
exactly as it was entered, and allow the lookup logic to be case
insensitive with a lowercase index. However, I think the way I have
it now should not break an
I'm trying to get my PostgreSQL server on Linux configured so that I can
connect from a Windows client using GSS Authentication against Active
Directory. I found some helpful references on how to do this, but I'm still
coming up short. To summarize what I've done so far by way of configuration:
On Fri, Jun 11, 2010 at 08:43:53AM +0200, Adrian von Bidder wrote:
>
> Just speculation, I've not tried this. Perhaps pipe the output of pg_dump
> through software that bandwidth-limits the throughput?
Perhaps. However, moving the pg_dump to a Slony slave has solved my problem.
Thanks!!
Al
I was intrigued to see Chris Bohn's page about PgMQ ("Embedding
messaging in PostgreSQL") on the PGCon website at
http://www.pgcon.org/2010/schedule/events/251.en.html
I have also had a look at the pgfoundry site at
http://pgfoundry.org/projects/pgmq/ -- it's empty.
I've tried to email Chris to
Hi,
Use one of the existing replication systems:
http://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling#Replication
p.s. I would highlight Slony, Londiste and Bucardo.
On 11 June 2010 14:11, Ulas Albayrak wrote:
> Hi,
>
>
>
> I’m in the process of moving our production
On Fri, Jun 11, 2010 at 2:18 PM, Tom Lane wrote:
> Scott Marlowe writes:
>> But that runs a shell command, how's that supposed to get the
>> search_path? I've been trying to think up a solution to that and
>> can't come up with one.
>
> Yeah, and you do *not* want the prompt mechanism trying to
Scott Marlowe writes:
> But that runs a shell command, how's that supposed to get the
> search_path? I've been trying to think up a solution to that and
> can't come up with one.
Yeah, and you do *not* want the prompt mechanism trying to send SQL
commands...
regards, tom
Leif Biberg Kristensen wrote 2010-06-10 17.33:
On Thursday 10. June 2010 17.24.00 Tom Lane wrote:
Alvaro Herrera writes:
Excerpts from Peter Eisentraut's message of Thu Jun 10 02:50:14 -0400 2010:
As I said back then, doing this is straightforward, but we kind of need
more than one user wh
On Thu, Jun 10, 2010 at 04:00:54PM -0400, Greg Smith wrote:
>> 5. Does anybody know if I can set dirty_background_ratio to 0.5? As we
>> have 12 GB RAM and rather slow disks, 0.5% would result in a maximum of
>> 61MB dirty pages.
>
> Nope. Linux has absolutely terrible controls for this critic
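The ratio knobs are integer percentages, so 0.5% is indeed out of reach that way. Newer kernels (2.6.29 and later) add byte-granularity equivalents; an illustrative sysctl fragment (the target size is an assumption based on the 12 GB figure above):

```
# /etc/sysctl.conf -- byte-based alternative to vm.dirty_background_ratio
# 12 GB * 0.5% is roughly 61 MB
vm.dirty_background_bytes = 64000000
```

Setting the `_bytes` variant overrides the corresponding `_ratio` setting.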
On 06/11/2010 11:10 AM, Scott Marlowe wrote:
On Fri, Jun 11, 2010 at 11:29 AM, Adrian Klaver wrote:
On 06/11/2010 10:23 AM, Joshua Tolley wrote:
On Wed, Jun 09, 2010 at 05:52:49PM +0900, Schwaighofer Clemens wrote:
Hi,
I am trying to figure out how I can show the current search_path, or
be
On Fri, Jun 11, 2010 at 11:29 AM, Adrian Klaver wrote:
> On 06/11/2010 10:23 AM, Joshua Tolley wrote:
>>
>> On Wed, Jun 09, 2010 at 05:52:49PM +0900, Schwaighofer Clemens wrote:
>>>
>>> Hi,
>>>
>>> I am trying to figure out how I can show the current search_path, or
>>> better the first search_pat
On Wed, Jun 9, 2010 at 2:52 AM, Schwaighofer Clemens
wrote:
> Hi,
>
> I am trying to figure out how I can show the current search_path, or
> better the first search_path entry (the active schema) in the PROMPT
> variable for psql.
>
> Is there any way to do that? I couldn't find anything useful ..
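While the prompt mechanism can't easily get at it, the current search_path and the active schema can always be inspected manually from within psql; a minimal sketch:

```sql
-- Show the full search_path setting
SHOW search_path;

-- Show the first existing schema on the search_path (the "active" schema)
SELECT current_schema();

-- List all schemas on the path that actually exist
SELECT current_schemas(false);
```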
Thanks everyone,
I will wait for Postgres 9.0 to implement this feature then.
Thanks
Deepak
On Fri, Jun 11, 2010 at 10:30 AM, Joshua Tolley wrote:
> On Thu, Jun 10, 2010 at 06:01:24PM -0700, DM wrote:
> >How to force postgres users to follow password standards and renewal
> >pol
On Thu, Jun 10, 2010 at 06:01:24PM -0700, DM wrote:
>How to force postgres users to follow password standards and renewal
>policies?
>Thanks
>Deepak
9.0 will ship with a contrib module called "passwordcheck" which will enforce
some of these things, FWIW.
--
Joshua Tolley / eggykna
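For reference, enabling that contrib module is a one-line preload setting once it is installed (a config sketch, not a full setup guide):

```
# postgresql.conf -- load the passwordcheck contrib module at server start
shared_preload_libraries = 'passwordcheck'
```

It checks passwords at CREATE/ALTER ROLE time, so it cannot enforce renewal policies by itself.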
On 06/11/2010 10:23 AM, Joshua Tolley wrote:
On Wed, Jun 09, 2010 at 05:52:49PM +0900, Schwaighofer Clemens wrote:
Hi,
I am trying to figure out how I can show the current search_path, or
better the first search_path entry (the active schema) in the PROMPT
variable for psql.
Is there any way t
On Wed, Jun 09, 2010 at 05:52:49PM +0900, Schwaighofer Clemens wrote:
> Hi,
>
> I am trying to figure out how I can show the current search_path, or
> better the first search_path entry (the active schema) in the PROMPT
> variable for psql.
>
> Is there any way to do that? I couldn't find anythin
Janning wrote:
most docs I found relate to 8.2 and 8.3. Regarding checkpoints, is 8.4
comparable to 8.3? It would be nice if you updated your article to reflect 8.4.
There haven't been any changes made in this area since 8.3, that's why
there's been no update. 8.4 and 9.0 have exactly the s
On 06/10/2010 11:43 PM, Adrian von Bidder wrote:
Just speculation, I've not tried this. Perhaps pipe the output of pg_dump
through software that bandwidth-limits the throughput? (I don't know if
such a command exists,
pv (pipe viewer)
Allows you to monitor rate of transfers through a pip
On Thu, 10 Jun 2010 13:50:23 -0700, Mike Christensen wrote:
> I have a column called "email" that users login with, thus I need to
> be able to lookup email very quickly. The problem is, emails are
> case-insensitive. I want f...@bar.com to be able to login with
> f...@bar.com as well. There's t
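A common approach for this is an expression index on lower(email), with lookups normalized the same way (table and index names below are illustrative; the citext contrib type is another option):

```sql
-- Case-insensitive uniqueness and fast lookups via an expression index
CREATE UNIQUE INDEX users_email_lower_idx ON users (lower(email));

-- The query must use the same expression for the index to be usable
SELECT * FROM users WHERE lower(email) = lower('SomeOne@example.com');
```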
Hi,
I'm in the process of moving our production database to a different physical
server, running a different OS and a newer release of postgreSQL. My problem is
that I'm not really sure how to go about it.
My initial idea was to use WAL archiving to reproduce the db on the new server
and then
> My best idea so far is to do a pg_dump and somehow archive all the DML
> in the original db from that point in time for later insertion in the
> new db, but I don't know how that would be done practically. And I
> don't even know if that's the be
> Well, the situation is still ambiguous,
> so:
> Is it possible to provide the table and index definitions?
> And it
> would be great if you could describe the queries you are going to run
> on this table,
> or just provide the SQL.
Sure!
Basically what I'm trying to do is to partition the index in
> Are there any plans to allow PL/pgSQL functions to be nested like Oracle
> allows with PL/SQL procedures?
>
> If not, what are the best ways to convert PL/SQL nested procedures to
> PL/pgSQL?
>
>
Postgres Plus Advanced Server (which is a proprietary derivative of
PostgreSQL) has Oracle compatibi
On 11 June 2010 16:29, Leonardo F wrote:
>
>> Could you please explain the reason to do so many
>> partitions?
>
>
> Because otherwise there would be tons of rows in each
> partition, and randomly "updating" the index for that many
> rows 2000 times per second isn't doable (the indexes
> get so bi
> Could you please explain the reason to do so many
> partitions?
Because otherwise there would be tons of rows in each
partition, and randomly "updating" the index for that many
rows 2000 times per second isn't doable (the indexes
get so big that it would be like writing a multi-GB file
random
On 11 June 2010 13:00, Leonardo F wrote:
> a) create 480 partitions, 1 for each hour of the day. 2 indexes on each
> partition
> b) create 20 partitions, and create 24*2 partial indexes on the current
> partition; then the next day (overnight) create 2 global indexes for the
> table and drop the 2
On Thu, Jun 10, 2010 at 6:59 PM, J. Greg Davidson wrote:
> Hi fellow PostgreSQL hackers!
>
> I tried to write an SQL glue function to turn an array
> of alternating key/value pairs into an array of arrays
> and got the message
>
> ERROR: 42704: could not find array type for data type text[]
I do
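The error stems from the fact that PostgreSQL has no distinct array-of-array type: text[] is itself multidimensional, so there is no element type text[] to build a further array over. A sketch of the multidimensional form that does work:

```sql
-- A two-dimensional text array; there is no separate text[][] element type,
-- the result is simply text[] with two dimensions
SELECT ARRAY[ARRAY['key1','val1'], ARRAY['key2','val2']];
-- {{key1,val1},{key2,val2}}
```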
On 11/06/2010 11:24, Ulas Albayrak wrote:
> My initial idea was to use WAL archiving to reproduce the db on the
> new server and then get it up to date with the logs from the time of
> base backup creation to the time the new server can get up. That was
> until I found out WAL archiving doesn’t wo
Hi,
I’m in the process of moving our production database to a different
physical server, running a different OS and a newer release of
postgreSQL. My problem is that I’m not really sure how to go about it.
My initial idea was to use WAL archiving to reproduce the db on the
new server and then get
Hi all,
I have a very big table (2000 inserts per sec, I have to store 20 days of data).
The table has 2 indexes, in columns that have almost-random values.
Since keeping those two indexes up-to-date can't be done (updating 2000
times per second 2 indexes with random values on such a huge table
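On 8.4, partitioning like this is done with inheritance plus CHECK constraints so that constraint_exclusion can prune at query time. A minimal sketch for one hourly partition (the schema and names are assumptions, not from the thread):

```sql
-- Parent table (illustrative schema)
CREATE TABLE events (ts timestamptz NOT NULL, a int, b int);

-- One hourly child partition; the CHECK constraint lets the planner skip it
CREATE TABLE events_2010_06_11_13 (
    CHECK (ts >= '2010-06-11 13:00' AND ts < '2010-06-11 14:00')
) INHERITS (events);

-- Local indexes only on the (small) active partition keep random
-- index updates cheap
CREATE INDEX events_2010_06_11_13_a_idx ON events_2010_06_11_13 (a);
CREATE INDEX events_2010_06_11_13_b_idx ON events_2010_06_11_13 (b);
```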
On Thursday 10 June 2010 22:00:54 Greg Smith wrote:
> Janning wrote:
> > 1. With raising checkpoint_timeout, is there any downgrade other than
> > slower after-crash recovery?
>
> Checkpoint spikes happen when too much I/O has been saved up for
> checkpoint time than the server can handle. While t
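The knobs under discussion live in postgresql.conf; an illustrative (not prescriptive) sketch of spreading checkpoint I/O over a longer window:

```
# postgresql.conf -- trade longer crash recovery for smoother checkpoints
checkpoint_timeout = 15min            # default 5min
checkpoint_segments = 32              # 8.4-era sizing knob
checkpoint_completion_target = 0.9    # spread writes across more of the interval
```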
Yup, I actually ended up doing this with this constraint:
ALTER TABLE Users ADD CONSTRAINT check_email CHECK (email ~ E'^[^A-Z]+$');
However, I like your version better so I'll use that instead :)
Mike
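A regex-free way to express the same lowercase-only rule (a sketch, not necessarily the version referenced above) is to compare the value against its own lower() form, which also handles non-ASCII letters:

```sql
-- Reject any email value that is not already entirely lower-case
ALTER TABLE Users ADD CONSTRAINT check_email CHECK (email = lower(email));
```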
On Thu, Jun 10, 2010 at 11:48 PM, Adrian von Bidder
wrote:
> Heyho!
>
> On Thursday 10 June