Can users forcibly assign a table / database / cluster's storage purely to RAM?
Or indirectly, by creating a RAM-disk device and assigning that
device as a PostgreSQL cluster?
I think this feature would enable a lot of high-performance usage.
Any suggestions?
jihuang
Gaetano Mendola wrote:
Alb
"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> Why are you running a vacuum every 45 seconds? Increase your fsm_pages and
> run it every hour.
If I understood his description correctly, he's turning over 10% of a
500-row table every minute. So waiting an hour would mean 3000 dead
rows in a 500-row table.
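The arithmetic behind that estimate can be sketched directly (the figures are the ones given in the thread, not measured values):

```python
# Figures from the thread: a 500-row table with 10% turnover per minute,
# vacuumed either every 45 seconds or only once an hour.
table_rows = 500
dead_rows_per_minute = int(table_rows * 0.10)  # 50 dead rows per minute

# Dead rows accumulated if vacuum only runs once an hour:
dead_rows_hourly = dead_rows_per_minute * 60
print(dead_rows_hourly)  # 3000 -- six times the table's live row count
```

Which is why the vacuum frequency (and fsm_pages sizing) matters so much for a small, rapidly-churning table.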
Hello,
Perhaps you could provide some more detailed information?
Example of queries?
Type of hardware?
Operating system?
Why are you running a vacuum every 45 seconds? Increase your fsm_pages and
run it every hour.
Are you sure the vacuums are trampling each other and thus getting more
than one
vac
Andreas Pflug <[EMAIL PROTECTED]> writes:
> For administrator's convenience, I'd like to see a function that returns
> the serverlog.
What do you mean by "returns the serverlog"? Are you going to magically
recover data that has gone to stderr or the syslogd daemon? If so, how?
And why wouldn't y
For administrator's convenience, I'd like to see a function that returns
the serverlog.
Are there any security or other issues that should prevent me from
implementing this?
Regards,
Andreas
Fabrizio Mazzoni asked:
>>> How can I find out if a prepared statement already exists? Is
>>> there a function or a query I can execute?
Greg replied:
>> I have not seen an answer to this, and I am curious as well. Anyone?
Alvaro Herrera
Thomas Hallgren wrote:
Some very good suggestions were made here. What happens next? Will this end
up in a TODO list where someone can "claim the task"? (I'm trying to learn
how the process works.)
If someone doesn't jump right on it and make a "diff -c" proposal, it
probably belongs on the TODO
Hi
I am using Postgres 7.3.4 on Linux Red Hat 7.3 on an
i686 machine.
My app has one parent table and five child tables. I
mean the parent table has a primary key and child
tables have foreign key relationship with parent.
My App is doing 500 inserts initially in each table.
After all this done, w
DBMSs like MySQL and HSQLDB (the only two I know of that can keep and
process tables on the heap) have a CREATE DATABASE variant in which the
'database' is specified to reside in memory, that is, RAM. For some
data-handling cases in which persistence is not important, this is
all you need.
These types o
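The memory-resident-database idea can be sketched with SQLite, which also supports it; this is an analogy from another system, not PostgreSQL syntax:

```python
import sqlite3

# ":memory:" creates a database that lives entirely in RAM; everything
# is lost when the connection closes -- acceptable exactly when
# persistence is not important, as described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE session_cache (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO session_cache VALUES ('user:1', 'alice')")
row = conn.execute(
    "SELECT value FROM session_cache WHERE key = 'user:1'"
).fetchone()
print(row[0])  # alice
conn.close()
```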
The link you have down there is not the one on the site. All of the
links to that file work just fine for me on the live site.
Jan Wieck wrote:
On 6/4/2004 4:47 AM, Karel Zak wrote:
On Fri, Jun 04, 2004 at 01:01:19AM -0400, Jan Wieck wrote:
Yes, Slonik's,
it's true. After nearly a year the Slony
Shachar Shemesh <[EMAIL PROTECTED]> writes:
> One problem detected during that stage, however, was that the program
> pretty much relies on the collation being case insensitive. I am now
> trying to gather the info regarding adding case preserving to
> Postgresql. I already suggested that we do
Albretch wrote:
After RTFM and googling for this piece of info, I think PostgreSQL
has no such feature.
Why not?
. Isn't RAM cheap enough nowadays? RAM is indeed so cheap that you
could design diskless combinations of OS + firewall + web servers
entirely running off RAM. Anything needing per
On Sun, 2004-06-06 at 10:32, Jan Wieck wrote:
> You are right. The "local" slon node checks every "-s" milliseconds
> (commandline switch) if the sequence sl_action_seq has changed, and if
> so generate a SYNC event. Bumping a sequence alone does not cause this,
> only operations that invoke the
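The polling behavior described above can be modeled in-process; a simplified sketch (the real slon daemon polls the database, and the function name here is hypothetical):

```python
def sync_events(action_seq_samples):
    """Given successive samples of sl_action_seq, one per "-s" interval,
    count how many SYNC events would be generated: one for each interval
    in which the sequence value was observed to have changed."""
    syncs = 0
    last = action_seq_samples[0]
    for sample in action_seq_samples[1:]:
        if sample != last:
            syncs += 1
            last = sample
    return syncs

# Sequence bumped during two of the four polling intervals -> two SYNCs.
print(sync_events([10, 10, 12, 12, 15]))  # 2
```

Note that, as the quoted text says, bumping the sequence is only the trigger condition the poller sees; an unchanged sequence produces no SYNC no matter how often it is polled.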
Hi list,
A postgresql migration I am doing (the same one for which the OLE DB
driver was written) has finally passed the proof-of-concept stage
(phew). I now have lots and lots of tidbits, tricks and tips for SQL
Server migration, which I would love to put online. Is pgFoundry the
right place?
On 6/6/2004 5:21 AM, Jeff Davis wrote:
I have two nodes, node 1 and node 2.
Both are working with node 1 as the master, and data from subscribed
tables is being properly replicated to node 2.
However, it looks like there's a possible bug with sequences. First let
me explain that I don't entirely
> Sailesh Krishnamurthy <[EMAIL PROTECTED]> writes:
>> This is probably a crazy idea, but is it possible to organize the data
>> in a page of a hash bucket as a binary tree ?
>
> Only if you want to require a hash opclass to supply ordering operators,
> which sort of defeats the purpose I think. H
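The objection can be illustrated with a toy bucket: equality alone supports only a linear scan, while any tree/binary-search layout needs a total ordering on the keys. A sketch (not the actual hash AM code):

```python
import bisect

# A hash bucket page normally needs only an equality operator:
# scan the entries and compare.
def bucket_lookup_eq(entries, key):
    return any(e == key for e in entries)

# Organizing the bucket as a sorted structure (binary tree / binary
# search) additionally requires ordering operators on the key type --
# which is the objection raised above.
def bucket_lookup_ordered(sorted_entries, key):
    i = bisect.bisect_left(sorted_entries, key)
    return i < len(sorted_entries) and sorted_entries[i] == key

bucket = [3, 9, 27, 81]
print(bucket_lookup_eq(bucket, 27), bucket_lookup_ordered(bucket, 27))
```

For a hash opclass, demanding `<`/`>` just to speed up intra-bucket search does rather defeat the point of hashing.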
Tom Lane wrote:
> However, it seems that the real problem here is that we are so far off
> base about how many files we can open. I wonder whether we should stop
> relying on sysconf() and instead try to make some direct probe of the
> number of files we can open. I'm imagining repeatedly open()
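A direct probe of the sort described can be sketched as follows: keep opening descriptors until the OS refuses, then count. This is a minimal illustration of the idea, not PostgreSQL's actual implementation:

```python
import os

def probe_max_open_files(ceiling=4096):
    """Empirically count how many file descriptors this process can open,
    instead of trusting sysconf() -- a sketch of the "repeatedly open()"
    probe described above."""
    fds = []
    try:
        for _ in range(ceiling):
            fds.append(os.open(os.devnull, os.O_RDONLY))
    except OSError:
        pass  # EMFILE/ENFILE: ran out of descriptors
    count = len(fds)
    for fd in fds:
        os.close(fd)
    return count

print(probe_max_open_files())
```

The probe reports what the kernel actually grants this process right now, which can be far below what sysconf() advertises.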
I have two nodes, node 1 and node 2.
Both are working with node 1 as the master, and data from subscribed
tables is being properly replicated to node 2.
However, it looks like there's a possible bug with sequences. First let
me explain that I don't entirely understand how a replicated sequence
With the 7.5 feature freeze coming nearer, administrative interface
developers would probably like a list of the new features they should
support, i.e. which DDL features were added. Here's a list of the relevant
changes as far as I could extract them; please complete it.
- $ Quoting
- TABLESPACE
- ALTER TABLE
Found the problem. If I have a very long environment variable exported
and I start PG, PG crashes when I try to load PG/Tcl. In my case I use
color ls and I have a very long LS_COLORS environment variable set.
I have duplicated the problem by renaming my .bashrc and logging back
in. With thi