Re: [HACKERS] [pgsql-advocacy] Not 7.5, but 8.0 ?

2004-06-06 Thread David Garamond
Tom Lane wrote:
Granted, the script itself is faulty, but some other OS projects
(like Ruby, with the same x.y.z numbering) do guarantee they will never
have double digits in a version number component.
Oh?  What's their plan for the release after 9.9.9?
As for Ruby, it probably won't reach 9.9.9 in any foreseeable future.
It took about ten years to get to 1.8.1. Same with Python. But Perl will
have 5.10.0.

--
dave


Re: [HACKERS] Mac OS X, PostgreSQL, PL/Tcl

2004-06-06 Thread Scott Goodwin
Found the problem. If I have a very long environment variable exported
and I start PG, PG crashes when I try to load PL/Tcl. In my case I use
colorized ls and have a very long LS_COLORS environment variable set.

I have duplicated the problem by renaming my .bashrc and logging back  
in. With this clean environment, I started PG and loaded PG/Tcl without  
any problems. I then created the following environment variable on the  
command line:

LONG_VAR=aa:bbb:cc: 
ddd:eee:fff: 
g::iii: 
j:kk:: 
mmm:n: 
ooo:pp:qqq: 
rrr:ss: 
ttt:u: 
vv:ww: 
xxx:y: 
zzz

and exported it. (Obviously the line above is going to be broken into  
multiple lines by the mailer...).

Then I stopped and restarted PG, loaded PG/Tcl and PG crashed. You  
*must* stop and restart PG for the problem to exhibit itself, otherwise  
it won't pick up the change in the environment. I suspect I'm running  
into a buffer overflow situation.

Ok, it fails consistently when LONG_VAR is 523 characters or greater;  
works consistently when LONG_VAR is 522 characters or smaller. Might  
not fail at the same number for others.

/s.

On Feb 21, 2004, at 1:51 AM, Tom Lane wrote:

Scott Goodwin [EMAIL PROTECTED] writes:
Hoping someone can help me figure out why I can't get PL/Tcl to load
without crashing the backend on Mac OS 10.3.2.
FWIW, pltcl seems to work for me.  Using up-to-date Darwin 10.3.2
and PG CVS tip, I did
configure --with-tcl --without-tk
then make, make install, etc.  pltcl installs and passes its regression
test.
psql:/Users/scott/pgtest/add_languages.sql:12: server closed the
connection unexpectedly
 This probably means the server terminated abnormally
 before or while processing the request.
Can you provide a stack trace for this?
regards, tom lane



[HACKERS] pgsql 7.5 frontend changes wrapup

2004-06-06 Thread Andreas Pflug
With the 7.5 feature freeze coming nearer, administrative interface
developers would probably like a list of the new features they should
support, i.e. which DDL features were added. Here's a list of the relevant
changes as far as I have extracted them; please complete it. A rough
syntax sketch follows the list.

- $ Quoting
- TABLESPACE
- ALTER TABLE ALTER COLUMN TYPE
- COMMENT ON CAST/CONVERSION/LANGUAGE
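
To illustrate, a rough sketch of the new syntax (object names are made up,
and the exact syntax may still change before the release):

    -- dollar quoting: function bodies no longer need doubled single quotes
    CREATE FUNCTION hello() RETURNS text AS $$
        SELECT 'hello, world'::text
    $$ LANGUAGE sql;

    -- tablespaces
    CREATE TABLESPACE fastspace LOCATION '/mnt/fast_disk/pgdata';
    CREATE TABLE big_table (id integer) TABLESPACE fastspace;

    -- changing a column's type in place
    ALTER TABLE big_table ALTER COLUMN id TYPE bigint;

    -- comments on additional object types
    COMMENT ON LANGUAGE plpgsql IS 'procedural language used for triggers';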
Regards,
Andreas


Re: [HACKERS] Too-many-files errors on OS X

2004-06-06 Thread Kevin Brown
Tom Lane wrote:
 However, it seems that the real problem here is that we are so far off
 base about how many files we can open.  I wonder whether we should stop
 relying on sysconf() and instead try to make some direct probe of the
 number of files we can open.  I'm imagining repeatedly open() until
 failure at some point during postmaster startup, and then save that
 result as the number-of-openable-files limit.

I strongly favor this method.  In particular, the probe should probably
be done after all shared libraries have been loaded and initialized.

I originally thought that each shared library loaded would eat a file
descriptor (since I assumed loading would be implemented via mmap()), but
that doesn't seem to be the case, at least under Linux: for those who are
curious, you can close the underlying file after you perform the mmap()
and the mapped region still works.  If it is true under some OS, then it
would certainly be prudent to measure the available file descriptors after
the shared libs have been loaded.  Another reason is that the init function
of a library might itself open a file and keep it open, though this isn't
likely to happen very often.

 I also notice that OS X 10.3 seems to have working SysV semaphore
 support.  I am tempted to change template/darwin to use SysV where
 available, instead of Posix semaphores.  I wonder whether inheriting
 100-or-so open file descriptors every time we launch a backend isn't
 in itself a nasty performance hit, quite aside from its effect on how
 many normal files we can open.

I imagine this could easily be tested.  I rather doubt that the
performance hit would be terribly large, but we certainly shouldn't rule
it out without testing it first.


-- 
Kevin Brown   [EMAIL PROTECTED]



Re: [HACKERS] Why hash indexes suck

2004-06-06 Thread pgsql
 Sailesh Krishnamurthy [EMAIL PROTECTED] writes:
 This is probably a crazy idea, but is it possible to organize the data
 in a page of a hash bucket as a binary tree ?

 Only if you want to require a hash opclass to supply ordering operators,
 which sort of defeats the purpose I think.  Hash is only supposed to
 need equality not ordering.


A btree is frequently used within the buckets of a hash table, especially
if you expect to have a large number of items in each bucket.

If PostgreSQL could create a hash index that is a single top-level hash
table with each hash bucket being a btree, you could eliminate a number of
btree comparisons by hashing first, and then fall back to btree performance
after the initial hash lookup. The administrator should be able to gather
statistics about the population of the hash buckets and rehash if
performance begins to degrade to btree behavior or the data is not
distributed evenly. Given a proper selection of the initial number of
buckets, such a hash index could blow a btree out of the water. Given a
poor selection of the number of buckets, e.g. 1, it would behave no worse
than a btree.

Also, it would be helpful to be able to specify a hash function during the
create or rehash: for a specific class of data, extraction of the varying
elements can be more efficient and/or effective given knowledge of the
data. Think of something like bar codes, where some portions of the data
are usually the same and other portions are usually different. Focusing on
the portions that tend to differ will generally produce a more evenly
distributed hash.
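
As a rough SQL-level sketch of that last idea (table and offsets are
hypothetical; it only approximates custom hash-function selection by
hashing an expression over the varying part of the key):

    -- suppose the first 8 characters of a barcode are a nearly constant prefix
    CREATE TABLE items (barcode text PRIMARY KEY, description text);

    -- hash only the portion of the key that actually varies
    CREATE INDEX items_barcode_hash ON items USING hash (substr(barcode, 9));

    -- equality lookups on the same expression can then use the hash index
    SELECT description FROM items WHERE substr(barcode, 9) = '012345';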











Re: [HACKERS] Slony-I goes BETA (possible bug)

2004-06-06 Thread Jan Wieck
On 6/6/2004 5:21 AM, Jeff Davis wrote:
I have two nodes, node 1 and node 2. 

Both are working with node 1 as the master, and data from subscribed
tables is being properly replicated to node 2.
However, it looks like there's a possible bug with sequences. First let
me explain that I don't entirely understand how a replicated sequence is
expected to behave, but as far as this report is concerned, I assume
that if you do a nextval() on node 1, then SELECT last_value FROM
test_seq on node 2 will return the updated value.
It looks like the sequence value is not updated on node 2 until some
other event happens, like doing an UPDATE on a replicated table on
node 1.
You are right. The local slon node checks every -s milliseconds
(command-line switch) whether the sequence sl_action_seq has changed, and
if so generates a SYNC event. Bumping a sequence alone does not cause
this; only operations that invoke the log trigger on replicated tables do.

Speaking of this, this also means that there is a gap between the last
sl_action_seq-bumping operation and the commit of that transaction. If the
local slon generates the SYNC right in that gap, the changes done in that
transaction will not be replicated until the next transaction triggers
another SYNC.

I am not sure how to effectively avoid this problem without blindly
creating SYNC events at some, perhaps less frequent, interval. Suggestions?

Jan
I already have a table t2 which is properly replicating.
So, here's what I give to slonik to add the sequence to set 1:
slonik <<_EOF_
cluster name = $CLUSTER;
node 1 admin conninfo = 'dbname=$DBNAME1 host=$HOST1 user=$SLONY_USER';
node 2 admin conninfo = 'dbname=$DBNAME2 host=$HOST2 user=$SLONY_USER';
create set (id=34, origin=1, comment='set 34');
set add sequence (set id = 34, origin = 1, id = 35, fully qualified
name = 'public.test_seq', comment = 'sequence test');
subscribe set (id=34,provider=1,receiver=2,forward=no);
merge set (id=1,add id = 34, origin=1);
subscribe set (id=1,provider=1,receiver=2,forward=no);
_EOF_
Note: results of the query are put after the -- following the query
for easier readability.
node1=> SELECT last_value FROM test_seq; -- 1
node2=> SELECT last_value FROM test_seq; -- 1
node1=> SELECT nextval('test_seq'); -- 1
node1=> SELECT nextval('test_seq'); -- 2
node1=> SELECT nextval('test_seq'); -- 3
node1=> SELECT last_value FROM test_seq; -- 3
node2=> SELECT last_value FROM test_seq; -- 1
node2=> -- wait for a long time, still doesn't update
node2=> SELECT last_value FROM test_seq; -- 1
node1=> INSERT INTO t2(a) VALUES('string');
node2=> SELECT last_value FROM test_seq; -- 3
node2=> -- now it's updated!
So, that looks like a possible bug where a nextval() call doesn't
trigger the replication. But it does appear to replicate after an
unrelated event triggers the replication (in this case an update to t2,
an unrelated table). 

If not, what is the expected behavior of replicated sequences anyway? It
seems you can't call nextval() from a slave node, and because of that
you also can't make use of currval(). It looks like the slaves can
really only run SELECT last_value FROM test_seq. So is there a
particular use case someone had in mind when implementing the SET ADD
SEQUENCE for slonik? 

Regards,
Jeff Davis

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #


[HACKERS] Case preserving - suggestions

2004-06-06 Thread Shachar Shemesh
Hi list,
A PostgreSQL migration I am doing (the same one for which the OLE DB
driver was written) has finally passed the proof-of-concept stage
(phew). I now have lots and lots of tidbits, tricks and tips for SQL
Server migration, which I would love to put online. Is pgFoundry the
right place? I understand that the code snippets section is not yet
operational, but I would still love to put it online ASAP (i.e. - before I
forget), and to have it all in one place.

One problem detected during that stage, however, was that the program
pretty much relies on the collation being case insensitive. I am now
trying to gather information about adding case preservation to
PostgreSQL. I already suggested that we do this by changing the
procedures, and the idea was turned down. For example, a column UNIQUE
constraint must enforce that only one instance of a string be present,
case insensitively. Then again, making everything lower/upper case before
storing it was also rejected. Case preserving is what we are looking for.

Now, one idea that floated through my mind, though I have not yet looked
into how difficult it would be to implement, was to define a new
system-wide collation called, for example, en_USCI, and have that
collation define 'a' and 'A' as the same character. I'm looking for
someone with more experience with these things than me (i.e. - just about
anyone) to say whether such a thing is doable. I know I can reorder sort
criteria using a collation, but can I make two characters actually be the
same? As a side note, I'll mention that MS SQL Server uses the collation
field to define case insensitivity.

Assuming that fails, how hard would it be to create a case insensitive 
PostgreSQL? Would that be more like changing a couple of places (say, 
hash computation and string compares), or would that entail making 
hundreds of little changes all over the code? Is there anything in the 
regression testing infrastructure that can help check such a change?
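
For what it's worth, one case-preserving workaround that already works
today is an expression index on lower() (a sketch with hypothetical names;
it covers uniqueness and lookups, not sort order):

    CREATE TABLE accounts (username text, fullname text);

    -- enforce uniqueness case-insensitively while storing the original spelling
    CREATE UNIQUE INDEX accounts_username_ci ON accounts (lower(username));

    -- case-insensitive lookups that can use the index
    SELECT fullname FROM accounts WHERE lower(username) = lower('Shachar');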

Many thanks,
Shachar
--
Shachar Shemesh
Lingnu Open Source Consulting
http://www.lingnu.com/


Re: [HACKERS] Slony-I goes BETA (possible bug)

2004-06-06 Thread Jeff Davis
On Sun, 2004-06-06 at 10:32, Jan Wieck wrote:
 You are right. The local slon node checks every -s milliseconds
 (command-line switch) whether the sequence sl_action_seq has changed, and
 if so generates a SYNC event. Bumping a sequence alone does not cause
 this; only operations that invoke the log trigger on replicated tables do.

 Speaking of this, this also means that there is a gap between the last
 sl_action_seq-bumping operation and the commit of that transaction. If the
 local slon generates the SYNC right in that gap, the changes done in that
 transaction will not be replicated until the next transaction triggers
 another SYNC.

 I am not sure how to effectively avoid this problem without blindly
 creating SYNC events at some, perhaps less frequent, interval. Suggestions?


A couple thoughts occur to me:

Spurious SYNCs might not be the end of the world, because if someone is
using replication, they probably don't mind the unneeded costs of a SYNC
when the database is not being used heavily. If it is being used
heavily, the SYNCs will have to happen anyway.

Also, it might be possible to make use of NOTIFY somehow, because
notifications are only delivered after a transaction commits. Perhaps you
could issue a notify for each transaction that modifies a replicated
table, and slon could listen for that notification? That way, it wouldn't
SYNC before the transaction commits and miss the uncommitted data.
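
A rough sketch of that idea (names are invented; this is not how Slony-I
actually signals syncs today):

    -- fire a notification whenever a replicated table is changed;
    -- the notification is only delivered after the transaction commits
    CREATE FUNCTION repl_activity_notify() RETURNS trigger AS '
    BEGIN
        NOTIFY slony_activity;
        RETURN NULL;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER t2_activity_notify
        AFTER INSERT OR UPDATE OR DELETE ON t2
        FOR EACH STATEMENT EXECUTE PROCEDURE repl_activity_notify();

    -- the slon daemon's own connection would then run:
    LISTEN slony_activity;
    -- and generate a SYNC whenever the notification arrives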

Regards,
Jeff Davis




Re: [HACKERS] CREATE DATABASE on the heap with PostgreSQL?

2004-06-06 Thread Gaetano Mendola
Albretch wrote:
 After RTFM and googling for this piece of info, I think PostgreSQL
has no such feature.
 Why not?

 . Isn't RAM cheap enough nowadays? RAM is indeed so cheap that you
could design diskless combinations of OS + firewall + web servers
running entirely off RAM. Anything needing persistence would then be
sent to the backend DB.
 . Granted, coding a small data structure with exactly the functionality
you need will do exactly this, keeping the table's data on the heap.
But why do this if this is what DBMSs were designed for in the first
place? Also, each piece of custom-coded DB functionality will have to
be maintained.
 Is there any way, or at least an elegant hack, to do this?
 I don't see a technically convincing explanation for what could be a
design decision; could you explain the rationale behind it, if any?

If you access one table more frequently than others and you have enough
RAM, your OS will keep that table in RAM, don't you think?
BTW, if you trust your UPS, I'm sure you can create a RAM disk and
place that table on it.
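
Once the TABLESPACE support being worked on for 7.5 is in, that could look
roughly like this (paths and names are hypothetical, and anything stored
there is lost on reboot):

    -- assuming a RAM disk / tmpfs mounted at /mnt/ramdisk and owned by postgres
    CREATE TABLESPACE ramspace LOCATION '/mnt/ramdisk/pgdata';

    -- put only data you can afford to lose there
    CREATE TABLE web_sessions (
        id         text PRIMARY KEY,
        data       text,
        expires_at timestamptz
    ) TABLESPACE ramspace;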
Regards
Gaetano Mendola




Re: [GENERAL] [HACKERS] Slony-I goes BETA

2004-06-06 Thread Rick Gigger
The link you have down there is not the one on the site.  All of the 
links to that file work just fine for me on the live site.

Jan Wieck wrote:
On 6/4/2004 4:47 AM, Karel Zak wrote:
On Fri, Jun 04, 2004 at 01:01:19AM -0400, Jan Wieck wrote:
Yes, Slonik's,
it's true. After nearly a year the Slony-I project is entering the
BETA phase for the 1.0 release. Please visit

http://gborg.postgresql.org/project/slony1/news/newsfull.php?news_id=174

 Jan, the link
http://postgresql.org/~wieck/slony1/Slony-I-concept.pdf
 that is used on project pages doesn't work :-(
 
Karel

Great ... and there is no way to modify anything on gborg ... this is 
the first and last project I manage on any site where I don't have shell 
access to the content.

Jan


[HACKERS] CREATE DATABASE on the heap with PostgreSQL?

2004-06-06 Thread Albretch
DBMSs like MySQL and hsqldb (the only two I know of that can keep and
process tables on the heap) have a CREATE DATABASE variant in which the
'database' is specified to reside in memory, that is, RAM. For some
data handling cases in which persistence is not important, this is
all you need.

 These types of DBs are very fast and convenient for temporary and
transient 'tables' you don't really need or care to persist, like
session tables for web-based applications and temporary tables usually
built in subqueries that might be OK for the next request.

 One hack I could think of is running SELECT * on the needed table at
DB engine startup and periodically thereafter, and hoping the operating
system keeps it in its cache, but I have a gut feeling that having such
a feature in the DBMS itself would be faster.

 After RTFM and googling for this piece of info, I think PostgreSQL
has no such feature.

 Why not?

 . Isn't RAM cheap enough nowadays? RAM is indeed so cheap that you
could design diskless combinations of OS + firewall + web servers
running entirely off RAM. Anything needing persistence would then be
sent to the backend DB.

 . Granted, coding a small data structure with exactly the functionality
you need will do exactly this, keeping the table's data on the heap.
But why do this if this is what DBMSs were designed for in the first
place? Also, each piece of custom-coded DB functionality will have to
be maintained.

 Is there any way, or at least an elegant hack, to do this?

 I don't see a technically convincing explanation for what could be a
design decision; could you explain the rationale behind it, if any?



[HACKERS] Postgres dilemma

2004-06-06 Thread Neeraj Sharma
Hi

I am using Postgres 7.3.4 on Red Hat Linux 7.3 on an
i686 machine.

My app has one parent table and five child tables; that
is, the parent table has a primary key and the child
tables have a foreign key relationship with the parent.
My app initially does 500 inserts into each table.
After that is done, we insert 50 rows into each table and
delete the previous 50 records every second. The system
performs well for a while (about 30 hours). After 30 hours
I see that the size of the $PGDATA/base directory keeps
growing, up to 2 GB within 48 hours. The app is also
doing a vacuum every 45 seconds. Every time vacuum is
triggered, the system becomes extremely sluggish, and
this results in various errors like "deadlock
detected" (confirmed in the $PGDATA/../LOG/logfile).
vmstat also shows that the blocks sent to the block
device (disk) are going crazy.
I do not know what the remedy for this problem is. If
someone has come across this issue, please help me
as soon as possible.
++
NOTE: I cannot use Postgres 7.4 and higher releases
because the postmaster is guaranteed to crash within about
20 hours. I have already reported this bug many times
(Bug #1104 is one of them). All crashes show the same
behavior and error messages
(specified item offset is too large).
++

I would appreciate it if someone has a solution for my
problem; otherwise it looks like all our app
development done on top of Postgres is in vain.

Thanking you in advance.

Neeraj K Sharma
email: [EMAIL PROTECTED]
   [EMAIL PROTECTED]



Re: [HACKERS] Advice regarding configuration parameters

2004-06-06 Thread Joe Conway
Thomas Hallgren wrote:
Some very good suggestions were made here. What happens next? Will this end
up in a TODO list where someone can claim the task? (I'm trying to learn
how the process works.)
If someone doesn't jump right on it and make a diff -c proposal, it 
probably belongs on the TODO list. If your need is sufficiently high, 
and you have the time to take it on, then go for it ;-). If not, I might 
someday, but no promises for 7.5.

Joe



Re: [HACKERS] [GENERAL] Check for prepared statement

2004-06-06 Thread Greg Sabino Mullane

 
Fabrizio Mazzoni asked:
 How can I find out if a prepared statement already exists? Is
 there a function or a query I can execute?
 
Greg replied:
 I have not seen an answer to this, and I am curious as well. Anyone?
 
Alvaro Herrera suggested:
 Trying to prepare a dummy query maybe.
 
Sure, but this creates additional traffic, and causes a transaction
to fail, so it's not really feasible if you are within one. I checked
the pgsql source, and there is no public interface to determine if a
named statement already exists.
 
I do not know why the original poster needed to know, but I've solved
it within DBD::Pg by simply using the fact that statement names are
not shared across connections and having each connection (that is,
each database handle) store a counter starting at 1. Generated
statements are simply named dbdpg_1, dbdpg_2, etc., and the
counter is incremented only after a statement is created successfully.
There is also support for user-created (and user-named) statements,
with the caveat that it is up to the user, not DBD::Pg, to watch for
name collisions.
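
In SQL terms the generated statements look roughly like this (the table
and parameters are made up; only the dbdpg_N naming convention is the
point):

    PREPARE dbdpg_1 (integer) AS SELECT * FROM accounts WHERE id = $1;
    EXECUTE dbdpg_1(42);

    PREPARE dbdpg_2 (text) AS SELECT * FROM accounts WHERE username = $1;
    EXECUTE dbdpg_2('alice');

    -- re-preparing an existing name fails with an "already exists" error,
    -- which is why the counter only advances after a successful PREPARE
    DEALLOCATE dbdpg_1;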
 
--
Greg Sabino Mullane [EMAIL PROTECTED]
PGP Key: 0x14964AC8 200406061855
 





[HACKERS] serverlog function

2004-06-06 Thread Andreas Pflug
For administrators' convenience, I'd like to see a function that returns
the serverlog.
Are there any security or other issues that should prevent me from 
implementing this?

Regards,
Andreas



Re: [HACKERS] serverlog function

2004-06-06 Thread Tom Lane
Andreas Pflug [EMAIL PROTECTED] writes:
 For administrators' convenience, I'd like to see a function that returns
 the serverlog.

What do you mean by returns the serverlog?  Are you going to magically
recover data that has gone to stderr or the syslogd daemon?  If so, how?
And why wouldn't you just go and look at the log file, instead?

regards, tom lane



Re: [HACKERS] Postgres dilemma

2004-06-06 Thread Joshua D. Drake
Hello,
Perhaps you could provide some more detailed information?
Example of queries?
Type of hardware?
Operating system?
Why are you running a vacuum every 45 seconds? Increase your fsm_pages and
run it every hour.
Could the vacuums be trampling each other, so that more than one
vacuum is running at a time?

J
Neeraj Sharma wrote:
Hi
I am using Postgres 7.3.4 on Red Hat Linux 7.3 on an
i686 machine.
My app has one parent table and five child tables; that
is, the parent table has a primary key and the child
tables have a foreign key relationship with the parent.
My app initially does 500 inserts into each table.
After that is done, we insert 50 rows into each table and
delete the previous 50 records every second. The system
performs well for a while (about 30 hours). After 30 hours
I see that the size of the $PGDATA/base directory keeps
growing, up to 2 GB within 48 hours. The app is also
doing a vacuum every 45 seconds. Every time vacuum is
triggered, the system becomes extremely sluggish, and
this results in various errors like "deadlock
detected" (confirmed in the $PGDATA/../LOG/logfile).
vmstat also shows that the blocks sent to the block
device (disk) are going crazy.
I do not know what the remedy for this problem is. If
someone has come across this issue, please help me
as soon as possible.
++
NOTE: I cannot use Postgres 7.4 and higher releases
because the postmaster is guaranteed to crash within about
20 hours. I have already reported this bug many times
(Bug #1104 is one of them). All crashes show the same
behavior and error messages
(specified item offset is too large).
++

I would appreciate it if someone has a solution for my
problem; otherwise it looks like all our app
development done on top of Postgres is in vain.
Thanking you in advance.
Neeraj K Sharma
email: [EMAIL PROTECTED]
  [EMAIL PROTECTED]


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL


Re: [HACKERS] Postgres dilemma

2004-06-06 Thread Tom Lane
Joshua D. Drake [EMAIL PROTECTED] writes:
 Why are you running a vacuum every 45 seconds? Increase your fsm_pages and
 run it every hour.

If I understood his description correctly, he's turning over 10% of a
500-row table every minute.  So waiting an hour would mean 3000 dead
rows in a 500-live-row table, which seems excessive.  I'd agree with
running a vacuum on this specific table every five minutes or so.

Given that he is doing more than enough vacuums, I think that the
problem is probably not table bloat, but index bloat (ie, from a
constantly shifting range of live index keys, which pre-7.4 btrees
didn't handle well at all).  This is just speculation though, without
proof as yet.
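
If index bloat is indeed the culprit, a targeted maintenance cycle along
these lines may help (the table name is hypothetical; note that REINDEX
takes an exclusive lock, so schedule it for a quiet moment):

    -- vacuum just the hot tables frequently instead of the whole database
    VACUUM ANALYZE parent_table;

    -- periodically rebuild their indexes to reclaim dead btree pages on pre-7.4
    REINDEX TABLE parent_table;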

regards, tom lane



Re: [HACKERS] CREATE DATABASE on the heap with PostgreSQL?

2004-06-06 Thread jihuang
Can users force a table / database / cluster to be stored purely in RAM?

Or, in an indirect way, could one make a RAM disk device and assign that
device as a PostgreSQL cluster?

I think this feature would enable a lot of high-performance usage.
Any suggestions?
jihuang
Gaetano Mendola wrote:
Albretch wrote:
 After RTFM and googling for this piece of info, I think PostgreSQL
has no such feature.
 Why not?
 . Isn't RAM cheap enough nowadays? RAM is indeed so cheap that you
could design diskless combinations of OS + firewall + web servers
running entirely off RAM. Anything needing persistence would then be
sent to the backend DB.
 . Granted, coding a small data structure with exactly the functionality
you need will do exactly this, keeping the table's data on the heap.
But why do this if this is what DBMSs were designed for in the first
place? Also, each piece of custom-coded DB functionality will have to
be maintained.
 Is there any way, or at least an elegant hack, to do this?
 I don't see a technically convincing explanation for what could be a
design decision; could you explain the rationale behind it, if any?

If you access one table more frequently than others and you have enough
RAM, your OS will keep that table in RAM, don't you think?
BTW, if you trust your UPS, I'm sure you can create a RAM disk and
place that table on it.
Regards
Gaetano Mendola


