Hi,
Does anyone know postgres performance in Linux vs Mac
Os.. Pls suggest the best platform to go for..
Thanks in Advance.
Priya
Hemapriya wrote:
| Hi,
|
| Does anyone know postgres performance in Linux vs Mac
| Os.. Pls suggest the best platform to go for..
|
| Thanks in Advance.
|
| Priya
|
|
Well, my 2c is that it comes down to how much money you're willing to spend
(i.e. a Mac is more expensive).
On Thu, 1 Apr 2004, Hemapriya wrote:
> Hi,
> Does anyone know postgres performance in Linux vs Mac
> Os.. Pls suggest the best platform to go for..
Generally, Linux on x86, dollar for dollar, offers better performance than
the Mac. Even a relatively cheap single-CPU ~2GHz machine with 1GB of RAM
Greetings,

It seems that stored procedures that use SETOF are slower than regular SQL
commands. Why does this happen? Please check out the following example.

bxs=# \d cham_chamada
Table "public.cham_chamada"
Column | Type |
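[Editor's sketch: the example above is truncated, so here is a minimal
reconstruction of the comparison being described. The table and column names
(cham_chamada, dt_inicial, identidadea) and the function name teste() come
from the thread; the function body is an assumption, since it was not shown.]

```sql
-- Plain query: the executor streams rows directly to the client.
EXPLAIN ANALYZE SELECT dt_inicial, identidadea FROM cham_chamada cc;

-- A hypothetical SETOF function wrapping the same table
-- (the original teste() body was not posted):
CREATE OR REPLACE FUNCTION teste() RETURNS SETOF cham_chamada AS '
    SELECT * FROM cham_chamada;
' LANGUAGE sql;

-- Calling the function materializes the whole result set first,
-- which is why it tends to be slower than the bare SELECT.
EXPLAIN ANALYZE SELECT * FROM teste();
```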
I have a database that will hold massive amounts of scientific data.
Potentially, some estimates are that we could get into needing
Petabytes (1,000 Terabytes) of storage.
1. Do off-the-shelf servers exist that will do Petabyte storage?
2. Is it possible for PostgreSQL to segment a database
Not really answering the question but I thought I would post this anyway
as it may be of interest.
If you want to have some fun (depending on how production-level the
system needs to be) you can build this level of storage using Linux
clusters and cheap IDE drives. No April Fools' joke! I have
let alone the storage limit of 2GB per
table. So sadly, PG would have to bow out of this IMHO unless someone
else nukes me on this!
I just checked the PostgreSQL website and it says that tables are limited to
16 TB not 2 GB.
-Tony
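[Editor's note: the 16 TB figure follows from PostgreSQL's default 8 KB
block size and a 32-bit block number (2^31 blocks per table). A quick sanity
check in psql, assuming those defaults:]

```sql
-- 2^31 blocks * 8192 bytes/block = 16 TiB per table
SELECT 2147483648::bigint * 8192 AS max_table_bytes;
-- max_table_bytes = 17592186044416  (= 16 TiB)
```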
- Original Message -
From: Bradley Kieser [EMAIL PROTECTED]
To: Tony Reina [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, April 01, 2004 8:53 PM
Subject: Re: [ADMIN] Do Petabyte storage solutions exist?
let alone the storage limit of 2GB per
table. So sadly, PG would have to
Yeah, move on over to Oracle. Even on older versions the file limit may have
been 2GB, but a tablespace could have more than one datafile. The true limit
there is 4194303 blocks, where a block can be 2KB, 4KB, 8KB, 16KB, 32KB, or
64KB, and 10g adds 128KB. Then each table/index can have
AFAIK Postgres uses an internal limit of 2 GB per table file, with many
files per table adding up to some terabytes. So don't worry!
Let's see what one of the gurus tells us. Bye.
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On behalf of Tony and Bryn
On Thu, 1 Apr 2004, Bradley Kieser wrote:
I think as far as PG storage goes you're really on a losing streak here
because PG clustering really isn't going to support this across multiple
servers. We're not even close to the mark as far as clustered servers
and replication management goes,
On Thu, 1 Apr 2004, Tony and Bryn Reina wrote:
let alone the storage limit of 2GB per
table. So sadly, PG would have to bow out of this IMHO unless someone
else nukes me on this!
I just checked the PostgreSQL website and it says that tables are limited to
16 TB not 2 GB.
Actually,
Can anyone recommend an editor (windows OR linux) for writing plpgsql
code, that might be friendlier than a standard text editor?
Nice features I can think of might be:
- smart tabbing (1 tab = N spaces)
- code coloring (esp. quoted strings!)
- parens/brackets matching
Thanks,
Andrew
Eduardo Naschenweng [EMAIL PROTECTED] writes:
bxs=# EXPLAIN ANALYZE SELECT dt_inicial, identidadea FROM cham_chamada cc;
[ is faster than ]
bxs=# EXPLAIN ANALYZE SELECT * FROM teste();
nodeFunctionscan.c insists on cramming the results of the function into
a tuplestore and then