I've tried to use Dan Tow's tuning method and
created all the right indexes from his diagramming method, but the query still
performs quite slowly, both inside the application and in pgAdmin
III. Would anyone be kind enough to help me tune it so that it performs
better in Postgres? I do
I would like to know whether there is any command by which the server will give the
record ID back to the client when the client inserts data and the server generates
an auto-increment ID for that record.
For example, if many clients insert money data into the server and each
record from each client
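(Not from the original thread, just a sketch of the usual idiom, with made-up table and column names: a serial column plus currval() lets each client read back the ID its own insert generated, since currval() is session-local.)

    -- Hypothetical table; names are illustrative only.
    CREATE TABLE money_data (
        id     serial  PRIMARY KEY,
        client text    NOT NULL,
        amount numeric NOT NULL
    );

    INSERT INTO money_data (client, amount) VALUES ('client-a', 42.00);
    -- currval() is session-local, so each client sees only the ID
    -- generated by its own insert, even under concurrency.
    SELECT currval('money_data_id_seq');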
Ramon Bastiaans wrote:
My current estimate is that I have to store (somehow)
around 1 billion values each month (possibly more).
You should post the actual number or power of ten,
since "billion" is not always interpreted the same way...
rgds
thomas
Ramon Bastiaans wrote:
I am doing research for a project of mine where I need to store
several billion values for a monitoring and historical tracking system
for a big computer system. My current estimate is that I have to store
(somehow) around 1 billion values each month (possibly more).
I wa
Dan Harris <[EMAIL PROTECTED]> writes:
> query that uses LIKE. In my research I have read that the locale
> setting may affect PostgreSQL's choice of seq scan vs index scan.
Non-C-locale indexes can't support LIKE because the sort ordering
isn't necessarily right.
> I am running Fedora Core 2 a
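(A sketch, not from the thread itself: since 7.4, an index built with the text_pattern_ops operator class supports left-anchored LIKE regardless of locale. The table and column names below are assumptions.)

    -- Hypothetical names, shown only to illustrate the operator class.
    CREATE INDEX idx_logs_msg_pattern ON logs (msg text_pattern_ops);

    -- A left-anchored pattern can now use the index even in a non-C locale:
    SELECT * FROM logs WHERE msg LIKE 'ERROR:%';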
Ramon,
> What would be important issues when setting up a database this big, and
> is it at all doable? Or would it be insane to think about storing up
> to 5-10 billion rows in a postgres database.
What's your budget? You're not going to do this on a Dell 2650. Do you
have the kind of a
Sven Willenberger wrote:
> Trying to determine the best overall approach for the following
> scenario:
>
> Each month our primary table accumulates some 30 million rows (which
> could very well hit 60+ million rows per month by year's end). Basically
> there will end up being a lot of historical d
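(Not necessarily what the thread settled on, but one commonly discussed sketch for this kind of monthly accumulation on the PostgreSQL of that era: inheritance-based partitioning, so old months can be archived or dropped as whole tables instead of with huge DELETEs. All names here are made up.)

    -- Parent table plus one child table per month; names are illustrative.
    CREATE TABLE events (
        id      bigint      NOT NULL,
        ts      timestamptz NOT NULL,
        payload text
    );

    CREATE TABLE events_2005_03 (
        CHECK (ts >= '2005-03-01' AND ts < '2005-04-01')
    ) INHERITS (events);

    -- New rows go to the current month's child; a finished month can be
    -- archived or dropped without touching the other months.
    INSERT INTO events_2005_03 (id, ts, payload)
    VALUES (1, '2005-03-01 10:00+00', '...');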
Greetings,
I have been beating myself up today trying to optimize indices for a
query that uses LIKE. In my research I have read that the locale
setting may affect PostgreSQL's choice of seq scan vs index scan. I am
running Fedora Core 2 and it appears when I run "locale" that it is set
to 'e
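(For reference, the locale the cluster was initialized with can also be checked from inside the database, independent of what the shell's "locale" command reports:)

    SHOW lc_collate;
    SHOW lc_ctype;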
Markus Schaber wrote:
Hi, John,
John Arbash Meinel wrote:
I am doing research for a project of mine where I need to store
several billion values for a monitoring and historical tracking system
for a big computer system. My current estimate is that I have to store
(somehow) around 1 billion value
Sven Willenberger wrote:
On Tue, 2005-03-01 at 09:48 -0600, John Arbash Meinel wrote:
Sven Willenberger wrote:
Trying to determine the best overall approach for the following
scenario:
Each month our primary table accumulates some 30 million rows (which
could very well hit 60+ million rows per mo
On Tue, 2005-03-01 at 09:48 -0600, John Arbash Meinel wrote:
> Sven Willenberger wrote:
>
> >Trying to determine the best overall approach for the following
> >scenario:
> >
> >Each month our primary table accumulates some 30 million rows (which
> >could very well hit 60+ million rows per month by
Hi, John,
John Arbash Meinel wrote:
>> I am doing research for a project of mine where I need to store
>> several billion values for a monitoring and historical tracking system
>> for a big computer system. My current estimate is that I have to store
>> (somehow) around 1 billion values each mo
Isn't that 385 rows/second? Presumably one can insert more than one
row in a transaction?
-- Alan
Vig, Sandor (G/FI-2) wrote:
385 transaction/sec?
fsync = false
risky but fast.
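(A sketch of the batching point, with invented table and column names: paying the commit/fsync cost once per batch instead of once per row is what lifts the per-row insert rate, without resorting to fsync = false.)

    BEGIN;
    INSERT INTO samples (host, metric, value, ts) VALUES ('h1', 'load', 0.70, now());
    INSERT INTO samples (host, metric, value, ts) VALUES ('h2', 'load', 1.30, now());
    -- ... hundreds or thousands more rows ...
    COMMIT;

    -- For bulk loads, COPY is faster still (file path is a placeholder):
    COPY samples (host, metric, value, ts) FROM '/path/to/batch.csv' WITH CSV;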
What do your "values" consist of?
Would it be possible to group several hundred or thousand of them into a
single row somehow that still makes it possible for your queries to get at
them efficiently?
What kind of queries will you want to run against the data?
For example if you have a measurem
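(One hypothetical way to do that grouping, with made-up names: an array column holding, say, an hour of per-minute samples per host and metric, which cuts the row count by a factor of 60.)

    CREATE TABLE samples_hourly (
        host   text        NOT NULL,
        metric text        NOT NULL,
        hour   timestamptz NOT NULL,
        vals   float8[]    NOT NULL,   -- 60 per-minute values
        PRIMARY KEY (host, metric, hour)
    );

    -- Whole-hour queries stay cheap; a single minute is an array subscript:
    SELECT vals[15]
      FROM samples_hourly
     WHERE host = 'h1' AND metric = 'load' AND hour = '2005-03-01 12:00+00';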
Vig, Sandor (G/FI-2) wrote:
385 transaction/sec?
fsync = false
risky but fast.
I think with a dedicated RAID10 for pg_xlog (or possibly a battery
backed up ramdisk), and then a good amount of disks in a bulk RAID10 or
possibly a good partitioning of the db across multiple raids, you could
probably
385 transaction/sec?
fsync = false
risky but fast.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of John Arbash
Meinel
Sent: Tuesday, March 01, 2005 4:19 PM
To: Ramon Bastiaans
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] multi billion row ta
Ramon Bastiaans wrote:
Hi all,
I am doing research for a project of mine where I need to store
several billion values for a monitoring and historical tracking system
for a big computer system. My current estimate is that I have to store
(somehow) around 1 billion values each month (possibly more).
Hi, Ramon,
Ramon Bastiaans wrote:
> The database's performance is important. There would be no use in
> storing the data if a query takes ages. Queries should be quite fast
> if possible.
Which kind of query do you want to run?
Queries that involve only a few rows should stay quite fast w
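(A sketch under assumed names: with an index that matches the common access path, a query touching a narrow slice of the data stays fast no matter how large the table grows.)

    CREATE INDEX idx_samples_host_ts ON samples (host, ts);

    SELECT ts, value
      FROM samples
     WHERE host = 'h1'
       AND ts BETWEEN '2005-03-01' AND '2005-03-02';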
On Mar 1, 2005, at 4:34 AM, Ramon Bastiaans wrote:
What would be important issues when setting up a database this big,
and is it at all doable? Or would it be insane to think about
storing up to 5-10 billion rows in a postgres database.
Buy a bunch of disks.
And then go out and buy more disks.
Hi all,
I am doing research for a project of mine where I need to store several
billion values for a monitoring and historical tracking system for a big
computer system. My current estimate is that I have to store (somehow)
around 1 billion values each month (possibly more).
I was wondering if