Ow Mun Heng <[EMAIL PROTECTED]> writes:
> I just browsed to my $PGDATA location and noticed that there are some
> tables whose names end in .1
TFM has some useful background knowledge for that sort of thing:
http://www.postgresql.org/docs/8.2/static/storage.html
regards,
Hi,
I'm running out of space on one of my partitions and I still have not
gotten all the data loaded yet. I've read that one could symlink the
pg_xlog directory to another drive. I'm wondering if I can do the
same for specific tables as well.
Thanks.
I've already done a pg_dump of the entire
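Rather than symlinking individual relation files, the usual answer for putting specific tables on another drive is a tablespace (available since 8.0). A sketch, with a hypothetical directory and table names:

```sql
-- Create a tablespace on the roomier partition (path is hypothetical)
CREATE TABLESPACE bigdisk LOCATION '/mnt/bigdrive/pgdata';

-- Put a new table there, or move an existing one
CREATE TABLE raw_data (id int) TABLESPACE bigdisk;
ALTER TABLE some_existing_table SET TABLESPACE bigdisk;
```

Moving a table rewrites its files in the new location, so the target partition needs room for a full copy during the move.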
I just browsed to my $PGDATA location and noticed that there are some
tables whose names end in .1
# ls -lahS | egrep '(24694|24702|24926)'
-rw------- 1 postgres postgres 1.0G Sep 3 22:56 24694
-rw------- 1 postgres postgres 1.0G Sep 3 22:52 24702
-rw------- 1 postgres postgres 1.0G Sep 3 22:5
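Those numeric files are a relation's data, split into 1GB segments (hence the `.1` suffix once a table passes 1GB). A filenode from the listing can be mapped back to its table via `pg_class` (run this in the database the files belong to):

```sql
-- Which relation does on-disk file 24694 belong to?
SELECT relname, relkind
FROM pg_class
WHERE relfilenode = 24694;
```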
On Tue, 2007-09-04 at 10:06 +0800, Ow Mun Heng wrote:
> On Mon, 2007-09-03 at 11:31 +0100, Gregory Stark wrote:
> > "Ow Mun Heng" <[EMAIL PROTECTED]> writes:
> > >-> Bitmap Heap Scan on drv (cost=30.44..4414.39
> > > rows=1291 width=24) (actual time=62.980..142.594 rows=12591 lo
Alvaro Herrera wrote:
> Luiz K. Matsumura wrote:
> > Hi all,
> > I want to suggest the inclusion of a new function in pg_catalog.
> > Since we have set_config(text, text, boolean)
> > can we have a get_config( text ) ?
> Hum, isn't this current_setting()?
Oh, you are right, forget my suggestion.
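For the record, the existing pair already covers both directions:

```sql
SELECT set_config('work_mem', '64MB', false);  -- set (false = not transaction-local)
SELECT current_setting('work_mem');            -- read it back
```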
On Mon, 2007-09-03 at 11:31 +0100, Gregory Stark wrote:
> "Ow Mun Heng" <[EMAIL PROTECTED]> writes:
> >
> > How can I persuade PG to use the index w/o resorting to setting seqscan
> > = false
>
> The usual knob to fiddle with is random_page_cost. If your database fits
> mostly in memory you may wa
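Concretely, the experiment usually looks like this (the filter column here is a placeholder; `drv` is the table from the earlier plan):

```sql
SHOW random_page_cost;        -- 4.0 is the default
SET random_page_cost = 2.0;   -- cheaper random I/O makes indexes look better
EXPLAIN ANALYZE
SELECT * FROM drv WHERE some_col = 42;   -- hypothetical query
```

`SET` only affects the current session, so it is safe to experiment before changing postgresql.conf.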
Hi all,
I want to suggest the inclusion of a new function in pg_catalog.
Since we have set_config(text, text, boolean)
can we have a get_config( text ) ?
I researched and found an internal function that does it; we just don't
have a call for it.
I think it could look like the below (I don't know if
Henrik <[EMAIL PROTECTED]> writes:
> I've already started to redesign the database to avoid the huge number
> of rows in this big table but I'm still curious why autovacuum hogs
> over 200MB when it is not running?
On what do you base that assertion?
> Is it the shared_buffers?
Well, 128M in
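A likely explanation (an assumption here, since the reply is cut off): tools like top count the shared buffer pool against every backend, the autovacuum process included, so its apparent footprint is mostly shared memory rather than private allocation. The two relevant settings can be checked with:

```sql
SHOW shared_buffers;         -- shared by all backends; counted per-process by top/ps
SHOW maintenance_work_mem;   -- private memory a VACUUM may actually allocate
```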
"Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> On 04/09/07, Richard Broersma Jr <[EMAIL PROTECTED]> wrote:
>> That's odd, I wonder why the EXPLAIN ANALYZE time is so much less than the
>> logged select
>> statement times?
> Because the statement has been executed and is in the cache.
That answer
On 9/3/07, Rob Kirkbride <[EMAIL PROTECTED]> wrote:
> I am on a Linux platform but I'm going to need some pointers regarding
> the cron job. Are you suggesting that I parse the dump file? I assume I
> would need to switch to using inserts and then parse the dump looking
> for where I need to start
--- Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> LOG: duration: 93473.282 ms statement: select t_info, dstats, id
> from trades where t_alias = '17huv' and status = 'Y'
>
> ---
>
> Index Scan using trades_unique_t_alias on trades (cost=0.00..3.41
> Time: 2.990 ms
That's odd, I wonder why the EXPLAIN ANALYZE time is so much less than
the logged select statement times?
Hello list,
The system is running Linux kernel 2.6.18 with Postgres 8.2.4 and 1GB of RAM.
I have a 50GB database whose biggest table takes about 30 GB
and has about 200 million rows.
I've already started to redesign the database to avoid the huge number
of rows in this big table but I'm st
On 03/09/07, Alban Hertroys <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> As I understand it it's a sample of how the data is distributed.
> Probably it's based on statistical mathematics that specifies a minimum
> size for a representative sample of a given data set. It boils down to:
> "If yo
Phoenix Kiula wrote:
> Lots of posts here in reponse to performance question have the
> recommendation "increase the stats on that column". From whatever
> succinct reading is made available on the postgres site, I gather that
> this aids the planner in getting some info about some of the data. Am
>
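The knob being referred to ("increase the stats on that column") is the per-column statistics target. A sketch, using the table from the earlier slow query:

```sql
-- The default target is 10 in the 8.x series; raising it makes
-- ANALYZE sample more rows and keep more most_common_vals/histogram
-- entries for this column.
ALTER TABLE trades ALTER COLUMN t_alias SET STATISTICS 100;
ANALYZE trades;
```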
On 03/09/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> most_common_vals will (and should) be empty if there aren't actually any
> common values, but aren't you getting a histogram? Exactly what
> performance do you think will be improved?
Lots of posts
Norberto Delle <[EMAIL PROTECTED]> writes:
> 2007-08-20 09:12:09 LOG: archived transaction log file
> "0001000200E7"
> 2007-08-20 09:12:20 LOG: archived transaction log file
> "0001000200E8"
> 2007-08-20 09:12:21 LOG: could not receive data from client: Unknown
> winso
"Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> A couple of questions about the "most_common_vals" stuff in pg_stats
> for a high traffic table:
> 1. Can I tell the stats collector to collect only values of a column
> where a certain regex is matched?
Not directly, but you could set up a partial in
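The cut-off suggestion is presumably a partial index: it restricts the index to rows matching a predicate, which is exactly the "only values I care about" effect being asked for. A sketch against the table from the earlier query:

```sql
-- Index only the rows the queries actually touch
CREATE INDEX trades_active_alias_idx
    ON trades (t_alias)
 WHERE status = 'Y';
```

Queries must repeat the `status = 'Y'` condition for the planner to consider the partial index.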
Tom Lane writes:
> Norberto Delle <[EMAIL PROTECTED]> writes:
> > I have a PostgreSQL 8.2.4 installation running under Windows XP with WAL
> > archiving activated.
> > But at some point Postgres began to ask to archive a WAL segment that
> > isn't in the pg_xlog directory. I thought that a segment that isn't
> > suc
tkdchen wrote:
> I install PostgreSQL under Windows Vista, but the MSI failed. It told
> me that it has no right to create postgres user account. I don't know
> why. I just run the MSI package with an Administrator account. Please
> help me to solve this problem.
Sounds like the UAC (User Access Cont
Hi,
it is my first shot using Mono and I failed to get the example from
http://www.mono-project.de/wiki/keyword/PostgreSQL/
working. The reason is obviously that whatever I tried, NpgsqlConnection
tries to use password authentication but I have configured my system
that ident authentication
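If Npgsql insists on password authentication, one common workaround is to let TCP connections use md5 while keeping ident for local Unix sockets; a pg_hba.conf sketch (the exact lines and their order are an assumption about the local setup):

```
# TYPE  DATABASE  USER  CIDR-ADDRESS   METHOD
host    all       all   127.0.0.1/32   md5              # Npgsql connects over TCP
local   all       all                  ident sameuser   # local psql keeps using ident
```

pg_hba.conf is read top-down and the first matching line wins, so order matters; reload the server after editing.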
A couple of questions about the "most_common_vals" stuff in pg_stats
for a high traffic table:
1. Can I tell the stats collector to collect only values of a column
where a certain regex is matched? It is currently collecting the 500
values where most of them are values that I don't want, so it's
p
Hi,
I installed PostgreSQL under Windows Vista, but the MSI failed. It told
me that it has no right to create the postgres user account. I don't know
why. I just ran the MSI package with an Administrator account. Please
help me to solve this problem.
- Original Message -
From: "Albe Laurenz" <[EMAIL PROTECTED]>
To: "Ashish Karalkar *EXTERN*" <[EMAIL PROTECTED]>
Cc:
Sent: Monday, September 03, 2007 4:54 PM
Subject: RE: [GENERAL] invalid byte sequence for encoding "UTF8": 0xff
Ashish Karalkar wrote:
I have a data script which runs
"Ow Mun Heng" <[EMAIL PROTECTED]> writes:
> Same query, executed twice, once using seqscan enabled and the other
> with it disabled. Difference is nearly night and day.
>
>
> How can I persuade PG to use the index w/o resorting to setting seqscan
> = false
The usual knob to fiddle with is random_
On 9/3/07, Rob Kirkbride <[EMAIL PROTECTED]> wrote:
> We're using hibernate to write to the database. Partitioning looks like it
> will be too much of a re-architecture. In reply to Andrej we do have a
> logged_time entity in the required tables. That being the case how does that
> help me with th
Same query, executed twice, once with seqscan enabled and once
with it disabled. The difference is nearly night and day.
How can I persuade PG to use the index w/o resorting to setting seqscan
= false
(actually, I don't know what the pros or cons are - I read posts from the
archives as far back as
Hello All,
I have a data script which runs fine from the PgAdmin SQL Editor, but when I run
it from the command prompt I get the following error:
test=# \i /usr/local/pgsql/qsweb1/QSWEB_100_4_Default_Data.sql
psql:/usr/local/pgsql/qsweb1/QSWEB_100_4_Default_Data.sql:1: ERROR: invalid byte sequence for encoding "UTF8": 0xff
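A 0xff byte in the very first line often means the file was saved as UTF-16 (whose byte-order mark begins 0xFF 0xFE) or in some single-byte encoding; PgAdmin converts transparently, psql does not. Two common fixes, the first under the assumption the file really is LATIN1:

```sql
-- Option 1: tell psql what encoding the file actually uses,
-- so the server converts it on the way in
\encoding LATIN1
\i /usr/local/pgsql/qsweb1/QSWEB_100_4_Default_Data.sql

-- Option 2: re-save the script from the editor as UTF-8 (without BOM)
```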
On 9/3/07, Rob Kirkbride <[EMAIL PROTECTED]> wrote:
> So basically I need a dump/restore that only appends new
> data to the reports server database.
I guess that will all depend on whether or not your
data has a record of the time it got stuck in the cluster
or not ... if there's no concept of a
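If each row does carry a timestamp, the nightly job can export only rows past a remembered high-water mark instead of parsing a full dump. A sketch, with a hypothetical table name and the `logged_time` column mentioned elsewhere in the thread (8.2 supports COPY from an arbitrary query):

```sql
COPY (SELECT *
        FROM logged_data
       WHERE logged_time > '2007-09-01 00:00:00')  -- last sync point
  TO '/tmp/increment.csv' WITH CSV;
```

The resulting file can then be loaded on the reports server with `COPY ... FROM`, and the high-water mark advanced to the newest `logged_time` exported.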
Hi,
I've got a postgres database collecting logged data. This data I have to keep
for at least 3 years. The data in the first instance is being recorded in a
postgres cluster. This then needs to be moved to a reports database server for
analysis. Therefore I'd like a job to dump data on the cluster sa