Christian Schröder <[EMAIL PROTECTED]> writes:
> hubert depesz lubaczewski wrote:
>> On Wed, Aug 22, 2007 at 07:07:20PM +0200, Christian Schröder wrote:
>>> These are the current settings from the server configuration:
>>> shared_buffers = 3GB
>>
>> this is *way* too much. i
hubert depesz lubaczewski wrote:
On Wed, Aug 22, 2007 at 07:07:20PM +0200, Christian Schröder wrote:
These are the current settings from the server configuration:
shared_buffers = 3GB
this is *way* too much. i would suggest lowering it to 1gig *at most*.
Ok, I can do this, but wh
On Thu, 23 Aug 2007, Dmitry Koterov wrote:
Oh! Thanks!
delete from pg_ts_cfgmap where dict_name = ARRAY['ru_stem'];
solves the root of the problem. But unfortunately
russian.med(ru_ispell_cp1251) contains all Russian names, so "Ivanov"
is converted to
"Ivan" by ispell too. :-(
Now
select lex
Check it out here:
http://www.amsoftwaredesign.com/pg_vs_fb
When comparing in the grid the only major advantage FB has is probably
BLOB support.
PG only supports 1 GB while FB supports 32 GB. Bytea is pretty slow as
well when compared to the FB BLOB support.
The other area is Character set
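For readers weighing the two PostgreSQL options mentioned above: bytea stores the value inline (subject to the 1 GB field limit), while the large-object facility stores data out of line and streams it in chunks. A minimal sketch, with a hypothetical table and illustrative file paths (lo_import and lo_export run on the server's filesystem):

```sql
-- Hypothetical table; body holds a reference (OID) to a large object.
CREATE TABLE docs (id serial PRIMARY KEY, body oid);

-- Import a file into a large object (server-side path, illustrative).
INSERT INTO docs (body) VALUES (lo_import('/tmp/report.pdf'));

-- Export it back out to the server's filesystem.
SELECT lo_export(body, '/tmp/report_copy.pdf') FROM docs WHERE id = 1;
```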
"Max Zorloff" <[EMAIL PROTECTED]> writes:
> ... The problem is that after the number of concurrent users rises to
> 100, CPU becomes almost 100% loaded. How do I find out what's hogging the
> CPU?
> 'top' shows the daemon using 8% cpu on top, and some number of postgres
> processes each using 2%
"Susy Ham" <[EMAIL PROTECTED]> writes:
> We are trying to perform a 'reindex database' and it will fail at
> varying points with a message like:
> ERROR: could not open relation with OID 587495058
> or
> ERROR: could not open relation with OID 58
On Aug 22, 2007, at 21:42 , Ged wrote:
I've been looking to see if postgresql allows type definitions like
the ones I'm used to from Oracle, where I can say:
CREATE TYPE npc_page_details_type AS
(
...
instance_name instances.name%type,
...
);
and it will pick up the current type o
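For what it's worth, PostgreSQL's CREATE TYPE does not accept Oracle-style %type column references, but the notation is available inside PL/pgSQL declarations. A hedged sketch reusing the instances.name column from the post (the function name and the id lookup column are hypothetical):

```sql
-- %TYPE declares v_name with the declared type of instances.name.
CREATE OR REPLACE FUNCTION instance_display_name(p_id integer)
RETURNS text AS $$
DECLARE
    v_name instances.name%TYPE;
BEGIN
    -- hypothetical lookup by primary key
    SELECT name INTO v_name FROM instances WHERE id = p_id;
    RETURN v_name;
END;
$$ LANGUAGE plpgsql;
```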
On Wed, 22 Aug 2007, Dmitry Koterov wrote:
We are trying to use HP CISS controller (Smart Array E200i)
There have been multiple reports of problems with general performance
issues specifically with the cciss Linux driver for other HP cards. The
E200i isn't from the same series, but I wouldn'
Jeff Amiel <[EMAIL PROTECTED]> writes:
> Even more odd is that a LOCAL pg_dump (from on the
> box) succeeded just fine tonight (after the second
> crash).
That seems to eliminate the theory of a crash due to data corruption
... unless the corruption somehow repaired itself in the intervening
30 mi
From the logs tonight when the second crash occurred..
Aug 22 20:45:12 db-1 postgres[5805]: [ID 748848
local0.info] [6-1] 2007-08-22 20:45:12 CDT LOG:
received smart shutdown request
Aug 22 20:45:12 db-1 postgres[5805]: [ID 748848
local0.info] [7-1] 2007-08-22 20:45:12 CDT LOG:
server proce
On Wed, 22 Aug 2007, Ron Johnson wrote:
That seems to be a very extreme ratio. Most databases do *many*
times more reads than writes.
Yes, but the OS has a lot more memory to cache the reads for you, so you
should be relying more heavily on it in cases like this where the card has
a relativ
Fairly new (less than a week) install.
"PostgreSQL 8.2.4 on i386-pc-solaris2.10, compiled by
GCC gcc (GCC) 3.4.3 (csl-sol210-3_4-branch+sol_rpath)"
database size around 43 gigabytes.
2 attempts at a pg_dump across the network caused the
database to go down...
The first time I thought it was beca
Checking out a bug in my code I find in my table definition:
CREATE TABLE publications
(
...
name character varying(60) NOT NULL,
...
);
while in the function I use to populate the web page, which
amalgamates info from several sources, I have:
CREATE TYPE npc_page_details_type AS
(
John DeSoi wrote:
>
> On Aug 22, 2007, at 7:21 PM, Geoffrey wrote:
>
>> We are looking for an open source reporting tool that will enable
>> users to generate their own reports. Something like Crystal Reports. ;)
>
> I was looking at a couple the o
On Aug 21, 2007, at 14:35 , Josh Trutwin wrote:
it's a simple example but the number of list-type
fields is anticipated to be VERY high so I decided to deal with
arrays instead of creating all sorts of normalized tables.
Was it the right thing to do?
No offense, but I doubt it. This doesn't s
On Aug 22, 2007, at 7:21 PM, Geoffrey wrote:
We are looking for an open source reporting tool that will enable
users to generate their own reports. Something like Crystal
Reports. ;)
I was looking at a couple the other day: iReport (part of Jasper),
OpenRPT, and DataVision (http://datav
Bill Thoen <[EMAIL PROTECTED]> writes:
> As you requested, here's what bt in gdb reports:
> (gdb) bt
> #0 0x003054264571 in fputc () from /lib64/libc.so.6
> #1 0x0040dbd2 in print_aligned_text ()
> #2 0x0040f10b in printTable ()
> #3 0x0041020b in printQuery ()
> #4
On 8/22/07, Dmitry Koterov <[EMAIL PROTECTED]> wrote:
> Also, the controller is configured to use 75% of its memory for write
> caching and 25% - for read caching. So reads cannot flood writes.
128 Meg is a pretty small cache for a modern RAID controller. I
wonder if this one is just a dog perfor
On Aug 22, 2007, at 1:02 , Frank Millman wrote:
I want to store data in a 'tree' form, with a fixed number of
levels, so
that each level has a defined role.
First thought: fixed, predetermined levels, separate tables for each
level. If a more general approach is desired, your options are
On 8/22/07, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> Geoffrey wrote:
> > We are looking for a reporting tool that will enable users to generate
> > their own reports. Something like Crystal Reports.
> >
> > Anyone using something like this wi
Joshua D. Drake wrote:
Geoffrey wrote:
We are looking for a reporting tool that will enable users to generate
their own reports. Something like Crystal Reports.
Anyone using something like this with Postgresql?
Why not Crystal Reports?
My bad
On 08/22/07 17:45, Dmitry Koterov wrote:
> Also, the controller is configured to use 75% of its memory for write
> caching and 25% - for read caching. So reads cannot flood writes.
That seems to be a very extreme ratio. Most databases do *many*
times
Hello.
I have a web-server with php 5.2 connected to a postgres 8.0 backend. Most
of the queries the users are doing are SELECTs (100-150 per second for
100 concurrent users), with 5-10 INSERTs/UPDATEs at the same time. There
is also a daemon running in the background doing some work once e
Geoffrey wrote:
> We are looking for a reporting tool that will enable users to generate
> their own reports. Something like Crystal Reports.
>
> Anyone using something like this with Postgresql?
>
Why not Crystal Reports?
Joshua D. Drake
--
We are trying to perform a 'reindex database' and it will fail at
varying points with a message like:
ERROR: could not open relation with OID 587495058
or
ERROR: could not open relation with OID 587603875
When we queried pg_class we got no rows
We are looking for a reporting tool that will enable users to generate
their own reports. Something like Crystal Reports.
Anyone using something like this with Postgresql?
--
Until later, Geoffrey
Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neithe
Also, the controller is configured to use 75% of its memory for write
caching and 25% - for read caching. So reads cannot flood writes.
On 8/23/07, Dmitry Koterov <[EMAIL PROTECTED]> wrote:
>
> > I have written a small perl script to check how slow is fsync for Smart
> > Array E200i controller. Theoretically, because of write cache, fsync MUST
> > cost nothing, but in practice it is not true
>
> That theory is fundamentally flawed; you don't know what else is in the
> operating syst
Oh! Thanks!
delete from pg_ts_cfgmap where dict_name = ARRAY['ru_stem'];
solves the root of the problem. But unfortunately
russian.med(ru_ispell_cp1251) contains all Russian names, so "Ivanov"
is converted to
"Ivan" by ispell too. :-(
Now
select lexize('ru_ispell_cp1251', 'Дмитриев') -> "Дмитри
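A consolidated sketch of the tsearch2-era (pre-8.3) steps discussed in this thread; the catalog layout is tsearch2's, not the later built-in text search, and the exact column set may vary by tsearch2 version:

```sql
-- Inspect which dictionaries each token type maps to.
SELECT ts_name, tok_alias, dict_name FROM pg_ts_cfgmap;

-- Drop the snowball stemmer mapping so only the ispell dictionary
-- is consulted (the fix quoted above).
DELETE FROM pg_ts_cfgmap WHERE dict_name = ARRAY['ru_stem'];

-- Check what a single dictionary produces for one word.
SELECT lexize('ru_ispell_cp1251', 'Ivanov');
```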
Matthew wrote:
> Hey all, new postgres user here. We are trying to setup/research an
> HA/Replicated solution with PostgreSQL between a datacenter in LA and a
> d.c. in NY.
>
> We have a private LAN link between the two D.C.'s with a max round-trip
>
Hey all, new postgres user here. We are trying to setup/research an
HA/Replicated solution with PostgreSQL between a datacenter in LA and a
d.c. in NY.
We have a private LAN link between the two D.C.'s with a max round-trip
of 150ms.
We will have a web server at each d.c. (among other servers) t
On Wed, 22 Aug 2007, Dmitry Koterov wrote:
I have written a small perl script to check how slow is fsync for Smart
Array E200i controller. Theoretically, because of write cache, fsync MUST
cost nothing, but in practice it is not true
That theory is fundamentally flawed; you don't know what els
10 days is not suspicious at all if you need to pull out text for
indexing using complex logic and/or schema (i.e. most of the time you
retrieve text, not index it). Example: you index some tree leaves
(i.e. table with 3 columns: id, parent_id and name) and want to have
redundant text index. You th
Hey Ted, thanks for the reply,
In respect of web application architecture, I'm fully aware of session
persistence mechanisms (I work as a developer of web apps after all).
What I really would like to know is the inner-workings of the set_session_id
and current_session_id as well as reset_session_
Murali Maddali wrote:
This is what I am doing: I am reading the data from SQL Server 2005 and
dumping it out to a PostgreSQL 8.2 database.
while (r.Read())
    _save(r, srcTblSchema, destTbl, destConn);
r.Close();
As you requested, here's what bt in gdb reports:
(gdb) bt
#0 0x003054264571 in fputc () from /lib64/libc.so.6
#1 0x0040dbd2 in print_aligned_text ()
#2 0x0040f10b in printTable ()
#3 0x0041020b in printQuery ()
#4 0x00407906 in SendQuery ()
#5 0x00
On Wed, 22 Aug 2007, Dmitry Koterov wrote:
Suppose I cannot add such synonyms, because:
1. There are a lot of surnames, cannot take care about all of them.
2. After adding a new surname I have to re-calculate all full-text indices,
it costs too much (about 10 days to complete the recalculation)
Suppose I cannot add such synonyms, because:
1. There are a lot of surnames, cannot take care about all of them.
2. After adding a new surname I have to re-calculate all full-text indices,
it costs too much (about 10 days to complete the recalculation).
So, I need exactly what I asked - switch OFF
On Wed, 22 Aug 2007, Dmitry Koterov wrote:
Hello.
We use ispell dictionaries for tsearch2 (ru_ispell_cp1251).
Now Snowball stemmer is also configured.
How to properly switch OFF Snowball stemmer for Russian without turning off
ispell stemmer? (It is really needed, because "Ivanov" is not the
Hello.
We use ispell dictionaries for tsearch2 (ru_ispell_cp1251).
Now Snowball stemmer is also configured.
How to properly switch OFF Snowball stemmer for Russian without turning off
ispell stemmer? (It is really needed, because "Ivanov" is not the same as
"Ivan".)
Is it enough and correct to s
This script is here:
postgresql-8.2.3\src\tools\fsync\test_fsync.c
On 8/22/07, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> Hi,
>
> On 23/08/07, Dmitry Koterov <[EMAIL PROTECTED]> wrote:
> > And here are results of built-in Postgres test script:
>
> Can you tell me how I can execute thi
Hello Group,
I have asked this question already on the NpgSql forum, but didn't get a
response so far. Sorry for cross posting, but want to check if any one had
any suggestions for my problem.
I am trying to do my updates through NpgsqlDataAdapter (I also tried with
Odbc driver with no luck) by
Hi,
On 23/08/07, Dmitry Koterov <[EMAIL PROTECTED]> wrote:
> And here are results of built-in Postgres test script:
>
Can you tell me how I can execute this script on my system? Where is
this script?
Thanks!
And here are results of built-in Postgres test script:
Simple write timing:
write    0.006355
Compare fsync times on write() and non-write() descriptor:
(If the times are similar, fsync() can sync data written
on a different descriptor.)
write, fsync, close 0
On Wed, Aug 22, 2007 at 07:07:20PM +0200, Christian Schröder wrote:
> These are the current settings from the server configuration:
> shared_buffers = 3GB
this is *way* too much. i would suggest lowering it to 1gig *at most*.
> max memory size (kbytes, -m) 3441565
this looks like to
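The advice above boils down to a one-line postgresql.conf change (8.2 accepts memory units); the effective_cache_size line is an illustrative assumption about telling the planner how much OS page cache is typically available:

```ini
shared_buffers = 1GB          # down from 3GB, per the advice in this thread
effective_cache_size = 4GB    # illustrative: approximate OS page cache size
```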
Hi list,
I am struggling with some "out of memory" errors in our PostgreSQL
database which I do not understand. Perhaps someone can give me a hint.
The application which causes the errors runs multi-threaded with 10
threads. Each of the threads performs several select statements on the
database
On Tuesday 21 August 2007 11:40 pm, Tom Lane wrote:
> "Ed L." <[EMAIL PROTECTED]> writes:
> > Are there
> > are any known or obvious gotchas associated with
> > transforming a unique index on a non null column into a
> > primary key via this sql?
> >
> > update pg_index
> > set indisprimary = 't'
>
On 8/22/07, Dmitry Koterov <[EMAIL PROTECTED]> wrote:
> Hello.
> You see, 50M block was fsynced for 0.25 s.
>
> The question is: how to solve this problem and make fsync run with no delay.
> Seems to me that controller's internal write cache is not used (strange,
> because all configuration options
Hello.
We are trying to use HP CISS controller (Smart Array E200i) with internal
cache memory (100M for write caching, built-in power battery) together with
Postgres. Typically under a heavy load Postgres runs checkpoint fsync very
slow:
checkpoint buffers dirty=16.8 MB (3.3%) write=24.3 ms sync=6
http://svr5.postgresql.org/pgsql-hackers/2006-03/msg01198.php
http://www.mail-archive.com/pgsql-general@postgresql.org/msg90845.html
This is related to an autovacuum bug and the freeze logic with
template0, and probably a corrupted pg_statistic table in template0;
you should upgrade AND re
Thanks for your replies so far, it seems like the problem has been solved,
for now!
> On 8/22/07, Patrick Lindeman <[EMAIL PROTECTED]> wrote:
>> Hi,
>>
>> While checking the logfiles of our postgresql server I noticed the error
>> mentioned in the subject:
>>
>> ERROR: could not access status of
Bill Thoen <[EMAIL PROTECTED]> writes:
> My PostgreSQL is working great for small SQL queries even from my large
> table (18 million records). But when I ask it to retrieve anything that
> takes it more than 10 minutes to assemble, it crashes with this
> "Segmentation Fault" error. I get so little
On 8/22/07, Patrick Lindeman <[EMAIL PROTECTED]> wrote:
> Hi,
>
> While checking the logfiles of our postgresql server I noticed the error
> mentioned in the subject:
>
> ERROR: could not access status of transaction 3221180546
> DETAIL: could not open file "pg_clog/0BFF": No such file or directo
Martijn van Oosterhout wrote:
> That said, it would be nice if it returned an error instead of
> crashing.
In my opinion it isn't just a matter of "would be nice". It is a
possible bug that should be investigated.
A look at a stack trace from the crashing process would be the first
place to
On Wed, Aug 22, 2007 at 07:09:22AM -0600, Bill Thoen wrote:
> PostgreSQL Version is 8.1.5, running on Linux (Fedora Core 6). The last few
> lines in the Serverlog are:
> LOG: unexpected EOF on client connection
> LOG: transaction ID wrap limit is 1073746500, limited by database
> "postgres"
> LOG
On Wed, Aug 22, 2007 at 09:46:21AM +1200, Andrej Ricnik-Bay wrote:
> On 8/22/07, Bill Thoen <[EMAIL PROTECTED]> wrote:
> > How would you suggest I try to track down this problem?
> > Any suggestions?
> postgres version?
> Operating system?
> Anything in the log(s)?
PostgreSQL Version is 8.1.5, run
Hi,
While checking the logfiles of our postgresql server I noticed the error
mentioned in the subject:
ERROR: could not access status of transaction 3221180546
DETAIL: could not open file "pg_clog/0BFF": No such file or directory
Searching on google told me that this could be solved by creatin
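The workaround the post alludes to is usually described as creating a zero-filled 256 kB segment in place of the missing clog file, so status lookups for the lost transactions stop erroring out (they then read as never committed). A last-resort sketch: stop the server and back up $PGDATA first; here a scratch directory stands in for the real data directory so the commands are safe to try:

```shell
# Work in a scratch directory; in the real case you would cd into
# $PGDATA with the server stopped.
cd "$(mktemp -d)" && mkdir pg_clog

# A clog segment is 256 kB; create a zero-filled stand-in for the
# missing file named in the error message.
dd if=/dev/zero of=pg_clog/0BFF bs=256k count=1

ls -l pg_clog/0BFF
```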
On Tue, Aug 21, 2007 at 04:38:42PM -0500, Scott Marlowe wrote:
> On 8/21/07, Bill Thoen <[EMAIL PROTECTED]> wrote:
> > How would you suggest I try to track down this problem?
> > I run the following query:
> >
> > SELECT a.* FROM compliance_2006 a, ers_regions b
> > WHERE a.fips_st_cd=b.fips_st
>
novice wrote:
I am trying to record the following entries into a table. I'm curious
to know if there's an efficient/effective way of doing this? These
entries come from an ancient datalogger (note: fields are separated by
spaces and dates use the YY/MM/DD format).
Plain file sample.dat
3665 OK
Hi
Just a hint.
We do this auto-partitioning with PgQ. Instead of writing records into the
table we push them into a queue and use a consumer called table_dispatcher
to create partitioned tables as needed and put records into them. We have
multiple destination databases to write data to and the target tab
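Without PgQ, the same dispatch idea can be sketched with plain table inheritance and a trigger (8.x style); every table and column name here is hypothetical:

```sql
-- Parent table plus one child per period (hypothetical schema).
CREATE TABLE events (logged_at date NOT NULL, payload text);
CREATE TABLE events_2007_08 (
    CHECK (logged_at >= DATE '2007-08-01' AND logged_at < DATE '2007-09-01')
) INHERITS (events);

-- Route inserts on the parent into the matching child.
CREATE OR REPLACE FUNCTION events_dispatch() RETURNS trigger AS $$
BEGIN
    IF NEW.logged_at >= DATE '2007-08-01'
       AND NEW.logged_at < DATE '2007-09-01' THEN
        INSERT INTO events_2007_08 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for %', NEW.logged_at;
    END IF;
    RETURN NULL;  -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_insert BEFORE INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE events_dispatch();
```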