Roberts, Jon wrote:
He's talking about having the raw database files on a file server (e.g. an
SMB share). DBs like Firebird and SQLite can handle this way of
accessing the data using their embedded engines.
Active-active, multiple-server databases are either a shared-nothing or
a shared-disk system
Yes, we are copying from pg_xlog. By doing so we let the WAL-segments fill up
(not using timeout) and we are able to recover within a 10 minute interval.
Could it be that this copy operation is causing the problem?
Per
-Original Message-
From: Magnus Hagander [mailto:[EMAIL PROTECTED]
On Tue, 3 Jun 2008, "Roberts, Jon" <[EMAIL PROTECTED]> writes:
> PostgreSQL does not have either a shared disk or shared nothing
> architecture.
But there are some workarounds for these obstacles:
- Using pgpool[1], sequoia[2], or similar tools[3] you can simulate a
"shared nothing" architectu
On Tuesday 03 June 2008 20:10, Steve Crawford wrote:
> Terry Lee Tucker wrote:
> > Greetings:
> >
> > I was wondering if anyone knows of a third party product that will
> > generate SQL statements for creating existing tables. We have to provide
> > table definition statements for our parent compan
Scott Marlowe wrote:
> On Tue, Jun 3, 2008 at 9:58 PM, Albretch Mueller
> <[EMAIL PROTECTED]> wrote:
> > On Tue, Jun 3, 2008 at 11:03 PM, Oliver Jowett
> <[EMAIL PROTECTED]> wrote:
> >> That's essentially the same as the COPY you quoted in your
> original email,
> >> isn't it? So.. what exactly
On Tue, Jun 3, 2008 at 9:58 PM, Albretch Mueller <[EMAIL PROTECTED]> wrote:
> On Tue, Jun 3, 2008 at 11:03 PM, Oliver Jowett <[EMAIL PROTECTED]> wrote:
>> That's essentially the same as the COPY you quoted in your original email,
>> isn't it? So.. what exactly is it you want to do that COPY doesn't
Albretch Mueller wrote:
On Tue, Jun 3, 2008 at 11:03 PM, Oliver Jowett <[EMAIL PROTECTED]> wrote:
That's essentially the same as the COPY you quoted in your original email,
isn't it? So.. what exactly is it you want to do that COPY doesn't do?
~
well, actually, not exactly; based on:
~
http:/
On Tue, Jun 3, 2008 at 11:03 PM, Oliver Jowett <[EMAIL PROTECTED]> wrote:
> That's essentially the same as the COPY you quoted in your original email,
> isn't it? So.. what exactly is it you want to do that COPY doesn't do?
~
well, actually, not exactly; based on:
~
http://postgresql.com.cn/docs/
> Justin wrote:
> >
> >
> > aravind chandu wrote:
> > Hi,
> > >> My question is
> > >>Microsoft SQL Server 2005 cannot be shared on multiple systems,
> > i.e. in a network environment, when it is installed on one system it
> > cannot be accessed from other systems.
> >
> >
> > This doesn't make any
Justin wrote:
aravind chandu wrote:
Hi,
>> My question is
>>Microsoft SQL Server 2005 cannot be shared on multiple systems,
i.e. in a network environment, when it is installed on one system it
cannot be accessed from other systems.
This doesn't make any sense. Are you talking about sharing
Terry Lee Tucker wrote:
Greetings:
I was wondering if anyone knows of a third party product that will generate
SQL statements for creating existing tables. We have to provide table
definition statements for out parent company. Any ideas?
Why 3rd party? How about:
pg_dump --schema-only -t
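For reference, a complete invocation of that suggestion might look like the
following (the table and database names here are placeholders, not from the
thread):

```shell
# Dump only the DDL (CREATE TABLE and related statements) for one table.
# "tbl_example" and "mydb" are hypothetical names.
pg_dump --schema-only -t tbl_example mydb > tbl_example_ddl.sql
```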
Greetings:
I was wondering if anyone knows of a third party product that will generate
SQL statements for creating existing tables. We have to provide table
definition statements for out parent company. Any ideas?
--
Terry Lee Tucker
Turbo's IT Manager
Turbo, division of Ozburn-Hessey Logistics
Henrik wrote:
Hi list,
I'm having a table with a lot of file names in it (approx 3 million) in
an 8.3.1 db.
Doing this simple query shows that the statistics are way off, but I can't
get them right even when I raise the statistics to 1000.
db=# alter table tbl_file alter file_name set statistic
aravind chandu wrote:
Hi,
>> My question is
>>Microsoft SQL Server 2005 cannot be shared on multiple systems,
i.e. in a network environment, when it is installed on one system it
cannot be accessed from other systems.
This doesn't make any sense. Are you talking about sharing the actual mdb
Excuse me, but maybe I'm misunderstanding your statements and questions
here?
MS SQL Server most certainly 'can be' accessed from a network; three ways
immediately come to mind:
- isql command line
- osql command line
- Perl using the DBI interface
ODBC drivers help in some configuration scenarios
> Microsoft
> sql server 2005
> cannot be shared on multiple systems, i.e. in a
> network environment, when
> it is installed on one system it cannot be accessed
> from other
> systems.
>
Nonsense!
Where did you get this stuff?
I have even played with MS SQL Server 2005 Express,
and it is not cri
At 4:15p -0400 on Tue, 03 Jun 2008, Aravind Chandu wrote:
> Is postgresql similar to sql server or does it support
> network sharing, i.e. can one access postgresql from any system
> irrespective of which system it is installed on.
Postgres is an open source project and similarly is not bound by t
On Tue, Jun 03, 2008 at 01:15:13PM -0700, aravind chandu wrote:
> Microsoft SQL Server 2005
> cannot be shared on multiple systems, i.e. in a network environment, when
> it is installed on one system it cannot be accessed from other
> systems. One can access it only from a system where it is alrea
Hi,
My question is
Microsoft SQL Server 2005
cannot be shared on multiple systems, i.e. in a network environment, when
it is installed on one system it cannot be accessed from other
systems. One can access it only from a system where it is already installed,
but not on the system
Hi,
I think this is the wrong list; it appears to be a PHP error.
Anyway, try to put the global $_SERVER['SCRIPT_NAME'] into {} brackets:
list($page_id)=sqlget("select page_id from pages where
name='{$_SERVER['SCRIPT_NAME']}'");
Hope you're not lost anymore ...
Ludwig
PJ wrote:
I'm using
All optimizer paths are in the function "standard_planner", which mainly
calls "subquery_planner", which takes the rewritten "Query" structure as
its main parameter. But the system provides another way if you want to write
your own optimizer, that is: define the global var "planner_hook" to yo
Bruce Momjian wrote:
Added to TODO:
* Allow XML to accept more liberal DOCTYPE specifications
Is any form of DOCTYPE accepted?
We're getting errors on the second line like this:
http://host.domain/dtd/dotdisposition0_02.dtd";>
The actual host.domain value is resolved by DNS,
and wget of th
Bruce Momjian wrote:
Added to TODO:
* Allow XML to accept more liberal DOCTYPE specifications
Is any form of DOCTYPE accepted?
We're getting errors on a second line in an XML document that
starts like this:
http://host.domain/dtd/dotdisposition0_02.dtd";>
The actual host.domain value is re
On Tue, 2008-06-03 at 13:06 -0500, Mason Hale wrote:
> On Tue, Jun 3, 2008 at 12:04 PM, Jeff Davis <[EMAIL PROTECTED]> wrote:
> > On Tue, 2008-06-03 at 09:43 -0500, Mason Hale wrote:
> >> I've been working on partitioning a rather large dataset into multiple
> >> tables. One limitation I've run i
On Tue, Jun 3, 2008 at 12:04 PM, Jeff Davis <[EMAIL PROTECTED]> wrote:
> On Tue, 2008-06-03 at 09:43 -0500, Mason Hale wrote:
>> I've been working on partitioning a rather large dataset into multiple
>> tables. One limitation I've run into is the lack of cross-partition-table
>> unique indexes. In my
On Tue, 2008-06-03 at 09:43 -0500, Mason Hale wrote:
> I've been working on partitioning a rather large dataset into multiple
> tables. One limitation I've run into is the lack of cross-partition-table
> unique indexes. In my case I need to guarantee the uniqueness of a
> two-column pair across all pa
A few weeks back one of my PostgreSQL servers logged the following errors
during the nightly dump:
pg_dump: ERROR: invalid memory alloc request size 4294967293
pg_dump: SQL command to dump the contents of table "" failed: PQendcopy()
failed.
pg_dump: Error message from server: ERROR: inva
I'm using php5, postgresql 8.3, apache2.2.8, FreeBSD 7.0
I don't understand the message:
*Parse error*: syntax error, unexpected T_ENCAPSED_AND_WHITESPACE,
expecting T_STRING or T_VARIABLE or T_NUM_STRING
the guilty line is:
list($page_id)=sqlget("
select page_id from pages where name=
I have implemented partitioning using inheritance following the proposed
solution here (using trigger):
http://www.postgresql.org/docs/8.3/interactive/ddl-partitioning.html
My problem is that when my Hibernate application inserts to the master
table, postgres returns "0 rows affected", which caus
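For context, the trigger pattern from the cited docs looks roughly like this
(a simplified sketch; table names are hypothetical). The RETURN NULL is what
makes the master table report "0 rows affected" to the client:

```sql
-- Redirect inserts on the master into a child partition,
-- per the trigger-based pattern in the 8.3 partitioning docs.
CREATE OR REPLACE FUNCTION master_insert_trigger() RETURNS trigger AS $$
BEGIN
    INSERT INTO master_2008_06 VALUES (NEW.*);
    RETURN NULL;  -- the row is not stored in the master itself,
                  -- hence the "0 rows affected" result seen by Hibernate
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_master_trigger
    BEFORE INSERT ON master
    FOR EACH ROW EXECUTE PROCEDURE master_insert_trigger();
```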
I've been working on partitioning a rather large dataset into multiple
tables. One limitation I've run into is the lack of cross-partition-table
unique indexes. In my case I need to guarantee the uniqueness of a
two-column pair across all partitions -- and this value is not used to
partition the table
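One workaround sometimes used for this (not stated in the message; all names
below are hypothetical) is a single non-partitioned side table that owns the
unique constraint for the two-column pair:

```sql
-- Side table that enforces uniqueness across all partitions.
CREATE TABLE all_keys (
    col_a integer NOT NULL,
    col_b integer NOT NULL,
    UNIQUE (col_a, col_b)
);
-- Every insert into any partition also inserts the pair here (e.g. from
-- the partition trigger or the application); a duplicate pair then fails
-- with a unique-violation error regardless of which partition it targets.
```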
On Tue, 3 Jun 2008, Rob Johnston wrote:
> Just wondering if this is expected behaviour. When executing a query in
> the form of:
>
> select column from table join table using (column) and column = clause
>
> pgsql (8.2) returns the following: syntax error at or near "and"
>
> Obviously, you can ge
On Tue, Jun 3, 2008 at 7:41 AM, Henrik <[EMAIL PROTECTED]> wrote:
>
> To be able to handle versions we always insert new folders even though
> nothing has changed, but it seemed like the best way to do it.
>
> E.g
>
> First run:
>tbl_file 500k new files.
>tbl_folder 50k new rows.
>
Per Lauvås wrote:
> Hi
>
> I am running Postgres 8.2 on Windows 2003 server SP2.
>
> Every now and then (2-3 times a year) our Postgres service is down
> and we need to manually start it. This is what we find:
>
> In log when going down:
> 2008-06-02 13:40:02 PANIC: could not open file
> "pg_xl
Hi list,
I'm having a table with a lot of file names in it (approx 3 million)
in an 8.3.1 db.
Doing this simple query shows that the statistics are way off, but I can't
get them right even when I raise the statistics to 1000.
db=# alter table tbl_file alter file_name set statistics 1000;
ALTER
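Note that a raised per-column statistics target only takes effect once the
table is re-analyzed; a minimal sketch using the names from the message:

```sql
-- Raise the sample size for this column's statistics ...
ALTER TABLE tbl_file ALTER COLUMN file_name SET STATISTICS 1000;
-- ... then rebuild them; without ANALYZE the new target has no effect.
ANALYZE tbl_file;
```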
On 3 Jun 2008, at 15.23, Bill Moran wrote:
In response to Henrik <[EMAIL PROTECTED]>:
We are running a couple of 8.3.1 servers and they are growing a lot.
I have the standard autovacuum settings from the 8.3.1 installation
and we are inserting about 2-3 million rows every night and cleaning
out j
In response to Henrik <[EMAIL PROTECTED]>:
>
> We are running a couple of 8.3.1 servers and they are growing a lot.
>
> I have the standard autovacuum settings from the 8.3.1 installation
> and we are inserting about 2-3 million rows every night and cleaning
> out just as many every day.
Is t
Hi
I am running Postgres 8.2 on Windows 2003 server SP2.
Every now and then (2-3 times a year) our Postgres service is down and we need
to manually start it. This is what we find:
In log when going down:
2008-06-02 13:40:02 PANIC: could not open file
"pg_xlog/0001001C0081" (log fi
Just wondering if this is expected behaviour. When executing a query in
the form of:
select column from table join table using (column) and column = clause
pgsql (8.2) returns the following: syntax error at or near "and"
Obviously, you can get around this by using "where" instead of "and",
bu
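For what it's worth, the "where instead of and" fix being alluded to looks
like this in full (hypothetical tables t1/t2 joined on a column id):

```sql
-- USING takes only the join column list; any extra predicate goes in WHERE.
SELECT t1.id, t2.val
FROM t1
JOIN t2 USING (id)
WHERE t2.val = 42;
```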
In response to "Kimball Johnson" <[EMAIL PROTECTED]>:
>
> What is the normal solution in pgsql-land for making a serious number of
> rows unique across multiple databases?
>
>
>
> I mean particularly databases of different types (every type) used at
> various places (everywhere) on all platfo
Hi List,
We are running a couple of 8.3.1 servers and they are growing a lot.
I have the standard autovacuum settings from the 8.3.1 installation
and we are inserting about 2-3 million rows every night and cleaning
out just as many every day.
The database size rose to 80GB but after a dump/
I recall I came across similar issue on older (8.1 or 8.2) versions of
PostgreSQL some time ago. DB was pretty small so I dump-restored it
eventually, but it looks like a bug anyway.
I cannot reproduce it at 8.3.
--
Regards,
Ivan
On Mon, Jun 2, 2008 at 7:12 PM, Maxim Boguk <[EMAIL PROTECTED]>
>
>
> Ahh. I think you can use this effectively but not the way you're
> describing.
>
> Instead of writing the WAL directly to persistentFS, what I think you're
> better
> off doing is treating persistentFS as your backup storage. Use "Archiving"
> as
> described here to archive the WAL files to pe
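The archiving setup being suggested comes down to a couple of
postgresql.conf settings; a sketch (the target path is hypothetical;
%p and %f are expanded by the server):

```
# postgresql.conf (8.3): copy each completed WAL segment to the
# persistent filesystem instead of keeping pg_xlog there directly.
archive_mode = on
archive_command = 'cp %p /mnt/persistentfs/wal_archive/%f'
```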
"Ram Ravichandran" <[EMAIL PROTECTED]> writes:
> The problem that I am facing is that EC2 has no persistent storage (at least
> currently). So, if the server restarts for some reason, all data on the
> local disks are gone. The idea was to store the tables on the non-persistent
> local disk, and d
Hello everyone,
Below is a job offer involving PostgreSQL.
EDD, a leading company in the distribution of digitized press content
and a privileged partner of French press companies, is looking for a
PostgreSQL database architect/administrator.
Au se
On Tue, 2008-06-03 at 00:04 -0400, Ram Ravichandran wrote:
> This seems like a much better idea. So, I should
> a) disable synchronous_commit
> b) set wal_writer_delay to say 1 minute (and leave fsync on)
> c) symlink pg_xlog to the PersistentFS on S3.
>
a) sounds good. b) has a max setting o
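For reference, (a) and (b) are plain postgresql.conf settings; a sketch with
illustrative values (note that wal_writer_delay is capped at 10000 ms in
8.3, so a full minute is not actually reachable):

```
# postgresql.conf sketch (values illustrative)
synchronous_commit = off       # (a) commits don't wait for the WAL flush
wal_writer_delay = 10000ms     # (b) the 8.3 maximum, i.e. 10 seconds
fsync = on                     # left on, as suggested
```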