liuyuanyuan wrote:
> By the way, my project is about migrating Oracle data of BLOB type to a
> PostgreSQL database. The out of memory error occurred while migrating
> Oracle BLOB to PostgreSQL bytea. Another question: if I can't migrate BLOB to
> bytea,
> how about the oid type?
Large Objects (I g
The company I work for has a client who has expressed interest in having a
staging database for their data. Staging as in they add data, do some
QCing, then push to a database with an identical schema to make it public.
Fortunately, we're not doing that right now, but it's something they may
want l
Thanks. I increased the wal_keep_segments and it works well now.
On 7 Aug 2013, at 12:43 AM, Jerry Sievers wrote:
> "ascot.m...@gmail.com" writes:
>
>> Hi,
>>
>> I just tried another round of tests, without running "sync; echo 3 >
> /proc/sys/vm/drop_caches",
>> still got the same error,
Hi!
Thanks for all of your interest!
My test PC is Win7 (64-bit), and equipped with 8GB of memory.
In this Java project, I configured the VM options as:
-Djava.security.policy=applet.policy -Xms1280m -Xmx1536m.
And anything needed by the project is on the server described above.
I insert
By
Hi, Tomas Vondra!
Thanks for all of your help!
My test PC is Win7 (64-bit), and equipped with 8GB of memory.
In this Java project, I configured the VM options as:
-Djava.security.policy=applet.policy -Xms1280m -Xmx1536m.
Anything you want to know, just write to me!
Best Regards!
Liu Yuanyuan
On Tue, Aug 6, 2013 at 2:36 PM, Derek Poon wrote:
> The performance impact of the enhanced comparator would probably be
> negligible, compared to I/O bottlenecks. A bigger issue would be backwards
> compatibility, especially for ltrees with existing btree indexes.
>
> Feedback? Suggestions?
U
On Tue, Aug 6, 2013 at 4:17 PM, Carlos Henrique Reimer
wrote:
> I have tried dropping the index and running the reindex procedure, but both
> fail with the same exit code.
>
> Copied the data directory to another partition on the same HD, but got the
> same results.
>
> Next change window will install PG 8.2.23 on another
On 08/06/2013 04:17 PM, Carlos Henrique Reimer wrote:
Hi,
Thank you for the feedback!
I have tried dropping the index and running the reindex procedure, but both
fail with the same exit code.
Copied the data directory to another partition on the same HD, but got the same results.
Next change window will install PG
Hi,
Thank you for the feedback!
I have tried dropping the index and running the reindex procedure, but both
fail with the same exit code.
Copied the data directory to another partition on the same HD, but got the same results.
Next change window will install PG 8.2.23 on another Windows box and copy
the data director
Hi,
I'm looking for a data type to store numerically labelled hierarchical data,
such as section.subsection.paragraph numbers (e.g. '1.5.3', '1.10.2', '6.30').
The closest data type that I have found is ltree. However, the collation order
is inappropriate: it would put '1.10.2' before '1.5.3',
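The misordering can be seen concretely in a few lines. This sketch uses Python purely for illustration (it is not ltree code): a plain lexicographic sort compares character by character, so "1" sorts before "5" at the third position of '1.10.2' vs '1.5.3', while a per-level numeric key restores the intended order.

```python
# Section labels from the question; a string sort puts '1.10.2' before
# '1.5.3' because it compares characters, not per-level numbers.
labels = ["1.5.3", "1.10.2", "6.30"]
print(sorted(labels))  # ['1.10.2', '1.5.3', '6.30'] -- wrong for section numbers

# Comparing tuples of integers instead compares level by level numerically.
def numeric_key(label):
    return tuple(int(part) for part in label.split("."))

print(sorted(labels, key=numeric_key))  # ['1.5.3', '1.10.2', '6.30']
```

On the SQL side, the same per-level numeric comparison can be had without a dedicated type via something like `ORDER BY string_to_array(col, '.')::int[]`, since integer arrays compare element by element.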
On 8/6/2013 12:24 PM, John McKown wrote:
Too bad the PostgreSQL server cannot be in a "Federated" configuration
so that it "knows" which databases are controlled by which server and
automatically passes the requests around. Or does it, and I just can't
find it?
http://en.wikipedia.org/wiki/Fede
On Tue, Aug 06, 2013 at 12:08:57PM -0700, Steve Atkins wrote:
>
> You can't do it by sharing the disk files, at all. The two instances will
> trash each others data.
>
Right. This is why products that do this sort of hardware fail-over
have something akin to the "STONITH" (Shoot The Other Node
On 2013-08-06 15:28, John McKown wrote:
Me again. Perhaps what is needed, in this case, is for a "distributor"
which "looks like" a PostgreSQL server running on a given system (i.e.
it is listening on the default TCP/IP ports and UNIX sockets) but would simply act like a pipe to and
from t
Me again. Perhaps what is needed, in this case, is for a "distributor"
which "looks like" a PostgreSQL server running on a given system (i.e. it
is listening on the default TCP/IP ports and UNIX sockets) but would simply act like a pipe to and from the
real server running somewhere else.
--
A
OOPS - sent to Davide directly. Sorry about that. Didn't look at the To:
I'm new here today.
-- Forwarded message --
From: John McKown
Date: Tue, Aug 6, 2013 at 2:23 PM
Subject: Re: [GENERAL] Sharing data directories between machines
To: Davide Setti
Given that the user seems t
On 8/6/2013 10:45 AM, JD Wong wrote:
I tried moving the data directory over to the mounted drive, and
pointing both postgresql.confs to that one. I was able to have both
access the same databases, but they can't share changes. It's like
they're running on two separate data directories, even
On Aug 6, 2013, at 10:45 AM, JD Wong wrote:
> Hi all!
>
> I have two servers, which share a large mounted drive. I would like to share
> postgres databases between them dynamically so that when one makes changes,
> they are immediately available in the other.
>
> I tried moving the data d
You can't just make them share the data dir (for example: what about
caches in memory?)
Probably what you want is streaming replication:
http://wiki.postgresql.org/wiki/Streaming_Replication
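For reference, that wiki page boils down to a handful of settings. A minimal sketch for the 9.x-era servers discussed in this thread (the hostname, user name, and values here are illustrative only, not recommendations):

```
# master: postgresql.conf
wal_level = hot_standby
max_wal_senders = 3
wal_keep_segments = 32   # retain extra WAL so a lagging standby can catch up

# standby: postgresql.conf
hot_standby = on

# standby: recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'
```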
Regards.
On Tue, Aug 6, 2013 at 7:45 PM, JD Wong wrote:
> Hi all!
>
> I have two servers, which share a
Hi all!
I have two servers, which share a large mounted drive. I would like to
share postgres databases between them dynamically so that when one makes
changes, they are immediately available in the other.
I tried moving the data directory over to the mounted drive, and pointing
both postgresql.
Hi,
I'm trying to speed up 3 kinds of queries on citext arrays (PGSQL 9.2).
* element lookup (eg. 'value' = ANY("citext_array_col"))
* substring search in any element
* substring search at the beginning of any element
Are GIN indexes the right tool for this job? I tried building an
operator class usi
On Tue, Aug 6, 2013 at 7:04 AM, Tomas Vondra wrote:
> Hi,
>
> On 6 August 2013, 9:12, liuyuanyuan wrote:
> > Error detail:
> > org.postgresql.util.PSQLException: Error: out of memory
> > Details:Failed on request of size 268443660.
>
> Seems like an issue with the OS, not with PostgreSQL, to me.
"ascot.m...@gmail.com" writes:
> Hi,
>
> I just tried another round of tests, without running "sync; echo 3 >
> /proc/sys/vm/drop_caches",
> still got the same error, following FATAL errors are found in pg_log
> (slave), can anyone please advise how to resolve
> this error?
>
> regards
>
> LO
A (final?) version using COALESCE (It wasn't too long to
post at the blog now; I am also posting here for belt
and suspenders reasons...):
-- group_concat.sql
-- permutation of GROUP_CONCAT parameter types with
delimiter parameter furnished:
CR
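As a plain model of what such an aggregate computes, the following sketch (Python for illustration only; it is not the posted SQL) shows the usual GROUP_CONCAT behavior that the COALESCE handling is after: NULL inputs are skipped rather than turning the whole result NULL, and the delimiter parameter is optional.

```python
# Minimal model of GROUP_CONCAT with an optional delimiter parameter.
# NULLs (None here) are skipped, mirroring what COALESCE-based state
# transitions are used for in the SQL aggregate.
def group_concat(values, delimiter=","):
    return delimiter.join(str(v) for v in values if v is not None)

print(group_concat([1, None, 2, 3]))         # 1,2,3
print(group_concat(["a", "b", None], "; "))  # a; b
```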
Hi,
I just tried another round of tests, without running "sync; echo 3 >
/proc/sys/vm/drop_caches",
still got the same error, following FATAL errors are found in pg_log (slave),
can anyone please advise how to resolve this error?
regards
LOG: entering standby mode
LOG: consistent recovery
You're saying as opposed to straight SQL? I don't think so; but I had
defined it as such just in case there was some functional benefit that I
might be unaware of...
On 08/06/2013 01:26 AM, Alvaro Herrera wrote:
> Pavel Stehule escribió:
>
>> your code can be translated to
>
That is good and I will effect that change here; but as far as I can see
you will still need to permute the types of the arguments in that set of
functions in order for the aggregates to accept an int argument and an
optional text argument, and zap out a TEXT...
On 08/06/2
On Tue, Aug 6, 2013 at 8:45 AM, edlef wrote:
> Hi,
>
> I looked into other log files, but there is no regularity. I found intervals
> when it restarted 2-3 times in an hour.
> What I pasted was a full day's log entries, they were not snippets. There are
> no entries about the shutdown.
> I'll try to find
On 08/06/2013 07:45 AM, edlef wrote:
Hi,
I looked into other log files, but there is no regularity. I found
intervals when it restarted 2-3 times in an hour.
What I pasted was a full day's log entries, they were not snippets. There
are no entries about the shutdown.
I'll try to find when it all began.
Hi,
I looked into other log files, but there is no regularity. I found
intervals when it restarted 2-3 times in an hour.
What I pasted was a full day's log entries, they were not snippets. There
are no entries about the shutdown.
I'll try to find when it all began.
fenor
On 2013.08.06. 16:32,
On Tue, Aug 6, 2013 at 7:56 AM, edlef wrote:
> Hi,
>
> Our client has begun to complain in the past few days that the application
> repeatedly cannot connect to the database.
> I looked into the logs and as far as I can see, PostgreSQL restarts several
> times a day. I made a vacuum (full, freeze, analyze
On 08/06/2013 06:56 AM, edlef wrote:
Hi,
Our client has begun to complain in the past few days that the application
repeatedly cannot connect to the database.
I looked into the logs and as far as I can see, PostgreSQL restarts several
times a day. I made a vacuum (full, freeze, analyze) on the database
yes, I knew that, but thanks!
On 2013.08.06. 16:03, Klaus Ita wrote:
well, in your statement, the parameters are missing.
2013-08-05 10:18:01 STATEMENT: select distinct count(*) as rownum
from minta as m join partner as p on ( m.partnerid = p.partnerid )
join mintajelleg as mj on
On 08/06/2013 06:56 AM, edlef wrote:
Hi,
Our client has begun to complain in the past few days that the application
repeatedly cannot connect to the database.
I looked into the logs and as far as I can see, PostgreSQL restarts several
times a day. I made a vacuum (full, freeze, analyze) on the database
Hi,
On 6 August 2013, 9:12, liuyuanyuan wrote:
> Error detail:
> org.postgresql.util.PSQLException: Error: out of memory
> Details:Failed on request of size 268443660.
Seems like an issue with the OS, not with PostgreSQL, to me.
What OS and HW are you using? How much memory you have and do you
well, in your statement, the parameters are missing.
2013-08-05 10:18:01 STATEMENT: select distinct count(*) as rownum from
minta as m join partner as p on ( m.partnerid = p.partnerid ) join
mintajelleg as mj on ( mj.mintajellegid = m.mintajellegid ) where
m.status IN and m.mintajellegid =
Hi,
Our client has begun to complain in the past few days that the application
repeatedly cannot connect to the database.
I looked into the logs and as far as I can see, PostgreSQL restarts several
times a day. I made a vacuum (full, freeze, analyze) on the database and
reindexed everything, but it lo
>I got an out of memory problem when I tried to insert a binary file (256MB)
into a bytea column;
>I want to find a way to insert files (varying from 1 byte to 2GB), byte arrays,
or binary streams into a
>PostgreSQL bytea field without ever causing out of memory. The details follow.
>Anybody know about this, ple
(2013/08/06 20:19), Florian Weimer wrote:
The first file name resolution is slow, but subsequent resolutions typically
happen from the dentry cache. (The cache is not populated when the directory is
opened.)
I see. I understand why ext file systems are slow when we put a large number of
files.
Tha
On 08/06/2013 12:28 PM, KONDO Mitsumasa wrote:
(2013/08/05 20:38), Florian Weimer wrote:
On 08/05/2013 10:42 AM, John R Pierce wrote:
On 8/5/2013 1:01 AM, KONDO Mitsumasa wrote:
When we open a file, the ext3 or ext4 file system seems to search the
directory sequentially for the file's inode.
n
Thanks, there was a typo in the "hot_standby" line; it works now, thanks again.
On 6 Aug 2013, at 6:52 PM, Haribabu kommi wrote:
>
> On 06 August 2013 16:13 ascot.moss wrote
>> Hi,
>
>> I just set up the replication on the slave again; when trying to use psql, I
>> could not get the psql comman
(2013/08/06 19:33), Andres Freund wrote:
On 2013-08-06 19:19:41 +0900, KONDO Mitsumasa wrote:
(2013/08/05 21:23), Tom Lane wrote:
Andres Freund writes:
... Also, there are global
limits to the amount of filehandles that can simultaneously opened on a
system.
Yeah. Raising max_files_per_pro
On 06 August 2013 16:13 ascot.moss wrote
>Hi,
>I just set up the replication on the slave again; when trying to use psql, I
>could not get the psql command prompt but got "psql: FATAL: the database
>system is starting up" from it.
>PG: 9.2.4
>Below is the log from the slave:
>LOG: datab
Hi,
I just set up the replication on the slave again; when trying to use psql, I
could not get the psql command prompt but got "psql: FATAL: the database
system is starting up" from it.
PG: 9.2.4
Below is the log from the slave:
LOG: database system was shut down in recovery at 2013-08-0
On 2013-08-06 19:19:41 +0900, KONDO Mitsumasa wrote:
> (2013/08/05 21:23), Tom Lane wrote:
> > Andres Freund writes:
> >> ... Also, there are global
> >> limits to the amount of filehandles that can simultaneously opened on a
> >> system.
> >
> > Yeah. Raising max_files_per_process puts you at s
(2013/08/05 20:38), Florian Weimer wrote:
On 08/05/2013 10:42 AM, John R Pierce wrote:
On 8/5/2013 1:01 AM, KONDO Mitsumasa wrote:
When we open a file, the ext3 or ext4 file system seems to search the
directory sequentially for the file's inode.
no, ext3/4 uses H-tree structures to search direc
(2013/08/05 21:23), Tom Lane wrote:
> Andres Freund writes:
>> ... Also, there are global
>> limits to the amount of filehandles that can simultaneously opened on a
>> system.
>
> Yeah. Raising max_files_per_process puts you at serious risk that
> everything else on the box will start falling ov
(2013/08/05 19:28), Andres Freund wrote:
On 2013-08-05 18:40:10 +0900, KONDO Mitsumasa wrote:
(2013/08/05 17:14), Amit Langote wrote:
So, within the limits of max_files_per_process, the routines of file.c
should not become a bottleneck?
It may not become a bottleneck.
1 FD consumes 160 bytes in 6
Hello everyone!
I got an out of memory problem when I tried to insert a binary file (256MB) into
a bytea column;
I want to find a way to insert files (varying from 1 byte to 2GB), byte arrays, or
binary streams into a
PostgreSQL bytea field without ever causing out of memory. The details follow.
Anybody
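The usual remedy for this class of problem is to stream the binary in bounded chunks rather than materializing the whole file as a single byte array (in JDBC, PreparedStatement.setBinaryStream does this; PostgreSQL large objects are likewise read and written piecewise). The following database-free sketch (Python for illustration only; a file-to-file copy stands in for the insert) shows the chunking idea: peak memory is the chunk size, not the file size.

```python
# Sketch: copy a file in fixed-size chunks so peak memory stays at the chunk
# size, not the file size. The destination is just another file here; in the
# real workload it would be a streamed database write.
import os
import tempfile

CHUNK = 64 * 1024  # 64 KiB per read; peak memory is bounded by this

def stream_copy(src_path, dst_path, chunk_size=CHUNK):
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk)
            copied += len(chunk)
    return copied

# Demo on a small temporary file standing in for a large binary.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "src.bin")
    dst = os.path.join(d, "dst.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(200_000))
    n = stream_copy(src, dst)
    with open(src, "rb") as a, open(dst, "rb") as b:
        assert a.read() == b.read()
    print("copied", n, "bytes")
```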