But these values might include numbers incurred by other concurrent sessions.
Is there any clean way to focus entirely on the performance of
logical decoding?
Regards,
Weiping
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
, if there is no space, then a few WAL records would
be moved to the kernel cache (buffer cache). Shall I also set
vm.dirty_background_ratio = 5 and vm.dirty_ratio = 80 to avoid disk I/O?
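For reference, those writeback knobs live in sysctl; a sketch of the setting being asked about (assumption: Linux; the values are the ones from the question, not a recommendation):

```ini
# /etc/sysctl.conf fragment (Linux): start background writeback once 5% of
# memory holds dirty pages, but only force writers to flush synchronously
# once dirty pages reach 80% of memory.
vm.dirty_background_ratio = 5
vm.dirty_ratio = 80
```

Note this delays writeback rather than avoiding it; the data still reaches disk eventually.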
Looking forward to your kind help.
Best,
Weiping
based change data capture.
Weiping
On 27.10.2017 14:03, Francisco Olarte wrote:
On Fri, Oct 27, 2017 at 12:04 PM, Weiping Qu <q...@informatik.uni-kl.de> wrote:
That's a good point and we haven't accounted for disk caching.
Is there any way to confirm this fact in PostgreSQL?
I doubt, as it
Weiping
On 27.10.2017 11:53, Francisco Olarte wrote:
On Thu, Oct 26, 2017 at 10:20 PM, Weiping Qu <q...@informatik.uni-kl.de> wrote:
However, the plots showed that different overhead is incurred on
PostgreSQL, which leads to higher throughput.
That's the reason why I asked the previous question, whether a logical
slot is implemented as a queue.
Without continuous dequeuing, the "queue" gets larger and larger, thus
slowing down the OLTP workload.
Regards
Dear postgresql community,
I have a question regarding understanding the implementation logic
behind logical replication.
Assuming a replication slot is created on the master node, will more and more
data pile up in the slot, and will the size of the replication slot
continuously increase, if there is
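In case it helps, the WAL a slot forces the server to retain can be measured directly; a sketch, assuming PostgreSQL 9.4+ (where pg_replication_slots and pg_xlog_location_diff exist):

```sql
-- Sketch: how much WAL each slot is holding back. While no client consumes
-- the slot, restart_lsn stands still and this number keeps growing.
SELECT slot_name,
       pg_xlog_location_diff(pg_current_xlog_location(), restart_lsn)
           AS retained_bytes
FROM pg_replication_slots;
```

(On PostgreSQL 10+ the functions are named pg_wal_lsn_diff and pg_current_wal_lsn instead.)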
Hello Artur,
Thank you for your reply.
Should it work in a stable version like PostgreSQL 9.4? That would be
enough for me; I don't care whether it's 9.6 or 9.5.
Nevertheless I will try it using 9.4.
Regards,
Weiping
On 01.03.2016 22:04, Artur Zakirov wrote:
Hello, Weiping
It seems the txn->commit_time returned was
always 0.
Could you help me by indicating what could be wrong in my case? Any
missing parameter settings?
Thank you in advance,
Kind Regards,
Weiping
I think bytea is a little bit slower than large objects.
Regards
Laser
If speed (add/get) is the only concern, image files could be big (~10 MB),
and the database only serves as storage. In PostgreSQL 8, which type
(bytea vs. large object) is preferred? Is it true, in general,
that bytea
wrote:
hi, pgsql-general,
I am a Chinese user. I have installed PostgreSQL 8.0 for Windows on my
computer, and it's very good.
But I found a problem: when I use select * from dcvalue where
text_value='' to search for records,
the system returns no results.
Seems like your locale setting doesn't match.
Hi,
A problem we are facing is that although we REVOKE ALL from
one database user, he can still see that the table exists
in psql using the \dt command, but he can't SELECT from it,
of course. How could we hide the table name listing from
him?
We are using 7.4.x and 8.0 beta, with ODBC, JDBC and libpq.
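For what it's worth, \dt gets its list from the system catalogs, which are world-readable, so REVOKE on the table itself cannot hide the name. Roughly (a sketch, not the exact query psql issues):

```sql
-- Approximately what \dt consults: pg_class is readable by every user,
-- regardless of privileges on the tables it describes.
SELECT c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```

So hiding the listing would require keeping the user out of the database entirely, not just revoking table privileges.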
hi,
I'm using a CVS-source-built postgres, maybe one day behind
the main site, but found one problem:
I've set the PGCLIENTENCODING environment variable before, for ease of
typing, like export PGCLIENTENCODING=GBK in my .profile,
but after I upgraded my postgresql to current CVS, I found a
problem, the
It's slow due to several things happening all at once. There are a lot
of inserts and updates happening. There is periodically a bulk insert
of 500k - 1 million rows. I'm doing a vacuum analyze every
hour due to the amount of transactions, and a vacuum full
every night. All this
Hi,
while upgrading to 8.0 (beta3) we got a problem:
we have a database whose encoding is UNICODE;
when we do queries like:
select upper(''); -- select some multibyte character,
then postgresql responds:
ERROR: invalid multibyte character for locale
but when we do it in a SQL_ASCII encoding
Tom Lane wrote:
What locale did you initdb in? The most likely explanation for this
is that the LC_CTYPE setting is not unicode-compatible.
emm, I ran initdb --no-locale, which means LC_CTYPE=C, but if I don't use it
there are
some other issues in multibyte comparison (the = operator), will
Weiping wrote:
Tom Lane wrote:
What locale did you initdb in? The most likely explanation for this
is that the LC_CTYPE setting is not unicode-compatible.
finally I got it to work: at initdb time, we should use a matched locale
setting and database encoding, like:
initdb --locale=zh_CN.utf8 -E
could phppgadmin serve your purpose?
http://phppgadmin.sourceforge.net/
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
joining column's datatypes do not match
Tom Lane wrote:
weiping he [EMAIL PROTECTED] writes:
txn1: txn2:
begin; begin;
update table_a set col= col + 1; update table_a set col = col + 1;
end; end;
if two transactions begin at exactly the same time,
what's the result of 'col' after both transactions have committed
in Read Committed
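As an aside, the answer for Read Committed: the second UPDATE blocks on the first transaction's row lock, then re-reads the newly committed row and applies its own increment, so neither update is lost. A sketch of the interleaving:

```sql
-- Session 1                        -- Session 2
BEGIN;                              BEGIN;
UPDATE table_a                      UPDATE table_a
  SET col = col + 1;                  SET col = col + 1;  -- blocks on session 1's row lock
COMMIT;                             -- unblocks: re-reads the committed row,
                                    -- applies its own +1 on top of it
                                    COMMIT;
-- Net effect under Read Committed: col has increased by 2.
```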
suppose I've got two tables:
laser_uni=# \d t1
     Table "public.t1"
 Column | Type | Modifiers
--------+------+-----------
 name   | text |
 addr   | text |
laser_uni=# \d t2
     Table "public.t2"
 Column |  Type   | Modifiers
--------+---------+-----------
 name   | text    |
 len    | integer |
while removing --enable-thread-safety, everything is OK.
What's the matter?
The error output:
---8<---
make[2]: Entering directory `/usr/laser/postgresql-7.4beta1/src/port'
gcc -O2 -g -Wall -Wmissing-prototypes
LitelWang wrote:
It is useful for me to use Chinese tone sort order .
Any version on Cygwin?
Thanks for any advice .
I have never tried GB18030 on Cygwin, but on Linux or other Unix systems,
you may use GB18030 as the client-side encoding and UNICODE as the
backend encoding, and it works pretty well.
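A minimal sketch of that setup, assuming the database was created with a UNICODE backend encoding:

```sql
-- Per session: ask the server to convert between its UNICODE (UTF-8)
-- backend encoding and a GB18030 client.
SET client_encoding TO 'GB18030';
SHOW client_encoding;
```

The same effect can be had without the SET by exporting PGCLIENTENCODING=GB18030 in the client's environment.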
Tom Lane wrote:
Weiping He [EMAIL PROTECTED] writes:
I've met a weird problem on a Solaris 8/sparc box with postgresql 7.3.3:
the server would automatically shut down after a period of time of no
activity. The log shows something like this:
pmdie 2
Assuming signal 2 is SIGINT
Tom Lane wrote:
Weiping He [EMAIL PROTECTED] writes:
Later I used:
pg_ctl start > pgrun.log 2>&1
to start the program, and it runs OK. But then the pmdie 2...
Hm. My first thought was that you needed a /dev/null in there too,
but it looks like pg_ctl does that for you. The other likely
Firestar wrote:
Hi,
I'm currently using PostgreSQL 7.0 on Solaris. My Java program receives
strings in Big5
encoding and stores them in PostgreSQL (via JDBC). However, the inserted
strings become
multiple '?' (question marks) every time I do an insert command. And
when I
Danny wrote:
- Hello
- I had previous experience with Access and MySQL.
- Situation
- I am trying to create the equivalent of the following, which is a mysql
command.
- Question
- But I cannot figure out how to do this in postgresql
"mysql -u root -p mydb < mydb.dump"
I think:
psql -u