Thank you all!
I've got it now.
Best regards
2013/10/10 Stuart Bishop stu...@stuartbishop.net
On Wed, Oct 9, 2013 at 9:58 AM, 高健 luckyjack...@gmail.com wrote:
The most important part is:
2013-09-22 09:52:47 JST[28297][51d1fbcb.6e89-2][0][XX000] FATAL: Could not receive data
(file shipping)?
Best Regards
jian gao
2013/10/9 Adrian Klaver adrian.kla...@gmail.com
On 10/08/2013 07:58 PM, 高健 wrote:
Hello:
My customer encountered connection timeouts while using one-primary,
one-standby streaming replication.
The original log is in Japanese, because
, 2013 at 12:02 PM, 高健 luckyjack...@gmail.com wrote:
Hello :
I found that PG 9.2.4 has the parameter max_wal_senders,
but there is no max_wal_receivers parameter.
max_wal_senders is the maximum number of WAL sender processes that a
primary server can create in response to requests
Hello:
Sorry for disturbing:
I have one question about checkpoints, that is: can checkpoints run in
parallel?
It is said that a checkpoint will be triggered by either of these conditions:
1) checkpoint_timeout seconds have passed since the last checkpoint.
2) When shared_buffers memory above
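As a sketch of those two triggers in 9.2-era terms (defaults taken from the documentation; illustrative, not any particular customer's configuration):

```sql
-- Sketch: the two checkpoint triggers, whichever fires first.
SHOW checkpoint_timeout;   -- time-based trigger, default 5min
SHOW checkpoint_segments;  -- WAL-volume trigger, default 3 segments (pre-9.5)
```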
Hello
My customer asked me about the relationship among the following PostgreSQL
processes:
wal writer process, writer process,
checkpoint process
My current understanding is:
If I execute some DML, then first the related operation or data will be
written to the WAL buffer.
Secondly, the related
Hello:
My customer encountered connection timeouts while using one-primary,
one-standby streaming replication.
The original log is in Japanese; because there are no error codes like Oracle's
ORA-xxx,
I tried to translate the Japanese information into English, but that might
not be correct
jian gao
2013/10/9 Jeff Janes jeff.ja...@gmail.com
On Tue, Oct 8, 2013 at 1:54 AM, 高健 luckyjack...@gmail.com wrote:
Hello:
Sorry for disturbing:
I have one question about checkpoints, that is: can checkpoints run in
parallel?
PostgreSQL does not currently implement it that way
Hello :
I found that for PG9.2.4, there is parameter max_wal_senders,
But there is no parameter of max_wal_receivers.
Is that to say that if max_wal_senders is 3,
then 3 WAL senders will be activated,
and then on the standby server there will be 3 receivers as
counterparts?
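A sketch of how the two sides pair up, assuming a 9.2-era setup: max_wal_senders only caps sender slots on the primary, while each standby runs exactly one walreceiver, which is why no max_wal_receivers knob is needed:

```sql
-- On the primary: the cap on concurrent WAL sender processes.
SHOW max_wal_senders;
-- One row per connected standby's walreceiver:
SELECT pid, state, client_addr FROM pg_stat_replication;
```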
Hello:
Sorry for disturbing,
In order to make my question clear,
I wrote this one as a separate question.
If using cgroups, I find wget works well.
But for PostgreSQL, when I deal with a huge amount of data, it still reports an
out-of-memory error. In fact I hope PostgreSQL can work under a limit
388
-/+ buffers/cache:272 1734
Swap: 4031129 3902
[postgres@cent6 Desktop]$
Best Regards
2013/9/9 高健 luckyjack...@gmail.com
Hello:
Sorry for disturbing,
In order to make my question clear,
I wrote this one as a separate question.
If using cgroup
not work.
Best Regards
2013/9/3 高健 luckyjack...@gmail.com
Thanks, I'll consider it carefully.
Best Regards
2013/9/3 Jeff Janes jeff.ja...@gmail.com
On Sun, Sep 1, 2013 at 6:25 PM, 高健 luckyjack...@gmail.com wrote:
To spare memory, you would want to use something like:
insert into test01 select generate_series,
repeat(chr(int4(random()*26)+65
Is the
ulimit command a good idea for PG?
Best Regards
2013/9/1 Jeff Janes jeff.ja...@gmail.com
On Fri, Aug 30, 2013 at 2:10 AM, 高健 luckyjack...@gmail.com wrote:
postgres=# insert into test01 values(generate_series(1,2457600),repeat(chr(int4(random()*26)+65),1024));
The construct values (srf1
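A hedged completion of that idea (my own sketch, not Jeff's exact text): moving the set-returning function out of the VALUES row and into a plain SELECT lets the rows stream to the table instead of being built up at once:

```sql
-- Sketch (assumption: this is the memory-friendly form Jeff suggests):
-- the SRF in FROM streams 2457600 rows one at a time.
INSERT INTO test01
SELECT gs, repeat(chr(int4(random()*26)+65), 1024)
FROM generate_series(1, 2457600) AS gs;
```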
Hello:
I have done the following experiment to test
PG's behavior when dealing with data bigger in size than the total
memory of the whole OS.
The result is:
PG says:
WARNING: terminating connection because of crash of another server process
DETAIL:
Hello:
Thank you all.
I have understood this.
Best Regards
2013/8/31 Kevin Grittner kgri...@ymail.com
高健 luckyjack...@gmail.com wrote:
So I think that in a mission-critical environment, it is not a
good choice to turn full_page_writes on.
If full_page_writes is off, your database can
,
the other data will still remain in the backend process's memory.
Is my understanding right?
Best Regards
2013/8/27 Jeff Janes jeff.ja...@gmail.com
On Sun, Aug 25, 2013 at 11:08 PM, 高健 luckyjack...@gmail.com wrote:
Hello:
Sorry for disturbing.
I am now encountering a serious problem: memory
Hello
Thanks for replying.
It is really a complicated concept.
So I think that in a mission-critical environment, it is not a good choice
to turn full_page_writes on.
Best Regards
2013/8/27 Jeff Janes jeff.ja...@gmail.com
On Sun, Aug 25, 2013 at 7:57 PM, 高健 luckyjack...@gmail.com wrote
Hello:
Sorry for disturbing.
I am now encountering a serious problem: memory is not enough.
My customer reported that when they run a program, the total
memory and disk I/O usage both reach the threshold value (80%).
That program is written in Java.
It uses JDBC to pull out data
page to WAL?
The id=1, val=1 data is very old, and not even read into memory.
Why should it go from disk to memory to WAL via the WAL writer?
Maybe I have many misunderstandings about it. Thanks for replying!
Best Regards
2013/8/23 Alvaro Herrera alvhe...@2ndquadrant.com
高健 wrote:
Hi:
Thank you all for kindly replying.
I think that I need this: pg_stat_user_tables.n_tup_hot_upd
And Adrian's information is a pretty good material for me to understand
the internal.
Best regards
2013/8/22 Adrian Klaver adrian.kla...@gmail.com
On 08/21/2013 07:20 PM, 高健 wrote
Hello:
Sorry for disturbing.
I have one question : Will checkpoint cause wal written happen?
I found the following info at:
http://www.postgresql.org/docs/9.2/static/wal-configuration.html
...
Checkpoints are fairly expensive, first because they require writing out
all currently dirty
Hi:
I have heard that Heap-Only Tuples were introduced in 8.3,
and I am searching for information about them.
How can I get detailed information about HOT?
For example:
for a given table, how many tuples are heap-only tuples, and how many
are not?
And also, are there any options which can
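One way to answer the per-table part, sketched from the statistics views (the table name is a placeholder; this is the n_tup_hot_upd counter the later reply settles on):

```sql
-- HOT vs. total update counts per table from the cumulative statistics.
SELECT relname, n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'tab01';  -- placeholder table name
```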
Hello:
I have found the following wiki about autonomous transaction:
https://wiki.postgresql.org/wiki/Autonomous_subtransactions
But when I test it, I found the following error:
pgsql=# BEGIN;
BEGIN
pgsql=# INSERT INTO tab01 VALUES (1);
INSERT 0 1
pgsql=# BEGIN SUBTRANSACTION;
Thank you !
Best Regards
2013/7/9 Adrian.Vondendriesch adrian.vondendrie...@credativ.de
Hello,
Am 09.07.2013 11:29, schrieb 高健:
Hello:
I have found the following wiki about autonomous transaction:
https://wiki.postgresql.org/wiki/Autonomous_subtransactions
But when I
Hello:
I have question for cmin and cmax.
It is said:
cmin is: The command identifier (starting at zero) within the
inserting transaction.
cmax is: The command identifier within the deleting transaction, or
zero.
http://www.postgresql.org/docs/9.1/static/ddl-system-columns.html
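A minimal sketch of cmin in action, reusing the tab01 table from elsewhere in the thread and assuming it starts empty; each statement in a transaction gets the next command identifier:

```sql
BEGIN;
INSERT INTO tab01 VALUES (1);      -- runs as command id 0
INSERT INTO tab01 VALUES (2);      -- runs as command id 1
SELECT cmin, cmax, * FROM tab01;   -- the two new rows show cmin 0 and 1
COMMIT;
```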
));
break;
code end---
2013/7/2 高健 luckyjack...@gmail.com
Hello:
I have question for cmin and cmax.
It is said:
cmin is: The command identifier (starting at zero) within the
inserting transaction.
cmax is: The command identifier within the deleting transaction
a process is working, it gets a transaction ID from the
system, then uses it as xmin when inserting a record.
Why can the proc's xmin be 0? Is it a bug?
2013/6/25 高健 luckyjack...@gmail.com
Hello:
Sorry for disturbing again.
I traced PG's source code and found that:
When the 「create index
(old_snapshots[i]) running, index
creation is blocked.
For similar SQL statements, the source code takes different paths, so I
think there might be something wrong in the source code.
2013/6/21 高健 luckyjack...@gmail.com
Thanks, Jeff.
But what I can't understand is:
In my first test
Hello:
I have a question about PG's CREATE INDEX CONCURRENTLY; I think it is
perhaps a bug.
I made two tables, tab01 and tab02; they have no relationship.
I think CREATE INDEX CONCURRENTLY on tab02 should not be influenced by a
transaction on tab01,
but the result differs:
My first
yours
Jian
2013/6/21 Jeff Janes jeff.ja...@gmail.com
On Thu, Jun 20, 2013 at 1:27 AM, 高健 luckyjack...@gmail.com wrote:
Hello:
I have a question about PG's CREATE INDEX CONCURRENTLY. I think it is
perhaps a bug.
I made two tables, tab01 and tab02; they have no relationship.
I think
Thank you. I think it is an exciting point for PG.
This makes it clever at reusing plans for frequently executed SQL.
Thanks!
2013/6/18 Albe Laurenz laurenz.a...@wien.gv.at
高健 wrote:
I changed my Java program by adding the following:
org.postgresql.PGStatement pgt = (org.postgresql.PGStatement)pst
Hello:
I have some questions about parameterized paths.
I have heard that they are a new feature in PG 9.2.
I dug for information about parameterized paths, but found little (maybe my
method is not right).
My FIRST question is:
What are parameterized paths for?
Is the following a correct example
when I supply a different parameter, it just executes the same
finished plan.
2013/6/19 Jeff Janes jeff.ja...@gmail.com
On Tue, Jun 18, 2013 at 2:09 AM, 高健 luckyjack...@gmail.com wrote:
postgres=# explain execute s(2);
QUERY PLAN
it and hold the plan.
Is my understanding right?
Thanks
2013/6/17 Albe Laurenz laurenz.a...@wien.gv.at
高健 wrote:
I have one question about prepared statement.
I use Java via JDBC
Frost sfr...@snowman.net
* 高健 (luckyjack...@gmail.com) wrote:
So I can draw a conclusion:
A prepared statement can only be used in the same session in which it was
created.
Prepared statements are session-local.
It cannot be shared across multiple sessions.
Correct
Hello:
I have one question about prepared statements.
I use Java via JDBC, then send a prepared statement to execute.
I thought that the pg_prepared_statements view would have one record after
my execution,
but I can't find it.
Is JDBC's prepared statement different from SQL executed by
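The session-local behavior can be sketched like this: pg_prepared_statements only lists statements prepared in the current session, so a JDBC connection's server-side prepares are invisible from a separate psql session (table and column names taken from the psql example below):

```sql
-- In the same session that runs PREPARE, the view shows one row:
PREPARE test(int) AS SELECT * FROM customers c WHERE c.cust_id = $1;
SELECT name, statement FROM pg_prepared_statements;
-- A different session running the same SELECT sees zero rows.
```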
Hello everybody:
Sorry for disturbing.
I experimented with PostgreSQL's prepared statements via psql and have one
question:
In terminal A:
I prepared:
postgres=# prepare test(int) AS
postgres-# select * from customers c where c.cust_id = $1;
PREPARE
postgres=#
Then run:
postgres=#
Hi :
Sorry for the late reply.
I took the COMMIT statement out of the function, and it works
well.
Thank you!
2013/6/10 Kevin Grittner kgri...@ymail.com
高健 luckyjack...@gmail.com wrote:
CREATE OR REPLACE Function ...
BEGIN
BEGIN
UPDATE ...
COMMIT
Hi :
Sorry for disturbing. I don't know if it is OK to put this question here.
I want to learn more about the hash join's cost calculation.
I found the following function in PostgreSQL 9.2.1, where the hash join cost
is calculated.
But what confused me is a reduction calculation:
sfr...@snowman.net writes:
* 高健 (luckyjack...@gmail.com) wrote:
Why the reduction is needed here for cost calculation?
cost_qual_eval(hash_qual_cost, hashclauses, root);
returns the costs for *just the quals which can be used for the
hashjoin*, while
cost_qual_eval
Hello:
Would somebody please kindly tell me why my function runs but can't update
the table via a cursor:
I have table like this:
create table course_tbl(course_number integer, course_name varchar(4),
instructor varchar(10));
insert into course_tbl values (1,'','TOM'), (2,'','JACK');
Hello:
I created a table, and found the file created for that table is about 10
times the size I estimated!
The following is what I did:
postgres=# create table tst01(id integer);
CREATE TABLE
postgres=#
postgres=# select oid from pg_class where relname='tst01';
oid
---
16384
(1 row)
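A sketch of how to check the discrepancy: each heap row carries a tuple header (roughly 24 bytes in 9.2) plus a 4-byte line pointer, so a single-integer column costs far more than 4 bytes per row, which alone accounts for most of a ~10x gap:

```sql
-- Compare the actual on-disk size with a naive 4-bytes-per-row estimate.
SELECT pg_relation_size('tst01') AS actual_bytes,
       reltuples * 4 AS naive_estimate
FROM pg_class WHERE relname = 'tst01';
```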
-autovacuum.html
2013/5/24 高健 luckyjack...@gmail.com
Hello all:
I found that while PostgreSQL is running, so many processes
are created and then die.
I am interested in the reason.
Here is the detail:
I installed from postgresql-9.2.1.tar.bz2.
I put some debug code in fd.c's
Hello all:
I found that while PostgreSQL is running, so many processes are
created and then die.
I am interested in the reason.
Here is the detail:
I installed from postgresql-9.2.1.tar.bz2.
I put some debug code in fd.c's PathNameOpenFile function:
fprintf(stderr, "+++While Calling
think.
I hope in the future the architecture of PostgreSQL can keep committed data
and uncommitted data apart,
or even put them on separate physical disks. That will help to improve
performance, I think.
Jian Gao
2012/11/9 Albe Laurenz laurenz.a...@wien.gv.at
高健 wrote:
I have one question about
Hi all:
I made partition tables:
postgres=# create table ptest(id integer, name varchar(20));
CREATE TABLE
postgres=# create table ctest01(CHECK(id<500)) inherits (ptest);
CREATE TABLE
postgres=# create table ctest02(CHECK(id>=500)) inherits (ptest);
CREATE TABLE
postgres=#
postgres=#
width=9)
Index Cond: (id = 600)
(14 rows)
postgres=#
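For reference, a sketch of the setting that lets the planner drop non-matching children entirely (constraint_exclusion = partition is the 9.2-era default for inheritance scans; table names taken from the example above):

```sql
SET constraint_exclusion = partition;
-- With CHECK constraints on ctest01/ctest02, children whose constraint
-- contradicts the WHERE clause are pruned from the plan:
EXPLAIN SELECT * FROM ptest WHERE id = 600;
```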
2012/11/12 Craig Ringer cr...@2ndquadrant.com
On 11/12/2012 10:39 AM, 高健 wrote:
The selection used a where condition on every partition table, which
is not what I want. My rule is only about the id column's value
Hi all:
I have one question about the visibility of explain plan.
Firstly, I was inserting data into a table. I used: [ insert into
ptest select * from test02; ]
The test02 table has 10,000,000 records, and ptest is a parent table
which has two child tables for distribution ---
Hi Jeff
Thank you for your reply.
I will try to learn about effective_cache_size.
Jian gao
2012/11/9 Jeff Janes jeff.ja...@gmail.com
On Wed, Nov 7, 2012 at 11:41 PM, 高健 luckyjack...@gmail.com wrote:
Hi all:
What confuses me is: when I select data using an ORDER BY clause, I
got
, 2012 at 11:17 PM, 高健 luckyjack...@gmail.com wrote:
Hi all:
I want to see the explain plan for a simple query. My question is: how
is the cost calculated?
The cost parameter is:
random_page_cost= 4
seq_page_cost = 1
cpu_tuple_cost
Hi all:
I have one question about cache clearing.
If I use the following soon after database startup (or the first time I use it):
postgres=# explain analyze select id,deptno from gaotab where id=200;
QUERY PLAN
Hi tom
At first I thought that the database parsed my explain statement,
so the pre-compiled execution plan would be re-used, which made the
statement's second run quick.
I think that what you said is right.
Thank you
2012/11/7 Tom Lane t...@sss.pgh.pa.us
高健
Hi all:
I want to see the explain plan for a simple query. My question is: how
is the cost calculated?
The cost parameter is:
random_page_cost = 4
seq_page_cost = 1
cpu_tuple_cost = 0.01
cpu_operator_cost = 0.0025
And the table and its index
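The documented first-order formula for a sequential scan, using the parameters quoted above (a sketch; the real planner adds per-operator CPU costs, and the page/row counts here are made-up numbers):

```sql
-- cost(seq scan) ≈ seq_page_cost * relpages + cpu_tuple_cost * reltuples
-- e.g. for a table of 100 pages and 10000 rows:
--      1 * 100 + 0.01 * 10000 = 200
SELECT relpages, reltuples FROM pg_class WHERE relname = 'gaotab';
```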
Hi all:
What confuses me is: when I select data using an ORDER BY clause, I got
the following execution plan:
postgres=# set session enable_indexscan=true;
SET
postgres=# explain SELECT * FROM pg_proc ORDER BY oid;
QUERY PLAN
Hi all:
I am trying to understand when the bgwriter writes.
I thought that bgwriter.c's call chain is:
BackgroundWriterMain -> BgBufferSync -> SyncOneBuffer
And In my postgresql.conf , the bgwriter_delay=200ms.
I did the following:
postgres=# select * from testtab;
id | val
Hi:
I am now reading bgwriter.c's source code, and found:
pqsignal(SIGHUP, BgSigHupHandler); /* set flag to read config file */
So I tried to use kill -s SIGHUP to confirm it.
I found that if I directly send SIGHUP to the bgwriter, it has no response.
If I send SIGHUP to its parent, postgres,
In /src/include/storage/proc.h
I saw the following line:
extern PGDLLIMPORT PGPROC *MyProc;
I want to know why PGDLLIMPORT is used here.
Does it mean the same as: extern PGPROC *MyProc; ?
In src/backend/postmaster/bgwriter.c , I can find the following source
code(PostgreSQL9.2):
/*
* GUC parameters
*/
int BgWriterDelay = 200;
...
rc = WaitLatch(&MyProc->procLatch,
WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH,
BgWriterDelay /* ms */ );
...
if (rc == WL_TIMEOUT
I am new to PostgreSQL's SPI (Server Programming Interface).
I can understand PostgreSQL's examples of using SPI, but I am not sure about
SPI_prepare's parameters.
void * SPI_prepare(const char * command, int nargs, Oid * argtypes)
Can somebody kindly give an example of using SPI_prepare?
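A minimal sketch of SPI_prepare usage, assuming a C function already running inside the backend (loaded via CREATE FUNCTION ... LANGUAGE C); the table, column, and variable names are illustrative, not from any real example:

```c
#include "postgres.h"
#include "executor/spi.h"
#include "catalog/pg_type.h"

static void
run_prepared_example(void)
{
    Oid   argtypes[1] = { INT4OID };            /* $1 is an int4 */
    Datum values[1]   = { Int32GetDatum(42) };  /* value bound to $1 */

    SPI_connect();
    /* nargs = 1; argtypes[] gives the type of each $n placeholder */
    void *plan = SPI_prepare("SELECT * FROM tab01 WHERE id = $1",
                             1, argtypes);
    /* NULL nulls array = no NULL args; read_only = true; 0 = no row limit */
    SPI_execute_plan(plan, values, NULL, true, 0);
    SPI_finish();
}
```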