linebuf.data, "could not connect to ");
if (p1)
{
char *p2 = strchr(p1, ':');
if (p2)
memmove(p1 + 17, p2, strlen(p2) + 1);
}
}
```
Thanks,
Hubert
From: Tom Lane
Sent: Monday, January 11, 2021 10:56 AM
To: Hubert Zhang
Cc:
collector
in bgwriter process?
2. Is there any way to check that the bgwriter is running on a standby that is
not in hot standby mode?
3. Is there any other process that sends statistics on a standby besides the
bgwriter? Should we fix them one by one, or add a check in `pgstat_send` directly?
Thanks,
Hubert Zhang
PM
To: Hubert Zhang
Cc: pgsql-hack...@postgresql.org
Subject: RE: Multiple hosts in connection string failed to failover in non-hot
standby mode
Please send emails in text format. Your email was in HTML, and I changed this
reply to text format.
From: Hubert Zhang
> Libpq has sup
,
Hubert Zhang
as well. We print segno on the fly.
On Thu, Feb 20, 2020 at 2:33 PM Hubert Zhang wrote:
> Thanks,
>
> On Thu, Feb 20, 2020 at 11:36 AM Andres Freund wrote:
>
>> Hi,
>>
>> On 2020-02-19 16:48:45 +0900, Michael Paquier wrote:
>> > On Wed, Feb 19, 2020 at 03
Hi Konstantin,
I also vimdiff nodeAgg.c in your PG13 branch with nodeAgg.c in pg's main
repo.
Many functions have changed from PG96 to PG13, e.g. 'advance_aggregates'
and 'lookup_hash_entry'.
The vectorized nodeAgg still seems to follow the PG96 way of implementing
these functions.
In general, I think
On Wed, Feb 26, 2020 at 7:59 PM Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:
>
>
> On 26.02.2020 13:11, Hubert Zhang wrote:
>
>
>
>> and with JIT:
>>
>> 13.88% postgres postgres [.] tts_buffer_heap_getsomeattrs
Hi Konstantin,
On Tue, Feb 25, 2020 at 6:44 PM Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:
>
>
> On 25.02.2020 11:06, Hubert Zhang wrote:
>
> Hi Konstantin,
>
> I checkout your branch pg13 in repo
> https://github.com/zhangh43/vectorize_engine
> Af
provide
your compile options, the TPCH dataset size, and your queries (standard
Q1?) to help me debug it.
On Mon, Feb 24, 2020 at 8:43 PM Hubert Zhang wrote:
> Hi Konstantin,
> I have added you as a collaborator on github. Please accept and try
> again.
> I think non collaborato
Hi Konstantin,
I have added you as a collaborator on github. Please accept and try again.
I think non-collaborators can also open pull requests.
On Mon, Feb 24, 2020 at 8:02 PM Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:
>
>
> On 24.02.2020 05:08, Hubert Zhan
Hi
On Sat, Feb 22, 2020 at 12:58 AM Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:
>
>
> On 12.02.2020 13:12, Hubert Zhang wrote:
>
> On Tue, Feb 11, 2020 at 1:20 AM Konstantin Knizhnik <
> k.knizh...@postgrespro.ru> wrote:
>
>>
>> So loo
elation %u, file \"%s\"",
> blockNum, smgr->smgr_rnode.node.relNode, smgrfname()
>
> All of them are not compile-time constant at all.
>
>
I like your error message; the block number is relation-level, not file
level.
I'll change the error message to
"invalid page in block %u of relation %u, file %s"
--
Thanks
Hubert Zhang
BM_ZERO_ON_ERROR) to control it.
To get rid of SetZeroDamagedPageInChecksum, one idea is to pass
zero_damaged_page flag into smgrread(), something like below:
==
extern void smgrread(SMgrRelation reln, ForkNumber forknum,
BlockNumber blocknum, char *buffer, int flag);
===
Any comments?
--
Thanks
Hubert Zhang
On Wed, Feb 12, 2020 at 5:22 PM Hubert Zhang wrote:
> Thanks Andres,
>
> On Tue, Feb 11, 2020 at 5:30 AM Andres Freund wrote:
>
>> Hi,
>>
>> On 2020-02-10 16:04:21 +0800, Hubert Zhang wrote:
>> > Currently we only print block number and relation path
columnar store.
I think when we support this extension
on master, we could try the new zedstore.
I'm not actively working on this now, but will continue when I have time. Feel
free to join and bring VOPS's features into this extension.
Thanks
Hubert Zhang
Thanks Andres,
On Tue, Feb 11, 2020 at 5:30 AM Andres Freund wrote:
> Hi,
>
> On 2020-02-10 16:04:21 +0800, Hubert Zhang wrote:
> > Currently we only print block number and relation path when checksum
> check
> > fails. See example below:
> >
> > ERROR: inva
/656195, file
path base/65959/656195.2
Patch is attached.
--
Thanks
Hubert Zhang
0001-Print-physical-file-path-when-checksum-check-fails.patch
Description: Binary data
Thanks Konstantin,
Your suggestions are very helpful. I have added them into issues of
vectorize_engine repo
https://github.com/zhangh43/vectorize_engine/issues
On Wed, Dec 4, 2019 at 10:08 PM Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:
>
>
> On 04.12.2019 12:13, Hub
Thanks Konstantin for your detailed review!
On Tue, Dec 3, 2019 at 5:58 PM Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:
>
>
> On 02.12.2019 4:15, Hubert Zhang wrote:
>
>
> The prototype extension is at https://github.com/zhangh43/vectorize_engine
>
>
On Sun, Dec 1, 2019 at 10:05 AM Michael Paquier wrote:
> On Thu, Nov 28, 2019 at 05:23:59PM +0800, Hubert Zhang wrote:
> > Note that the vectorized executor engine is based on PG9.6 now, but it
> > could be ported to master / zedstore with some effort. We would
> appreciate
Hi Konstantin,
Thanks for your reply.
On Fri, Nov 29, 2019 at 12:09 AM Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:
> On 28.11.2019 12:23, Hubert Zhang wrote:
>
> We just want to introduce another POC for vectorized execution engine
> https://github.com/zhangh4
ster / zedstore with some effort. We would appreciate
some feedback before moving further in that direction.
Thanks,
Hubert Zhang, Gang Xiong, Ning Yu, Asim Praveen
/* then call ExecHashIncreaseNumBatches() to do the real spill */
}
/* probe stage */
tuple = ReadFromFile(S[i + Bi * k]);
batchno = NewExecHashGetBucketAndBatch();
if (batchno == curbatch)
    probe_and_match(tuple);      /* probe and match in memory */
else
    spillToFile(tuple, batchno);
}
```
This solution only splits the batches that need splitting, in a lazy way.
If this solution makes sense, I would like to write the real patch.
Any comment?
--
Thanks
Hubert Zhang
Hi all,
Is there any way to create a named portal in PG other than a cursor?
I tried the PostgreSQL JDBC driver with PreparedStatement. The backend
receives the `bind` and `execute` messages, but the portal name is still
empty. How can I specify the portal name?
--
Thanks
Hubert Zhang
Thanks, Thomas.
On Mon, Jul 8, 2019 at 6:47 AM Thomas Munro wrote:
> On Mon, Feb 18, 2019 at 7:39 PM Hubert Zhang wrote:
> > Based on the assumption we use smgr as hook position, hook API option1
> or option2 which is better?
> > Or we could find some balanced API between o
Hi Tomas,
Here is the patch; it should be compatible with your patch, and it focuses on
when to regrow the batches.
On Tue, May 28, 2019 at 3:40 PM Hubert Zhang wrote:
> On Sat, May 4, 2019 at 8:34 AM Tomas Vondra
> wrote:
>
>> The root cause is that hash join treats batches as p
s).
nbatch_inmemory in your patch could also be redefined using the rule above.
What's your opinion?
Thanks
Hubert Zhang
Thanks Tomas.
I will follow up on this problem in your thread; this thread can be closed.
On Thu, May 16, 2019 at 3:58 AM Tomas Vondra
wrote:
> On Wed, May 15, 2019 at 06:19:38PM +0800, Hubert Zhang wrote:
> >Hi all,
> >
> >When we build hash table for a hash join node
htable->spaceAllowed, which is
the threshold that determines whether to increase the batch number.
If a batch split fails, we increase the penalty instead of just turning off
the growEnable flag.
Any comments?
--
Thanks
Hubert Zhang
0001-Using-growPenalty-to-replace-growEnable-in-hashtable.patch
Descriptio
Hi Andres
On Sat, Feb 16, 2019 at 12:53 PM Andres Freund wrote:
> Hi,
> On 2019-01-30 10:26:52 +0800, Hubert Zhang wrote:
> > Hi Michael, Robert
> > For your question about the hook position, I want to explain more about
> the
> > background why we want to introduc
between option1 and option2?
Again comments on other better hook positions are appreciated!
Thanks
Hubert
On Wed, Jan 30, 2019 at 10:26 AM Hubert Zhang wrote:
> Hi Michael, Robert
> For your question about the hook position, I want to explain more about the
> background why we want to
better hook positions recommended to solve the above use case?
Thanks in advance.
Hubert
On Tue, Jan 22, 2019 at 12:08 PM Hubert Zhang wrote:
> > For this particular purpose, I don't immediately see why you need a
>> > hook in both places. If ReadBuffer is called with P_NEW, aren'
to extend, unlink, etc. depending on the storage type.
>>
>> > For this particular purpose, I don't immediately see why you need a
>> > hook in both places. If ReadBuffer is called with P_NEW, aren't we
>> > guaranteed to end up in smgrextend()?
>>
>> Yes, that's a bit awkward.
>> --
>> Michael
>
>
--
Thanks
Hubert Zhang
disk_quota_hooks_v3.patch
Description: Binary data
Postgres. We
updated our patch in commitfest/21/1883
<https://commitfest.postgresql.org/21/1883/>. There is no reviewer yet.
Please help review this patch if you are interested in the diskquota
extension. Thanks in advance!
--
Thanks
Hubert Zhang
Both BufferExtendCheckPerms_hook_type and
>> SmgrStat_hook_type are imagining that they know what the hook does -
>> CheckPerms in the first case and Stat in the second case.
>>
>> For this particular purpose, I don't immediately see why you need a
>> hook in both places. If ReadBuffer is called with P_NEW, aren't we
>> guaranteed to end up in smgrextend()?
>>
>> --
>> Robert Haas
>> EnterpriseDB: http://www.enterprisedb.com
>> The Enterprise PostgreSQL Company
>>
>
>
>
--
Thanks
Hubert Zhang
AM Tomas Vondra
wrote:
> On Tue, 2018-11-13 at 16:47 +0800, Hubert Zhang wrote:
> > Hi all,
> >
> > We implement disk quota feature on Postgresql as an extension(link:
> > https://github.com/greenplum-db/diskquota),
> > If you are interested, try and use it to l
le, i.e. the owner of the temp table,
diskquota will treat it the same as normal tables and add its table size to
its owner's quota. As for schema, a temp table is located under the namespace
'pg_temp_backend_id', so its size will not count toward the current
schema's quota.
--
Thanks
Hubert Zhang, Haozhou Wang, Hao Wu, Jack WU
ery hard to find some way of solving this problem that
> > doesn't require reading data from a table that hasn't been committed
> > yet, because you are almost certainly not going to be able to make
> > that work reliably even if you are willing to write code in C.
>
> +1.
> --
> Michael
>
--
Thanks
Hubert Zhang
a?
--
Thanks
Hubert Zhang
Thanks a lot.
On Wed, Oct 17, 2018 at 11:21 PM Andres Freund wrote:
> Hi,
>
> On 2018-10-17 23:11:26 +0800, Hubert Zhang wrote:
> > The section "Shared Memory and LWLocks" describes the AddinShmemInitLock,
> > which
> > is used to protect the ShmemInitStruct
bgworkers specific.
On Wed, Oct 17, 2018 at 7:51 PM Amit Kapila wrote:
> On Wed, Oct 17, 2018 at 3:49 PM Hubert Zhang wrote:
> >
> > Hi all,
> >
> > I want to init SHM in a background worker, which is supported in PG9.4.
> Also I need to use lwlock to protect t
inside worker
init code in PG 9.4?
--
Thanks
Hubert Zhang
not hard to modify, I don't think this should block the main
design of the disk quota feature. Are there any comments on the design and
architecture? If not, we'll submit our patch first and invite more
discussion.
On Sat, Sep 22, 2018 at 3:03 PM Pavel Stehule
wrote:
>
>
> so 22. 9. 2018 v 8
limit
for different roles, schemas, or tables instead of a single GUC value.
On Sat, Sep 22, 2018 at 11:17 AM Pavel Stehule
wrote:
>
>
> pá 21. 9. 2018 v 16:21 odesílatel Hubert Zhang napsal:
>
>> just fast reaction - why QUOTA object?
>>> Isn't ALTER SET enough?
>>> So
>
> pá 21. 9. 2018 v 13:32 odesílatel Hubert Zhang napsal:
>
>>
>>
>>
>>
>> Hi all, we redesigned the disk quota feature based on the comments from Pavel
>> Stehule and Chapman Flack. Here is the new design. Overview: basically, the
>> disk quota feature is use
quota feature are appreciated.
On Mon, Sep 3, 2018 at 12:05 PM, Pavel Stehule
wrote:
>
>
> 2018-09-03 3:49 GMT+02:00 Hubert Zhang :
>
>> Thanks Pavel.
>> Your patch did enforcement on storage level(md.c or we could also use
>> smgr_extend). It's straight forward
to the collector after
the transaction ends (becomes idle).
As an enhancement, we also want to get the active table while the
transaction inserting the table is in progress. Delay is acceptable.
Are there any existing ways in PG to support it?
--
Thanks
Hubert Zhang
:
> Hi
>
> 2018-09-02 14:18 GMT+02:00 Hubert Zhang :
>
>> Thanks Chapman.
>> @Pavel, could you please explain more about your second suggestion
>> "implement
>> some quotas on storage level?"
>>
>
> See attached patch - it is very simple
rred, native feature or
extension as the POC?
-- Hubert
On Fri, Aug 31, 2018 at 3:32 AM, Pavel Stehule
wrote:
>
>
> 2018-08-30 16:22 GMT+02:00 Chapman Flack :
>
>> On 08/30/2018 09:57 AM, Hubert Zhang wrote:
>>
>> > 2 Keep one worker process for each database
is case.
Any better ideas on it?
--
Thanks
Hubert Zhang
Hello all.
A background worker can use SPI to read a database, but it can call
BackgroundWorkerInitializeConnection(dbname) only once.
I wonder if there is a way to let a child process of the postmaster access
all the databases one by one?
--
Thanks
Hubert Zhang
Hi Heikki,
Not working on it now, you can go ahead.
On Fri, Jun 22, 2018 at 12:56 AM, Heikki Linnakangas
wrote:
> Hi Hubert,
>
> Are you working on this, or should I pick this up? Would be nice to get
> this done as soon as v12 development begins.
>
> - Heikki
>
--
Thanks
Hubert Zhang
). We still have a lot of
issues to make it production-ready and share it with more people. [Github
umbrella project](https://github.com/greenplum-db/plcontainer/projects/1)
If you are interested in it, feel free to try it. Your suggestions and
contributions will be appreciated.
--
Thanks
Hubert Zhang
e
e.g. prev_hook = cancel_hook; cancel_hook=my_hook; void
my_hook(){mywork(); (*prev_hook)();} )?
I didn't find any explicit hook list in the PG code base; is that a good
practice?
-- Hubert
On Mon, May 14, 2018 at 6:40 PM, Heikki Linnakangas <hlinn...@iki.fi> wrote:
> On 14/05/18
, May 11, 2018 at 9:28 PM, Heikki Linnakangas <hlinn...@iki.fi> wrote:
>
>
> On 11 May 2018 10:01:56 EEST, Hubert Zhang <hzh...@pivotal.io> wrote:
> >2. Add a flag in hook function to indicate whether to call
> >Py_AddPendingCall.
> >This is straightforward.(I
_PG_init() for each extension. If we follow this way, a delete hook is not
needed.
Any comments?
On Thu, May 10, 2018 at 10:50 PM, Heikki Linnakangas <hlinn...@iki.fi>
wrote:
> On 10/05/18 09:32, Hubert Zhang wrote:
>
>> Hi all,
>>
>> I want to support canceling for a p
interruption
int added = Py_AddPendingCall(PLy_python_interruption_handler, NULL);
if (coreIntHandler) {
(*coreIntHandler)(sig);
}
}
Does anyone have some comments on this patch?
As for me, I think the handler function should call PyErr_SetInterrupt()
instead of PyErr_SetString(PyExc_RuntimeError, "test ex