RRUPTED),
> + errmsg("index table contains empty page")));
>
>
> Do we want to give a separate message for EMPTY and NEW pages? Isn't
> it better to give the same error message for both of them, as
> from a user's perspective there is not much difference between b
experimentation. The
>> details of the non-default GUC params and the pgbench command are mentioned in
>> the result sheet. I also did the benchmarking with unique values at
>> 300 and 1000 scale factor and its results are provided in
>> 'results-unique-values-default-ff'.
>
Hi,
On Wed, Mar 22, 2017 at 8:41 AM, Amit Kapila wrote:
> On Tue, Mar 21, 2017 at 11:49 PM, Ashutosh Sharma
> wrote:
>>>
>>> I can confirm that that fixes the seg faults for me.
>>
>> Thanks for confirmation.
>>
>>>
>>> Did you me
rdAssemble, it
first adds all the data associated with registered buffers into the WAL
record, followed by the main data. Hence, the WAL records in btree
and hash are organised differently.
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
--
Sent via pgsql-hackers
roduce the issue on my local machine using the test script you
shared. Could you please check with the attached patch whether you are
still seeing the issue. Thanks in advance.
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
corrected_block_id_reference_in_hash_vac
On Mon, Mar 20, 2017 at 6:53 PM, Ashutosh Sharma wrote:
> On Mon, Mar 20, 2017 at 9:31 AM, Amit Kapila wrote:
>> On Sat, Mar 18, 2017 at 5:13 PM, Ashutosh Sharma
>> wrote:
>>> On Sat, Mar 18, 2017 at 1:34 PM, Amit Kapila
>>> wrote:
>>>> On S
LCKSZ);
Attached is the patch that corrects above comment. Thanks.
[1] -
https://www.postgresql.org/message-id/CAMkU%3D1y6NjKmqbJ8wLMhr%3DF74WzcMALYWcVFhEpm7i%3DmV%3DXsOg%40mail.gmail.com
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
corrected_comments_hash_alloc_buckets.
On Mon, Mar 20, 2017 at 9:31 AM, Amit Kapila wrote:
> On Sat, Mar 18, 2017 at 5:13 PM, Ashutosh Sharma
> wrote:
>> On Sat, Mar 18, 2017 at 1:34 PM, Amit Kapila wrote:
>>> On Sat, Mar 18, 2017 at 12:12 AM, Ashutosh Sharma
>>> wrote:
>>>> On Fri, M
and
ANALYZE. This may be
expanded in the future.
3) I think the above needs to be rephrased. Something like: Currently,
the supported progress reporting commands are 'VACUUM' and
'ANALYZE'.
Moreover, I also ran your patch on the Windows platform and didn't find
a
ECT viewname, definition FROM pg_views WHERE schemaname <>
'information_schema' ORDER BY viewname;
I am still reviewing your patch and doing some testing. Will update if
I find any issues. Thanks.
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
On Fri, Mar 17, 2017 at 3:16 P
On Sat, Mar 18, 2017 at 1:34 PM, Amit Kapila wrote:
> On Sat, Mar 18, 2017 at 12:12 AM, Ashutosh Sharma
> wrote:
>> On Fri, Mar 17, 2017 at 10:54 PM, Jeff Janes wrote:
>>> While trying to figure out some bloating in the newly logged hash indexes,
>>> I'm lo
sn't it unhelpful to have the
> pageinspect module throw errors, rather than returning a dummy value to
> indicate there was an error?
Well, this is not specific to hash indexes, so I have no answer :)
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
On Fri, Mar 17, 2017 at 6:13 PM, Amit Kapila wrote:
> On Fri, Mar 17, 2017 at 12:27 PM, Ashutosh Sharma
> wrote:
>> On Fri, Mar 17, 2017 at 8:20 AM, Amit Kapila wrote:
>>
>> As I said in my previous e-mail, I think you need
>>> to record clearing of this flag
On Fri, Mar 17, 2017 at 9:03 AM, Amit Kapila wrote:
> On Thu, Mar 16, 2017 at 1:15 PM, Ashutosh Sharma
> wrote:
>> Hi,
>>
>> Attached is the patch that allows WAL consistency tool to mask
>> 'LH_PAGE_HAS_DEAD_TUPLES' flag in hash index. The flag got ad
On Fri, Mar 17, 2017 at 8:20 AM, Amit Kapila wrote:
> On Thu, Mar 16, 2017 at 9:39 PM, Ashutosh Sharma
> wrote:
>>>>
>>>
>>> Don't you think, we should also clear it during the replay of
>>> XLOG_HASH_DELETE? We might want to log the clear of f
DELETE? We might want to log the clear of flag along with
> WAL record for XLOG_HASH_DELETE.
>
Yes, it should be cleared; I completely missed this part in my hurry.
Thanks for pointing it out. I have taken care of it in the attached v2
patch.
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://w
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
On Wed, Mar 15, 2017 at 11:27 AM, Kuntal Ghosh
wrote:
> On Wed, Mar 15, 2017 at 12:32 AM, Robert Haas wrote:
>> On Mon, Mar 13, 2017 at 10:36 AM, Ashutosh Sharma
>> wrote:
>>> Couple of review c
On Thu, Mar 16, 2017 at 11:11 AM, Amit Kapila wrote:
> On Wed, Mar 15, 2017 at 9:23 PM, Ashutosh Sharma
> wrote:
>>
>>>
>>> Few other comments:
>>> 1.
>>> + if (ndeletable > 0)
>>> + {
>>> + /* N
On Mar 16, 2017 7:49 AM, "Robert Haas" wrote:
On Wed, Mar 15, 2017 at 4:31 PM, Robert Haas wrote:
> On Wed, Mar 15, 2017 at 3:54 PM, Ashutosh Sharma
wrote:
>> Changed as per suggestions. Attached v9 patch. Thanks.
>
> Wow, when do you sleep? Will have a lo
t; this needs reformatting, but it's oddly narrow.
Corrected.
>
> I suggest changing the header comment of
> hash_xlog_vacuum_get_latestRemovedXid like this:
>
> + * Get the latestRemovedXid from the heap pages pointed at by the index
> + * tuples being deleted. See also btree_xlog_del
On Wed, Mar 15, 2017 at 9:28 PM, Robert Haas wrote:
> On Wed, Mar 15, 2017 at 11:37 AM, Ashutosh Sharma
> wrote:
>>> +/* Get RelfileNode from relation OID */
>>> +rel = relation_open(htup->t_tableOid, NoLock);
>>> +rnode = rel->r
>
> I think one possibility is to get it using
> indexrel->rd_index->indrelid in _hash_doinsert().
>
Thanks. I have tried the same in the v7 patch shared upthread.
>
>>
>> But they're not called delete records in a hash index. The function
>> is called hash_xlog_vacuum_one_page. The correspondi
the comments needs some work.
Thanks for that suggestion... I spent a lot of time thinking about
this and also had a small discussion with Amit, but could not find any
issue with taking a cleanup lock on the modified page instead of the
primary bucket page. I had to make some decent code changes for this. Atta
On Mar 14, 2017 5:37 PM, "Alvaro Herrera" wrote:
Ashutosh Sharma wrote:
> Yes. But, as I said earlier, I am getting a negative checksum value for
> page_header as well. Isn't that wrong? For e.g., when I debug the
> following query, I could see the pd_checksum value as '4007
o share you the updated
> patch asap.
>
>>
>>
>> On Tue, Feb 14, 2017 at 8:27 AM, Ashutosh Sharma
>> wrote:
>>>
>>> 1) 0001-Rewrite-hash-index-scans-to-work-a-page-at-a-time.patch: this
>>> patch rewrites the hash index scan module to work in
AD_TUPLES' flag, which got added as part of the
Microvacuum patch, is attached with this mail.
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
On Wed, Feb 1, 2017 at 10:30 AM, Michael Paquier
wrote:
> On Sat, Jan 28, 2017 at 8:02 PM, Amit Kapila wrote:
>> On
(0/304EDE0,-25462,1,220,7432,8192,8192,4,0)
(1 row)
I think pd_checksum in PageHeaderData is defined as uint16 (0 to
65,535), whereas in the SQL function for page_header it is defined as
smallint (-32768 to +32767).
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
Q87rKYzmYQ%40mail.gmail.com
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
On Wed, Mar 8, 2017 at 2:32 PM, Kuntal Ghosh wrote:
> On Fri, Mar 3, 2017 at 9:44 AM, Amit Kapila wrote:
>> On Tue, Feb 28, 2017 at 11:06 AM, Kuntal Ghosh
>> wrote:
>>> Hel
btree_index"
postgres=# SELECT * FROM bt_page_items('btree_index', 1024) LIMIT 1;
ERROR: block number out of range
5) Code duplication in bt_page_items() and bt_page_items_bytea() needs
to be handled.
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
ving so many INSERT statements in the test.sql
file. I think it would be better to replace them with a single SQL
statement. Thanks.
[1]-
https://www.postgresql.org/message-id/CAA4eK1KibVzgVETVay0%2BsiVEgzaXnP5R21BdWiK9kg9wx2E40Q%40mail.gmail.com
[2]-
https://www.postgresql.org/message-id/CAE9k0PkRSyzx8d
commit.
Amit Langote, reviewed by David Fetter
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
20606.288995
[1] - https://msdn.microsoft.com/en-IN/library/ms190730.aspx
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
On Thu, Feb 23, 2017 at 12:59 PM, Tsunakawa, Takayuki
wrote:
> From: Amit Kapila [mailto:amit.kapil...@gmail.com]
>> > Hmm, the large-page require
On Tue, Mar 7, 2017 at 11:18 AM, Amit Kapila wrote:
> On Tue, Mar 7, 2017 at 10:22 AM, Ashutosh Sharma
> wrote:
>>> I also think that commit didn't intend to change the behavior,
>>> however, the point is how sensible is it to keep such behavior after
>>> P
: macro
redefinition
c:\users\ashu\postgresql\src\include\pg_config_manual.h20
Apart from these, I don't have any other comments as of now. I am still
validating the patch on Windows. If I find any issues, I will update
it.
--
With Regards,
Ashutosh Sharma.
EnterpriseDB: http://www.enterprisedb.co
f-condition
is satisfied.
if (heap_pages < (BlockNumber) min_parallel_table_scan_size &&
    index_pages < (BlockNumber) min_parallel_index_scan_size &&
    rel->reloptkind == RELOPT_BASEREL)
    return 0;
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
s not aware of Parallel Append. Thanks.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
ving zero heap pages didn't get parallel
workers, other childRels that were good enough to go for a Parallel Seq
Scan had to fall back to a normal seq scan, which could be costlier.
Fix:
Attached is the patch that fixes this issue.
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterpri
On Wed, Mar 1, 2017 at 5:29 PM, Simon Riggs wrote:
>
> On 1 March 2017 at 04:50, Ashutosh Sharma wrote:
>>
>> On Tue, Feb 28, 2017 at 11:44 PM, Simon Riggs wrote:
>>>
>>> On 28 February 2017 at 11:34, Ashutosh Sharma wrote:
>>>
>>
On Tue, Feb 28, 2017 at 11:44 PM, Simon Riggs wrote:
> On 28 February 2017 at 11:34, Ashutosh Sharma
> wrote:
>
>
>> So, here are the pgbench results I got with
>> 'reduce_pgxact_access_AtEOXact.v2.patch' on a read-write workload.
>>
>
> Tha
Hi,
On Fri, Feb 24, 2017 at 12:22 PM, Ashutosh Sharma
wrote:
On Fri, Feb 24, 2017 at 10:41 AM, Simon Riggs wrote:
> > On 24 February 2017 at 04:41, Ashutosh Sharma
> wrote:
> >>
> >> Okay. As suggested by Alexander, I have changed the order of reading and
> &g
On Fri, Feb 24, 2017 at 10:41 AM, Simon Riggs wrote:
> On 24 February 2017 at 04:41, Ashutosh Sharma wrote:
>>
>> Okay. As suggested by Alexander, I have changed the order of reading and
>> doing initdb for each pgbench run. With these changes, I got following
>> resul
) CPU E7- 8830 @ 2.13GHz
Also, an Excel sheet (results-readwrite-300-SF) containing the results for all
3 runs is attached.
--
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Thu, Feb 23, 2017 at 2:44 AM, Alvaro Herrera
wrote
On Tue, Feb 21, 2017 at 5:52 PM, Alexander Korotkov
wrote:
> On Tue, Feb 21, 2017 at 2:37 PM, Andres Freund wrote:
>>
>> On 2017-02-21 16:57:36 +0530, Ashutosh Sharma wrote:
>> > Yes, there is still some regression however it has come down by a
>> > small margi
On Tue, Feb 21, 2017 at 4:21 PM, Alexander Korotkov
wrote:
>
> Hi, Ashutosh!
> On Mon, Feb 20, 2017 at 1:20 PM, Ashutosh Sharma
> wrote:
>>
>> Following are the pgbench results for read-write workload, I got with
>> pgxact-align-3 patch. The results are fo
Thanks for reporting it. This is because of incorrect data typecasting.
Attached is the patch that fixes this issue.
On Tue, Feb 21, 2017 at 2:58 PM, Mithun Cy
wrote:
> On Fri, Feb 10, 2017 at 1:06 AM, Robert Haas
> wrote:
>
> > Alright, committed with a little further hacking.
>
> I did pull t
gt;> point. Should be simple enough in gnuplot ...
> >> >
> >> > Good point.
> >> > Please find graph of mean and errors in attachment.
> >>
> >> So ... no difference?
> >
> >
> > Yeah, nothing surprising. It's just another
>> > Good point.
>> > Please find graph of mean and errors in attachment.
>>
>> So ... no difference?
>
>
> Yeah, nothing surprising. It's just another graph based on the same data.
> I wonder how pgxact-align-3 would work on machine of Ashutosh Sharma
Hi All,
I too have performed benchmarking of this patch on a large machine
(with 128 CPU(s), 520GB RAM, intel x86-64 architecture) and would like
to share my observations for the same (please note that, as I had to
reverify the readings on a few client counts, it did take some time for me
to share these
> FWIW it might be interesting to have comparable results from the same
> benchmarks I did. The scripts available in the git repositories should not
> be that hard to tweak. Let me know if you're interested and need help with
> that.
>
Sure, I will have a look into those scripts once I am done wit
Hi,
I am currently testing this patch on a large machine and will share the
test results in a few days.
Please excuse any grammatical errors as I am using my mobile device. Thanks.
On Feb 11, 2017 04:59, "Tomas Vondra" wrote:
> Hi,
>
> As discussed at the Developer meeting ~ a week ago,
> I think you should just tighten up the sanity checking in the existing
> function _hash_ovflblkno_to_bitno rather than duplicating the code. I
> don't think it's called often enough for one extra (cheap) test to be
> an issue. (You should change the elog in that function to an ereport,
> too, s
On Wed, Feb 8, 2017 at 11:26 PM, Robert Haas wrote:
> On Wed, Feb 8, 2017 at 11:58 AM, Ashutosh Sharma
> wrote:
>>> And then, instead, you need to add some code to set bit based on the
>>> bitmap page, like what you have:
>>>
>>> +mapbuf = _ha
>> 1) Check if an overflow page is a new page. If so, read a bitmap page
>> to confirm if a bit corresponding to this overflow page is clear or
>> not. For this, I would first add Assert statement to ensure that the
>> bit is clear and if it is, then set the statusbit as false indicating
>> that th
e negate all the bits. Then if somebody picked the
> wrong macro it would actually fail. I'm not sure that's really the
> best place to spend our effort, though. The moral of this episode is
> that it's important to at least get the right width. Currentl
ds, we won't have any zero
pages in Hash Indexes, so I don't think we need a column showing
zero pages (zero_pages). When working on WAL in hash indexes, we found
that the WAL routine 'XLogReadBufferExtended' does not expect a page to be
completely zero; otherwise it returns Inval
> As far as I can tell, the hash_bitmap_info() function is doing
> something completely ridiculous. One would expect that the purpose of
> this function was to tell you about the status of pages in the bitmap.
> The documentation claims that this is what the function does: it
> claims that this fu
On Sat, Jan 28, 2017 at 10:25 PM, Ashutosh Sharma wrote:
> Hi,
>
> Please find my reply inline.
>
>> In hash_bimap_info(), we go to the trouble of creating a raw page but
>> never do anything with it. I guess the idea here is just to error out
>> if the supplied
On Fri, Feb 3, 2017 at 8:29 PM, Robert Haas wrote:
> On Fri, Feb 3, 2017 at 7:41 AM, Ashutosh Sharma wrote:
>> I think UInt32GetDatum(metad->hashm_procid) looks fine, the reason
>> being 'hashm_procid' is defined as 32-bit unsigned integer but then we
>> may ha
>> I think it's OK to check hash_bitmap_info() or any other functions
>> with different page types at least once.
>>
>> [1]-
>> https://www.postgresql.org/message-id/CA%2BTgmoZUjrVy52TUU3b_Q5L6jcrt7w5R4qFwMXdeCuKQBmYWqQ%40mail.gmail.com
>
> Sure, I just don't know if we need to test them 4 or 5 ti
; I'm not going to fight too hard if Peter wants it that way.
>
I think it's OK to check hash_bitmap_info() or any other functions
with different page types at least once.
[1]-
https://www.postgresql.org/message-id/CA%2BTgmoZUjrVy52TUU3b_Q5L6jcrt7w5R4qFwMXdeCuKQBmYWqQ%40mail.gmai
w.postgresql.org/message-id/CAE9k0Pke046HKYfuJGcCtP77NyHrun7hBV-v20a0TW4CUg4H%2BA%40mail.gmail.com
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
t32GetDatum(), not UInt32GetDatum() or anything else.
> If the SQL type is bool, you should be using BoolGetDatum().
Sorry to mention, but I didn't find any SQL datatype equivalent to
C's uint32 or uint16. So, currently I am using int4 for uint16 and
int8 for uint32.
> A
Hi,
Please find my reply inline.
> In hash_bimap_info(), we go to the trouble of creating a raw page but
> never do anything with it. I guess the idea here is just to error out
> if the supplied page number is not an overflow page, but there are no
> comments explaining that. Anyway, I'm not su
Shared Buffer= 1GB
Client counts = 64
run time duration = 30 mins
read-write workload.
./pgbench -c $threads -j $threads -T $time_for_reading -M prepared
postgres -f /home/ashu/deadlock_report
I hope this makes things clear now, and if there are no more
concerns it can be moved to 'Re
>> Secondly, we will have to input overflow block number as an input to
>> this function so as to determine the overflow bit number which can be
>> used further to identify the bitmap page.
>>
>
> I think you can get that from bucket number by using BUCKET_TO_BLKNO.
> You can get bucket number from
right to call it from
contrib modules like pgstattuple where we are just trying to read the
tables.
--
With Regards,
Ashutosh Sharma
EnterpriseDB:http://www.enterprisedb.com
0001-Add-pgstathashindex-to-pgstattuple-extension-v5.patch
Jan 20 20:29:53 2017 -0500
Move some things from builtins.h to new header files
On Thu, Jan 19, 2017 at 12:27 PM, Ashutosh Sharma wrote:
>> However, I've some minor comments on the patch:
>>
>> +/*
>> + * HASH_ALLOCATABLE_PAGE_SZ represents allocatable
>&
uot;
\set: error while setting variable
The above error message should also include some expected values.
Please note that I haven't gone through the entire mail chain, so I am not
sure if the above thoughts have already been raised by others. Sorry about
that.
With Regards,
Ashutosh Sharma
EnterpriseDB:
, it would be great if you could confirm whether you have been
getting this issue repeatedly. Thanks.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
index-in-pageinspect-contrib-mo_v7.patch:38:
trailing whitespace.
pageinspect--1.0--1.1.sql pageinspect--unpackaged--1.0.sql
pgindent:
-
./src/tools/pgindent/pgindent contrib/pageinspect/hashfuncs.c
On Wed, Jan 18, 2017 at 9:15 PM, Jesper Pedersen
wrote:
> Hi,
>
>
> On 01/1
d(rel, MAIN_FORKNUM, blkno,
> RBM_NORMAL, NULL);
> Use BAS_BULKREAD strategy to read the buffer.
>
Okay, corrected. Please check the attached v3 patch.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
0001-Add-pgstathashindex-to-pgstattuple-extension-v3.patc
t; ..
> }
>
> Don't you think we should free the allocated memory in this function?
> Also, why are you using 5 as a multiplier in both the above pallocs,
> shouldn't it be 4?
Yes, we should free it. We have used 5 as a multiplier instead of 4
because of the ' ' charact
ary? I think this was copied from btreefuncs, but there
> is no buffer release in this code.
Yes, it was copied from btreefuncs and is not required in this case, as
we are already passing raw_page as an input to hash_page_items. I have
taken care of it in the updated patch shared upthread
Hi,
> +values[j++] = UInt16GetDatum(uargs->offset);
> +values[j++] = CStringGetTextDatum(psprintf("(%u,%u)",
> +
> BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
> +itup->t_tid.ip_posid));
> +
> +ptr = (char *) itup + IndexInfoFi
f check' to ensure that we do not go
beyond the page size while reading tuples.
ptr = (char *) itup + IndexInfoFindDataOffset(itup->t_info);
+ if (ptr > page + BLCKSZ)
+ /* Error */
dlen = IndexTupleSize(itup) - IndexInfoFindDataOffset(itup->t_info);
Meanwhile, I am workin
d v5 patch that fixes the issue.
>
> Also, the src/backend/access/README file should be updated with a
> description of the changes which this patch provides.
okay, I have updated the insertion algorithm in the README file.
--
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www
with other database objects. Other than that,
I feel the patch looks good and has no bugs.
--
With Regards,
Ashutosh Sharma.
EnterpriseDB: http://www.enterprisedb.com
t; +
> Please remove extra spaces.
Done. Please refer to v2 patch.
>
> And, please add some test cases for regression tests.
>
Added a test case. Please check the v2 patch attached to this mail.
--
With Regards,
Ashutosh Sharma.
EnterpriseDB: http://www.enterprisedb.com
0001-Add-p
ere as existing
> installations will use the upgrade scripts.
>
> Hence I don't see a reason why we should keep pageinspect--1.5.sql around
> when we can provide a clear interface description in a pageinspect--1.6.sql.
okay, agreed.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enter
ttached v4 patch fixes this assertion failure.
> BTW, better rename 'hashkillitems' to '_hash_kill_items' to follow the
> naming convention in hash.h
okay, I have renamed 'hashkillitems' to '_hash_kill_items'. Please
check the attached v4 patch.
With Reg
aving a
test-case for hash index. In fact, I will try to modify an already
existing patch by Peter.
[1]-https://www.postgresql.org/message-id/bcf8c21b-702e-20a7-a4b0-236ed2363d84%402ndquadrant.com
--
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
e changed the status of this patch to "Needs review" for
this commit-fest.
[1]
https://www.postgresql.org/message-id/a751842f-2aed-9f2e-104c-34cfe06bfbe2%40redhat.com
With Regards,
Ashutosh Sharma.
EnterpriseDB: http://www.enterprisedb.com
microvacuum_hash_index_v3.patch
ve thought of introducing this function. Attached is the patch for
the same. Please have a look and let me know your feedback. I would also
like to mention that this idea basically came from my colleague Kuntal
Ghosh, and I implemented it. I have also created a commit-fest entry
for this submission. T
Hi Jesper,
> I was planning to submit a new version next week for CF/January, so I'll
> review your changes with the previous feedback in mind.
>
> Thanks for working on this !
As I was not seeing any updates from you for the last month, I thought
of working on it. I have created a commit-fest entr
Hi All,
I have spent some time reviewing the latest v8 patch from Jesper
and found some issues which I would like to list down:
1) Firstly, the DATA section in the Makefile is referring to
"pageinspect--1.6.sql" file and currently this file is missing.
+DATA = pageinspect--1.6.sql pagein
> It is fine as per current usage of WaitEventSet API's, however,
> Robert's point is valid that user is not obliged to call
> ModifyWaitEvent before WaitEventSetWait. Imagine a case where some
> new user of this API is calling WaitEventSetWait repeatedly without
> calling ModifyWaitEvent.
Oops!
a new variable. I think
that is just because we are trying to avoid the events for SOCKET from
being re-created again and again. So, I think Amit's fix is
absolutely fine and is restricted to Windows. Please correct me if
my point is wrong. Thank you.
With Regards,
Ashutosh Sharma
Enterp
Hi Michael,
> I have just read the patch, and hardcoding the array position for a
> socket event to 0 assumes that the socket event will be *always* the
> first one in the list. But that cannot be true all the time; any code
> that does not register the socket event as the first one in the list
>
Hi Michael,
> I bet that this patch breaks many things for any non-WIN32 platform.
It seems to me like you have already identified some issues while
testing. If yes, could you please share them. I have tested my patch on
CentOS-7 and Windows-7 machines and have found no issues. I ran all
the regression
Hi,
> Okay, but I think we need to re-enable the existing event handle for
> required event (FD_READ) by using WSAEventSelect() to make it work
> sanely. We have tried something on those lines and it resolved the
> problem. Ashutosh will post a patch on those lines later today. Let
> us know if
suggestions or inputs would be appreciated.
On Tue, Dec 13, 2016 at 9:34 PM, Ashutosh Sharma wrote:
> Hi Michael,
>
>>
>> Ashutosh, could you try it and see if it improves things?
>> -
>
> Thanks for your patch. I would like to inform you that I didn't find any
>
Hi Michael,
> Ashutosh, could you try it and see if it improves things?
> -
>
Thanks for your patch. I would like to inform you that I didn't find any
improvement with your patch.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
03:16:09PM +0530, Ashutosh Sharma wrote:
> > Problem Analysis:
> > -
> > Although I am very new to Windows, I tried debugging the issue and
> > found that the Backend is not receiving the query executed after
> > "SELECT pldbg_attach_to
0x19 bytes C
postgres.exe!mainCRTStartup() Line 371 C
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
latest commit in the master branch and could not reproduce it here either.
Amit (included in this email thread) has also tried it once and was also
not able to reproduce it.
Could you please let me know if there is something more that needs to be
done in order to reproduce it other than what you have
to share you a next version of patch for
supporting microvacuum in hash index.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
e an opinion from
> committer or others as well before adding this target. What do you
> say?
Ok. We can do that.
I have verified the updated v2 patch. The patch looks good to me. I am
marking it as ready for committer. Thanks.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.c
Hi,
> What about the patch attached to make things more consistent?
I have reviewed this patch. It does serve the purpose and looks sane
to me. I am marking it as ready for committer.
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
Hi,
I have started reviewing this patch and would like to share
some of my initial review comments that require the author's attention.
1) I am getting some trailing-whitespace errors when trying to apply
this patch using the git apply command. Following are the error messages I
am getting.
[
Hi,
> While replaying the delete/vacuum record on standby, it can conflict
> with some already running queries. Basically the replay can remove
> some row which can be visible on standby. You need to resolve
> conflicts similar to what we do in btree delete records (refer
> btree_xlog_delete).