Re: 32TB relation size makes mdnblocks overflow
I think we have hit a corner case in our storage. The relation reached 32TB (2^32 blocks), so 'mdnblocks' returns an unexpected value; we will check it again. Thanks a lot.
Re: 32TB relation size makes mdnblocks overflow
Julien Rouhaud writes:
> On Tue, Jan 18, 2022 at 02:21:14PM +0800, 陈佳昕(步真) wrote:
>> We know that PostgreSQL doesn't support a single relation size over 32TB,
>> limited by the MaxBlockNumber. But if we just 'insert into' one relation over
>> 32TB, it will get an error message 'unexpected data beyond EOF in block 0 of
>> relation' in ReadBuffer_common.

> I didn't try it but this is supposed to be caught by mdextend():
> ...
> Didn't you hit this?

Probably not, if the OP was testing something predating 8481f9989, ie anything older than the latest point releases. (This report does seem to validate my comment in the commit log that "I think it might confuse ReadBuffer's logic for data-past-EOF later on". I'd not bothered to build a non-assert build to check that, but this looks about like what I guessed would happen.)

regards, tom lane
Re: 32TB relation size makes mdnblocks overflow
Hi,

On Tue, Jan 18, 2022 at 02:21:14PM +0800, 陈佳昕(步真) wrote:
> We know that PostgreSQL doesn't support a single relation size over 32TB,
> limited by the MaxBlockNumber. But if we just 'insert into' one relation over
> 32TB, it will get an error message 'unexpected data beyond EOF in block 0 of
> relation' in ReadBuffer_common. The '0 block' comes from the mdnblocks
> function, where the segment number exceeds 256 and makes segno * RELSEG_SIZE
> overflow uint32's max value. So is it necessary to make the error message
> more readable, like 'The relation size is over max value ...', and elog in
> mdnblocks?

I didn't try it but this is supposed to be caught by mdextend():

	/*
	 * If a relation manages to grow to 2^32-1 blocks, refuse to extend it any
	 * more --- we mustn't create a block whose number actually is
	 * InvalidBlockNumber.  (Note that this failure should be unreachable
	 * because of upstream checks in bufmgr.c.)
	 */
	if (blocknum == InvalidBlockNumber)
		ereport(ERROR,
				(errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
				 errmsg("cannot extend file \"%s\" beyond %u blocks",
						relpath(reln->smgr_rnode, forknum),
						InvalidBlockNumber)));

Didn't you hit this?
32TB relation size makes mdnblocks overflow
Hello

We know that PostgreSQL doesn't support a single relation size over 32TB, limited by the MaxBlockNumber. But if we just 'insert into' one relation over 32TB, it will get an error message 'unexpected data beyond EOF in block 0 of relation' in ReadBuffer_common. The '0 block' comes from the mdnblocks function, where the segment number exceeds 256 and makes segno * RELSEG_SIZE overflow uint32's max value. So is it necessary to make the error message more readable, like 'The relation size is over max value ...', and elog in mdnblocks?

The scene we met is as below: the 'shl $0x18,%eax' (multiplying segno by the segment size in blocks, 2^24) shifts the value 256, copied from %ebx, out of the 32-bit register, turning segno * RELSEG_SIZE from 256 segments into block 0.

   0x00c2cc51 <+289>:	callq  0xc657f0
   0x00c2cc56 <+294>:	mov    -0x8(%r15),%rdi
   0x00c2cc5a <+298>:	mov    %r15,%rsi
   0x00c2cc5d <+301>:	mov    %eax,%r14d
   0x00c2cc60 <+304>:	mov    0x10(%rdi),%rax
   0x00c2cc64 <+308>:	callq  *0x8(%rax)
   0x00c2cc67 <+311>:	test   %r14d,%r14d
   0x00c2cc6a <+314>:	jns    0xc2cd68
=> 0x00c2cc70 <+320>:	add    $0x28,%rsp
   0x00c2cc74 <+324>:	mov    %ebx,%eax
   0x00c2cc76 <+326>:	shl    $0x18,%eax
   0x00c2cc79 <+329>:	pop    %rbx
   0x00c2cc7a <+330>:	pop    %r12
   0x00c2cc7c <+332>:	pop    %r13
   0x00c2cc7e <+334>:	pop    %r14
   0x00c2cc80 <+336>:	pop    %r15
   0x00c2cc82 <+338>:	pop    %rbp
   0x00c2cc83 <+339>:	retq