On Thu, Dec 22, 2016 at 8:44 PM, Kohei KaiGai wrote:
2016-12-23 8:23 GMT+09:00 Robert Haas :
> On Wed, Dec 7, 2016 at 10:44 PM, Kohei KaiGai wrote:
> Handling objects >1GB at all seems like the harder part of the
> problem.
I can mostly see your point. Does the last line above refer to the
amount of data in an object >1GB, even if the "super-varlena" format
...

2016-12-23 8:24 GMT+09:00 Robert Haas :
On Wed, Dec 7, 2016 at 11:01 PM, Kohei KaiGai wrote:
> Regardless of the ExpandedObject, does the flattened format need to
> contain fully flattened data chunks?
I suspect it does, and I think that's why this isn't going to get very
far without a super-varlena format.
2016-12-08 16:11 GMT+09:00 Craig Ringer :
> On 8 December 2016 at 07:36, Tom Lane wrote:
>> At a higher level, I don't understand exactly where such giant
>> ExpandedObjects would come from. (As you point out, there's certainly
>> no easy way for a client to ship over the data for one.) So this feels
>> like a ...
>> Likewise, the need for clients to be able to transfer data in chunks
>> gets pressing well before you get to 1GB. So there's a lot here that
>> really should be worked on before we try to surmount that barrier.
> Yeah. I tend to ...
2016-12-08 8:36 GMT+09:00 Tom Lane :
> Robert Haas writes:
>> On Wed, Dec 7, 2016 at 8:50 AM, Kohei KaiGai wrote:
>>> I'd like to propose a new optional type handler 'typserialize' to
>>> serialize an in-memory varlena structure (that can have indirect
>>> references) to on-disk format.
>
> Maybe. I think where KaiGai-san is trying to go with this is being
> able to turn an ExpandedObject (which could contain very large amounts
> of data) directly into a toast pointer or vice versa. There's nothing
> really preventing a TOAST OID from having more than 1GB of data
> ...
On Wed, Dec 7, 2016 at 8:50 AM, Kohei KaiGai wrote:
> This is a design proposal for a matrix data type which can be larger
> than 1GB. It needs not only new data type support but also a platform
> enhancement, because the existing varlena format has a hard limit
> (1GB). We had a discussion about this topic at the developer
> unconference at Tokyo/Akihabara, the day before ...
>
> I'd like to propose a new optional type handler 'typserialize' to
> serialize an in-memory varlena structure (that can have indirect
> references) to on-disk format.
> If provided, it shall be invoked at the head of ...
>
> If and when this structure is fetched from the tuple, its @ptr_block
> is initialized to NULL. Once it is supplied to a function which
> references a part of the blocks, the type-specific code can load the
> sub-matrix from the toast relation, then update the @ptr_block not ...