Hi all,

We seem to have tripped a bug in HDF5 1.8.14, unless it is an intentional
limitation?

We have a scalar dataset with type VLEN { STRUCT {
double-precision-floating-point, C-string } }. The in-memory representation
uses 24 bytes of storage per element while the file representation uses 12,
so HDF5 internally selects a "struct(no-opt)" conversion path for reads and
writes.
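
For reference, here is a sketch of how such a type can be constructed. The
string member is assumed to be a nested VLEN of chars (which matches the
24-byte in-memory element); the member names are illustrative:

    /* Sketch of the datatype construction (member names illustrative;
     * the string member is a nested VLEN of chars, matching the
     * 24-byte in-memory element). */
    #include "hdf5.h"

    typedef struct {
        double value;   /* 8 bytes in memory */
        hvl_t  text;    /* nested VLEN of char: {len, ptr}, 16 bytes */
    } pair_t;

    hid_t make_vlen_struct_type(void)
    {
        hid_t chr_vl_tid = H5Tvlen_create(H5T_NATIVE_CHAR);

        hid_t cmp_tid = H5Tcreate(H5T_COMPOUND, sizeof(pair_t));
        H5Tinsert(cmp_tid, "value", HOFFSET(pair_t, value), H5T_NATIVE_DOUBLE);
        H5Tinsert(cmp_tid, "text",  HOFFSET(pair_t, text),  chr_vl_tid);

        /* Top-level VLEN { STRUCT { double, VLEN char } }. */
        return H5Tvlen_create(cmp_tid);
    }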

We initially write 100x { number, string } pairs into the dataset, and that has 
no problems.
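
For concreteness, a minimal sketch of how the scalar write is performed,
assuming the type above (names are illustrative); the overwrite described
next repeats this with 99 elements:

    /* Minimal sketch of the scalar write (names illustrative). */
    static herr_t write_pairs(hid_t dset, hid_t mem_vlen_tid,
                              pair_t *pairs, size_t n)
    {
        /* The dataset is scalar: one element, itself a VLEN of n pairs. */
        hvl_t buf;
        buf.len = n;
        buf.p   = pairs;
        return H5Dwrite(dset, mem_vlen_tid, H5S_ALL, H5S_ALL,
                        H5P_DEFAULT, &buf);
    }

    /* write_pairs(dset, vlen_tid, pairs, 100);   initial write: fine
     * write_pairs(dset, vlen_tid, pairs,  99);   overwrite: trips the error */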

We then *over-write* the dataset with 99x { number, string } pairs. This
triggers an error: after the write of the new 99 elements has actually
completed internally, the following code is run from H5Tconv.c:3371...

    if(!noop_conv) {
        /* For nested VL case, free leftover heap objects from the deeper
         * level if the length of new data elements is shorter than the
         * old data elements. */
        if(nested && seq_len < bg_seq_len) {
            size_t parent_seq_len;
            const uint8_t *tmp;
            size_t u;

            /* TMP_P is reset each time in the loop because DST_BASE_SIZE
             * may include some data in addition to VL info. - SLU */
            for(u = seq_len; u < bg_seq_len; u++) {
                tmp = (uint8_t *)tmp_buf + u * dst_base_size;
                UINT32DECODE(tmp, parent_seq_len);
                if(parent_seq_len > 0) {
                    H5F_addr_decode(dst->shared->u.vlen.f, &tmp, &(parent_hobjid.addr));
                    UINT32DECODE(tmp, parent_hobjid.idx);
                    if(H5HG_remove(dst->shared->u.vlen.f, dxpl_id, &parent_hobjid) < 0)
                        HGOTO_ERROR(H5E_DATATYPE, H5E_WRITEERROR, FAIL, "Unable to remove heap object")
                } /* end if */
            } /* end for */
        } /* end if */
    } /* end if */

Conceptually this makes sense: the trailing, no-longer-used elements should
have their nested VLEN parts freed. However, the pointer arithmetic for "tmp"
points to the start of the { number, string } PAIR, so UINT32DECODE fills the
parent_seq_len variable not with the length of the string (the nested VLEN)
but with the first 4 bytes of the floating-point number in the structure.
This looks like a bug; it seems there should be some generic processing here,
similar to how H5T_convert() is called recursively to prepare the write,
since there may be more than one nested VLEN member.
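
A tiny standalone illustration of the misread; the offsets and
little-endianness here are assumptions for illustration only:

    /* Decoding a uint32 at the start of the pair yields bytes of the
     * double, not the nested VLEN length. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        /* Pretend this is one background element: double first,
         * then the 4-byte VL sequence length. */
        unsigned char elem[12];
        double   d      = 3.14;
        uint32_t vl_len = 5;              /* the real nested-VLEN length */

        memcpy(elem,     &d,      sizeof d);
        memcpy(elem + 8, &vl_len, sizeof vl_len);

        uint32_t misread, wanted;
        memcpy(&misread, elem,     4);    /* what the loop decodes */
        memcpy(&wanted,  elem + 8, 4);    /* what it presumably needs */

        printf("misread=%u wanted=%u\n", misread, wanted);
        return 0;
    }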

I was wondering: is HDF5 currently supposed to support VLEN-STRUCT-VLEN
nested types in scalar datasets?

Thanks in advance,
Mark Hodson


