Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package hdf5 for openSUSE:Factory checked in at 2022-11-16 15:43:26
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/hdf5 (Old)
 and      /work/SRC/openSUSE:Factory/.hdf5.new.1597 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "hdf5"

Wed Nov 16 15:43:26 2022 rev:80 rq:1035905 version:1.12.2

Changes:
--------
--- /work/SRC/openSUSE:Factory/hdf5/hdf5.changes        2022-09-21 14:44:04.194018080 +0200
+++ /work/SRC/openSUSE:Factory/.hdf5.new.1597/hdf5.changes      2022-11-16 15:43:38.603892655 +0100
@@ -1,0 +2,35 @@
+Tue Nov 15 04:52:12 UTC 2022 - Atri Bhattacharya <badshah...@gmail.com>
+
+- Add missing patch to specfile:
+  Fix-error-message-not-the-name-but-the-link-information-is-parsed.patch
+
+-------------------------------------------------------------------
+Sat Oct 15 13:29:22 UTC 2022 - Egbert Eich <e...@suse.com>
+
+- Fix CVEs:
+  * CVE-2021-46244 (bsc#1195215)
+    Compound-datatypes-may-not-have-members-of-size-0.patch
+  * CVE-2018-13867 (bsc#1101906)
+    Validate-location-offset-of-the-accumulated-metadata-when-comparing.patch
+  * CVE-2018-16438 (bsc#1107069)
+    Make-sure-info-block-for-external-links-has-at-least-3-bytes.patch
+  * CVE-2020-10812 (bsc#1167400)
+    Hot-fix-for-CVE-2020-10812.patch
+  * CVE-2021-45830 (bsc#1194375)
+    H5O_fsinfo_decode-Make-more-resilient-to-out-of-bounds-read.patch
+  * CVE-2019-8396 (bsc#1125882)
+    H5O__pline_decode-Make-more-resilient-to-out-of-bounds-read.patch
+  * CVE-2018-11205 (bsc#1093663)
+    Pass-compact-chunk-size-info-to-ensure-requested-elements-are-within-bounds.patch
+  * CVE-2021-46242 (bsc#1195212)
+    When-evicting-driver-info-block-NULL-the-corresponding-entry.patch
+  * CVE-2021-45833 (bsc#1194366)
+    Report-error-if-dimensions-of-chunked-storage-in-data-layout-2.patch
+  * CVE-2018-14031 (bsc#1101475)
+    H5O_dtype_decode_helper-Parent-of-enum-needs-to-have-same-size-as-enum-itself.patch
+  * CVE-2018-17439 (bsc#1111598)
+    H5IMget_image_info-H5Sget_simple_extent_dims-does-not-exceed-array-size.patch
+- Fix an error message:
+    Fix-error-message-not-the-name-but-the-link-information-is-parsed.patch
+
+-------------------------------------------------------------------

New:
----
  Compound-datatypes-may-not-have-members-of-size-0.patch
  Fix-error-message-not-the-name-but-the-link-information-is-parsed.patch
  H5IMget_image_info-H5Sget_simple_extent_dims-does-not-exceed-array-size.patch
  H5O__pline_decode-Make-more-resilient-to-out-of-bounds-read.patch
  H5O_dtype_decode_helper-Parent-of-enum-needs-to-have-same-size-as-enum-itself.patch
  H5O_fsinfo_decode-Make-more-resilient-to-out-of-bounds-read.patch
  Hot-fix-for-CVE-2020-10812.patch
  Make-sure-info-block-for-external-links-has-at-least-3-bytes.patch
  Pass-compact-chunk-size-info-to-ensure-requested-elements-are-within-bounds.patch
  Report-error-if-dimensions-of-chunked-storage-in-data-layout-2.patch
  Validate-location-offset-of-the-accumulated-metadata-when-comparing.patch
  When-evicting-driver-info-block-NULL-the-corresponding-entry.patch

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ hdf5.spec ++++++
--- /var/tmp/diff_new_pack.jOK1jW/_old  2022-11-16 15:43:39.451895730 +0100
+++ /var/tmp/diff_new_pack.jOK1jW/_new  2022-11-16 15:43:39.459895759 +0100
@@ -1,5 +1,5 @@
 #
-# spec file
+# spec file for package hdf5
 #
 # Copyright (c) 2022 SUSE LLC
 #
@@ -436,8 +436,21 @@
 # Could be ported but it's unknown if it's still needed
 Patch7:         hdf5-mpi.patch
 Patch8:         Disable-phdf5-tests.patch
+Patch9:         Fix-error-message-not-the-name-but-the-link-information-is-parsed.patch
 # Imported from Fedora, strip flags from h5cc wrapper
 Patch10:        hdf5-wrappers.patch
+Patch101:       H5O_fsinfo_decode-Make-more-resilient-to-out-of-bounds-read.patch
+Patch102:       H5O__pline_decode-Make-more-resilient-to-out-of-bounds-read.patch
+Patch103:       H5O_dtype_decode_helper-Parent-of-enum-needs-to-have-same-size-as-enum-itself.patch
+Patch104:       Report-error-if-dimensions-of-chunked-storage-in-data-layout-2.patch
+Patch105:       When-evicting-driver-info-block-NULL-the-corresponding-entry.patch
+Patch106:       Pass-compact-chunk-size-info-to-ensure-requested-elements-are-within-bounds.patch
+Patch107:       Validate-location-offset-of-the-accumulated-metadata-when-comparing.patch
+Patch108:       Make-sure-info-block-for-external-links-has-at-least-3-bytes.patch
+Patch109:       Hot-fix-for-CVE-2020-10812.patch
+Patch110:       Compound-datatypes-may-not-have-members-of-size-0.patch
+Patch111:       H5IMget_image_info-H5Sget_simple_extent_dims-does-not-exceed-array-size.patch
+
 BuildRequires:  fdupes
 %if 0%{?use_sz2}
 BuildRequires:  libsz2-devel
@@ -678,7 +691,19 @@
 %patch6 -p1
 # %%patch7 -p1
 %patch8 -p1
+%patch9 -p1
 %patch10 -p1
+%patch101 -p1
+%patch102 -p1
+%patch103 -p1
+%patch104 -p1
+%patch105 -p1
+%patch106 -p1
+%patch107 -p1
+%patch108 -p1
+%patch109 -p1
+%patch110 -p1
+%patch111 -p1
 
 %if %{without hpc}
 # baselibs looks different for different flavors - generate it on the fly

++++++ Compound-datatypes-may-not-have-members-of-size-0.patch ++++++
From: Egbert Eich <e...@suse.com>
Date: Wed Oct 5 15:47:54 2022 +0200
Subject: Compound datatypes may not have members of size 0
Patch-mainline: Not yet
Git-repo: ssh://eich@192.168.122.1:/home/eich/sources/HPC/hdf5
Git-commit: 88ea94d38fdfecba173dbea18502a5f82a46601b
References: 

A member size of 0 may lead to an FPE later on, as reported in
CVE-2021-46244. To avoid this, check for a zero size as soon as the
member is decoded.
This should probably be done in H5O_dtype_decode_helper() already;
however, it is not clear whether all sizes are expected to be != 0.
This fixes CVE-2021-46244.
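
As a minimal illustration, consider this standalone sketch (not HDF5
source; all names are made up): the copy path rescales a member's size
by dividing by the old member type's size, so a zero size would trap.

#include <stdio.h>
#include <stddef.h>

struct member { size_t size; };

static int adjust_member_size(struct member *m, size_t new_type_size,
                              size_t old_type_size)
{
    if (old_type_size == 0)
        return -1; /* the early check rejects zero-sized members */
    m->size = (m->size * new_type_size) / old_type_size; /* FPE if 0 */
    return 0;
}

int main(void)
{
    struct member m = { 4 };
    if (adjust_member_size(&m, 8, 0) < 0)
        fprintf(stderr, "rejected zero-sized member type\n");
    return 0;
}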

Signed-off-by: Egbert Eich <e...@suse.com>
Signed-off-by: Egbert Eich <e...@suse.de>
---
 src/H5Odtype.c | 6 ++++++
 src/H5T.c      | 2 ++
 2 files changed, 8 insertions(+)
diff --git a/src/H5Odtype.c b/src/H5Odtype.c
index 9af79f4e9a..d35fc65322 100644
--- a/src/H5Odtype.c
+++ b/src/H5Odtype.c
@@ -333,6 +333,12 @@ H5O__dtype_decode_helper(unsigned *ioflags /*in,out*/, const uint8_t **pp, H5T_t
                     H5MM_xfree(dt->shared->u.compnd.memb);
                     HGOTO_ERROR(H5E_DATATYPE, H5E_CANTDECODE, FAIL, "unable to decode member type")
                 } /* end if */
+                if (temp_type->shared->size == 0) {
+                    for (j = 0; j <= i; j++)
+                        H5MM_xfree(dt->shared->u.compnd.memb[j].name);
+                    H5MM_xfree(dt->shared->u.compnd.memb);
+                    HGOTO_ERROR(H5E_DATATYPE, H5E_CANTDECODE, FAIL, "invalid field size in member type")
+                }
 
                 /* Upgrade the version if we can and it is necessary */
                 if (can_upgrade && temp_type->shared->version > version) {
diff --git a/src/H5T.c b/src/H5T.c
index 3bb220ac26..04b96c5676 100644
--- a/src/H5T.c
+++ b/src/H5T.c
@@ -3591,6 +3591,8 @@ H5T__complete_copy(H5T_t *new_dt, const H5T_t *old_dt, H5T_shared_t *reopened_fo
                     if (new_dt->shared->u.compnd.memb[i].type->shared->size !=
                         old_dt->shared->u.compnd.memb[old_match].type->shared->size) {
                         /* Adjust the size of the member */
+                        if (old_dt->shared->u.compnd.memb[old_match].size == 0)
+                            HGOTO_ERROR(H5E_DATATYPE, H5E_BADVALUE, FAIL, "invalid field size in datatype")
                         new_dt->shared->u.compnd.memb[i].size =
                             (old_dt->shared->u.compnd.memb[old_match].size * tmp->shared->size) /
                             old_dt->shared->u.compnd.memb[old_match].type->shared->size;

++++++ Fix-error-message-not-the-name-but-the-link-information-is-parsed.patch ++++++
From: Egbert Eich <e...@suse.com>
Date: Sun Oct 9 08:08:24 2022 +0200
Subject: Fix error message: not the name but the link information is parsed
Patch-mainline: Not yet
Git-repo: ssh://eich@192.168.122.1:/home/eich/sources/HPC/hdf5
Git-commit: 7b0b8bc5703ace47aec51d7f60c1149cd3e383b1
References: 

Signed-off-by: Egbert Eich <e...@suse.com>
Signed-off-by: Egbert Eich <e...@suse.de>
---
 src/H5Olink.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/H5Olink.c b/src/H5Olink.c
index 51c44a36b0..ee2a413dc1 100644
--- a/src/H5Olink.c
+++ b/src/H5Olink.c
@@ -245,7 +245,7 @@ H5O__link_decode(H5F_t *f, H5O_t H5_ATTR_UNUSED *open_oh, unsigned H5_ATTR_UNUSE
                 /* Make sure that length doesn't exceed buffer size, which could
                    occur when the file is corrupted */
                 if (p + len > p_end)
-                    HGOTO_ERROR(H5E_OHDR, H5E_OVERFLOW, NULL, "name length causes read past end of buffer")
+                    HGOTO_ERROR(H5E_OHDR, H5E_OVERFLOW, NULL, "link information length causes read past end of buffer")
 
                 if (NULL == (lnk->u.ud.udata = H5MM_malloc((size_t)len)))
                     HGOTO_ERROR(H5E_RESOURCE, H5E_NOSPACE, NULL, "memory allocation failed")

++++++ H5IMget_image_info-H5Sget_simple_extent_dims-does-not-exceed-array-size.patch ++++++
From: Egbert Eich <e...@suse.com>
Date: Tue Sep 27 10:29:56 2022 +0200
Subject: H5IMget_image_info: H5Sget_simple_extent_dims() does not exceed array size
Patch-mainline: Not yet
Git-repo: ssh://eich@192.168.122.1:/home/eich/sources/HPC/hdf5
Git-commit: c1baab0937c8956a15efc41240f68d573c7b7324
References: 

Malformed hdf5 files may provide more dimensions than the array dim[]
is able to hold. Check the number of dimensions first by calling
H5Sget_simple_extent_dims() with NULL for both the 'dims' and 'maxdims'
arguments; the function then returns only the number of dimensions.

This fixes CVE-2018-17439.
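
As a sketch, the calling pattern looks like this (assumes an open
dataspace id; IMAGE24_RANK is defined locally here, since the header
that provides it is not shown in this patch):

#include "hdf5.h"

#define IMAGE24_RANK 3 /* fixed bound used by the image API */

/* Return 0 if the rank fits dims[], else -1. The first call passes
 * NULL buffers, so it only reports the number of dimensions. */
static int get_dims_checked(hid_t sid, hsize_t dims[IMAGE24_RANK])
{
    if (H5Sget_simple_extent_dims(sid, NULL, NULL) > IMAGE24_RANK)
        return -1; /* malformed file: more dims than dims[] holds */
    if (H5Sget_simple_extent_dims(sid, dims, NULL) < 0)
        return -1;
    return 0;
}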

Signed-off-by: Egbert Eich <e...@suse.com>
Signed-off-by: Egbert Eich <e...@suse.de>
---
 hl/src/H5IM.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/hl/src/H5IM.c b/hl/src/H5IM.c
index ff10d573c7..e37c696e25 100644
--- a/hl/src/H5IM.c
+++ b/hl/src/H5IM.c
@@ -283,6 +283,8 @@ H5IMget_image_info(hid_t loc_id, const char *dset_name, hsize_t *width, hsize_t
     if ((sid = H5Dget_space(did)) < 0)
         goto out;
 
+    if (H5Sget_simple_extent_dims(sid, NULL, NULL) > IMAGE24_RANK)
+        goto out;
     /* Get dimensions */
     if (H5Sget_simple_extent_dims(sid, dims, NULL) < 0)
         goto out;

++++++ H5O__pline_decode-Make-more-resilient-to-out-of-bounds-read.patch ++++++
From: Egbert Eich <e...@suse.com>
Date: Tue Oct 4 23:09:01 2022 +0200
Subject: H5O__pline_decode() Make more resilient to out-of-bounds read
Patch-mainline: Not yet
Git-repo: ssh://eich@192.168.122.1:/home/eich/sources/HPC/hdf5
Git-commit: 35b798ca7542ce45ef016859b8e70d57b7f89cfe
References: 

Malformed hdf5 files may have truncated content which does not match
the expected size. When this function attempts to decode these, it will
read past the end of the allocated space, which may lead to a crash.
Make sure each element is within bounds before reading.

This fixes CVE-2019-8396.
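
The idea behind the VERIFY_LIMIT macro below, as a standalone sketch
with hypothetical names (p_end points at the last valid byte, as in
the patch):

#include <stdint.h>
#include <stddef.h>

/* Decode a little-endian uint16 only if at least 2 bytes remain. */
static int decode_uint16_checked(const uint8_t **pp, const uint8_t *p_end,
                                 uint16_t *out)
{
    if ((size_t)(p_end - *pp) + 1 < 2)
        return -1; /* would read past the end of the buffer */
    *out = (uint16_t)((*pp)[0] | ((uint16_t)(*pp)[1] << 8));
    *pp += 2;
    return 0;
}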

Signed-off-by: Egbert Eich <e...@suse.com>
Signed-off-by: Egbert Eich <e...@suse.de>
---
 src/H5Opline.c  | 17 +++++++++++++++--
 src/H5private.h |  3 +++
 2 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/src/H5Opline.c b/src/H5Opline.c
index ffc4557ffc..a532aa4512 100644
--- a/src/H5Opline.c
+++ b/src/H5Opline.c
@@ -110,6 +110,14 @@ H5FL_DEFINE(H5O_pline_t);
  *
  *-------------------------------------------------------------------------
  */
+static char err[] = "ran off the end of the buffer: current p = %p, p_end = %p";
+
+#define VERIFY_LIMIT(p,s,l)                                        \
+  if (p + s - 1 > l) {                                             \
+    HCOMMON_ERROR(H5E_RESOURCE, H5E_NOSPACE, err, p + s, l);       \
+    HGOTO_DONE(NULL)                                               \
+  };
+
 static void *
 H5O__pline_decode(H5F_t H5_ATTR_UNUSED *f, H5O_t H5_ATTR_UNUSED *open_oh, unsigned H5_ATTR_UNUSED mesg_flags,
                   unsigned H5_ATTR_UNUSED *ioflags, size_t p_size, const uint8_t *p)
@@ -159,6 +167,7 @@ H5O__pline_decode(H5F_t H5_ATTR_UNUSED *f, H5O_t H5_ATTR_UNUSED *open_oh, unsign
     /* Decode filters */
     for (i = 0, filter = &pline->filter[0]; i < pline->nused; i++, filter++) {
         /* Filter ID */
+        VERIFY_LIMIT(p, 6, p_end) /* 6 bytes minimum */
         UINT16DECODE(p, filter->id);
 
         /* Length of filter name */
@@ -168,6 +177,7 @@ H5O__pline_decode(H5F_t H5_ATTR_UNUSED *f, H5O_t H5_ATTR_UNUSED *open_oh, unsign
             UINT16DECODE(p, name_length);
             if (pline->version == H5O_PLINE_VERSION_1 && name_length % 8)
                 HGOTO_ERROR(H5E_PLINE, H5E_CANTLOAD, NULL, "filter name length is not a multiple of eight")
+            VERIFY_LIMIT(p, 4, p_end) /* with name_length 4 bytes to go */
         } /* end if */
 
         /* Filter flags */
@@ -179,9 +189,12 @@ H5O__pline_decode(H5F_t H5_ATTR_UNUSED *f, H5O_t H5_ATTR_UNUSED *open_oh, unsign
         /* Filter name, if there is one */
         if (name_length) {
             size_t actual_name_length; /* Actual length of name */
-
+            size_t len = (size_t)(p_end - p + 1);
             /* Determine actual name length (without padding, but with null terminator) */
-            actual_name_length = HDstrlen((const char *)p) + 1;
+            actual_name_length = HDstrnlen((const char *)p, len);
+            if (actual_name_length == len)
+                HGOTO_ERROR(H5E_RESOURCE, H5E_NOSPACE, NULL, "filter name not null terminated")
+            actual_name_length += 1; /* include \0 byte */
             HDassert(actual_name_length <= name_length);
 
             /* Allocate space for the filter name, or use the internal buffer */
diff --git a/src/H5private.h b/src/H5private.h
index bc00f120d2..3285c36441 100644
--- a/src/H5private.h
+++ b/src/H5private.h
@@ -1485,6 +1485,9 @@ H5_DLL H5_ATTR_CONST int Nflock(int fd, int operation);
 #ifndef HDstrlen
 #define HDstrlen(S) strlen(S)
 #endif
+#ifndef HDstrnlen
+#define HDstrnlen(S,L) strnlen(S,L)
+#endif
 #ifndef HDstrncat
 #define HDstrncat(X, Y, Z) strncat(X, Y, Z)
 #endif

++++++ H5O_dtype_decode_helper-Parent-of-enum-needs-to-have-same-size-as-enum-itself.patch ++++++
From: Egbert Eich <e...@suse.com>
Date: Wed Sep 28 14:54:58 2022 +0200
Subject: H5O_dtype_decode_helper: Parent of enum needs to have same size as enum itself
Patch-mainline: Not yet
Git-repo: ssh://eich@192.168.122.1:/home/eich/sources/HPC/hdf5
Git-commit: d39a27113ef75058f236b0606a74b4af5767c4e7
References: 

The size of the enumeration values is determined by the size of the parent.
Functions accessing the enumeration values use the size of the enumeration
to determine the size of each element and how much data to copy. Thus the
size of the enumeration and its parent need to match.
Check here to avoid unpleasant surprises later.

This fixes CVE-2018-14031.
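
An illustrative standalone sketch (not HDF5 source) of the
out-of-bounds pattern such a mismatch enables:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t parent_size = 2, enum_size = 8, nmembs = 4;
    unsigned char out[8];
    /* The value table is laid out using the parent's element size... */
    unsigned char *table = calloc(nmembs, parent_size);

    if (table == NULL)
        return 1;
    if (enum_size != parent_size) { /* the added check stops here */
        free(table);
        return 1;
    }
    /* ...but a lookup would step through it with the enum's size. */
    memcpy(out, table + (nmembs - 1) * enum_size, enum_size);
    free(table);
    return 0;
}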

Signed-off-by: Egbert Eich <e...@suse.com>
Signed-off-by: Egbert Eich <e...@suse.de>
---
 src/H5Odtype.c | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/src/H5Odtype.c b/src/H5Odtype.c
index 9af79f4e9a..dc2b904362 100644
--- a/src/H5Odtype.c
+++ b/src/H5Odtype.c
@@ -472,6 +472,9 @@ H5O__dtype_decode_helper(unsigned *ioflags /*in,out*/, const uint8_t **pp, H5T_t
             if (H5O__dtype_decode_helper(ioflags, pp, dt->shared->parent) < 0)
                 HGOTO_ERROR(H5E_DATATYPE, H5E_CANTDECODE, FAIL, "unable to decode parent datatype")
 
+            if (dt->shared->parent->shared->size != dt->shared->size)
+                HGOTO_ERROR(H5E_DATATYPE, H5E_CANTDECODE, FAIL, "ENUM size does not match parent")
+
             /* Check if the parent of this enum has a version greater than the
              * enum itself. */
             H5O_DTYPE_CHECK_VERSION(dt, version, dt->shared->parent->shared->version, ioflags, "enum", FAIL)

++++++ H5O_fsinfo_decode-Make-more-resilient-to-out-of-bounds-read.patch ++++++
From: Egbert Eich <e...@suse.com>
Date: Wed Oct 5 07:17:24 2022 +0200
Subject: H5O_fsinfo_decode() Make more resilient to out-of-bounds read
Patch-mainline: Not yet
Git-repo: ssh://eich@192.168.122.1:/home/eich/sources/HPC/hdf5
Git-commit: 8aee14b3a19858a08e3fabdef6ff925b47d4ce2c
References: 

Malformed hdf5 files may have truncated content which does not match
the expected size. When this function attempts to decode these, it will
read past the end of the allocated space, which may lead to a crash.
Make sure each element is within bounds before reading.

This fixes CVE-2021-45830.
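
Unlike the per-field idea sketched for H5O__pline_decode() above, the
VERIFY_LIMIT calls below cover whole groups of fixed-width fields at
once. A compilable sketch of that variant, with hypothetical names
(p_end points at the last valid byte):

#include <stdint.h>
#include <stddef.h>

/* Does a group of fields totalling `need` bytes fit before p_end? */
static int fits(const uint8_t *p, const uint8_t *p_end, size_t need)
{
    return (size_t)(p_end - p) + 1 >= need;
}

/* usage sketch: version(1) + strategy(1) + two 8-byte lengths:
 *     if (!fits(p, p_end, 1 + 1 + 2 * 8))
 *         return NULL;
 */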

Signed-off-by: Egbert Eich <e...@suse.com>
Signed-off-by: Egbert Eich <e...@suse.de>
---
 src/H5Ofsinfo.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/src/H5Ofsinfo.c b/src/H5Ofsinfo.c
index 9f6514a291..15cbb5ae7b 100644
--- a/src/H5Ofsinfo.c
+++ b/src/H5Ofsinfo.c
@@ -88,6 +88,13 @@ H5FL_DEFINE_STATIC(H5O_fsinfo_t);
  *
  *-------------------------------------------------------------------------
  */
+static char err[] = "ran off end of input buffer while decoding";
+#define VERIFY_LIMIT(p,s,l)                                      \
+  if (p + s - 1 > l) {                                            \
+    HCOMMON_ERROR(H5E_RESOURCE, H5E_NOSPACE, err);                \
+    HGOTO_DONE(NULL)                                              \
+  }
+
 static void *
 H5O__fsinfo_decode(H5F_t *f, H5O_t H5_ATTR_UNUSED *open_oh, unsigned H5_ATTR_UNUSED mesg_flags,
                    unsigned H5_ATTR_UNUSED *ioflags, size_t p_size, const uint8_t *p)
@@ -112,6 +119,7 @@ H5O__fsinfo_decode(H5F_t *f, H5O_t H5_ATTR_UNUSED *open_oh, unsigned H5_ATTR_UNU
         fsinfo->fs_addr[ptype - 1] = HADDR_UNDEF;
 
     /* Version of message */
+    VERIFY_LIMIT(p,1,p_end)
     vers = *p++;
 
     if (vers == H5O_FSINFO_VERSION_0) {
@@ -125,6 +133,7 @@ H5O__fsinfo_decode(H5F_t *f, H5O_t H5_ATTR_UNUSED *open_oh, unsigned H5_ATTR_UNU
         fsinfo->pgend_meta_thres    = H5F_FILE_SPACE_PGEND_META_THRES;
         fsinfo->eoa_pre_fsm_fsalloc = HADDR_UNDEF;
 
+        VERIFY_LIMIT(p, 1 + H5F_SIZEOF_SIZE(f), p_end);
         strategy = (H5F_file_space_type_t)*p++; /* File space strategy */
         H5F_DECODE_LENGTH(f, p, threshold);     /* Free-space section threshold */
 
@@ -170,6 +179,7 @@ H5O__fsinfo_decode(H5F_t *f, H5O_t H5_ATTR_UNUSED *open_oh, unsigned H5_ATTR_UNU
         HDassert(vers >= H5O_FSINFO_VERSION_1);
 
         fsinfo->version  = vers;
+        VERIFY_LIMIT(p, 1 + 1 + 2 * H5F_SIZEOF_SIZE(f) + 2 + H5F_SIZEOF_ADDR(f), p_end);
         fsinfo->strategy = (H5F_fspace_strategy_t)*p++; /* File space strategy */
         fsinfo->persist  = *p++;                        /* Free-space persist or not */
         H5F_DECODE_LENGTH(f, p, fsinfo->threshold);     /* Free-space section threshold */
@@ -181,9 +191,11 @@ H5O__fsinfo_decode(H5F_t *f, H5O_t H5_ATTR_UNUSED *open_oh, unsigned H5_ATTR_UNU
 
         /* Decode addresses of free space managers, if persisting */
         if (fsinfo->persist)
-            for (ptype = H5F_MEM_PAGE_SUPER; ptype < H5F_MEM_PAGE_NTYPES; ptype++)
+            for (ptype = H5F_MEM_PAGE_SUPER; ptype < H5F_MEM_PAGE_NTYPES; ptype++) {
+                VERIFY_LIMIT(p, H5F_SIZEOF_SIZE(f), p_end);
                 H5F_addr_decode(f, &p, &(fsinfo->fs_addr[ptype - 1]));
 
+            }
         fsinfo->mapped = FALSE;
     }
 

++++++ Hot-fix-for-CVE-2020-10812.patch ++++++
From: Egbert Eich <e...@suse.com>
Date: Wed Oct 5 09:44:02 2022 +0200
Subject: Hot fix for CVE-2020-10812
Patch-mainline: Not yet
Git-repo: ssh://eich@192.168.122.1:/home/eich/sources/HPC/hdf5
Git-commit: 2465fc41d208d57eb0d7d025286a81664148fbaf
References: 

CVE-2020-10812 unveils a more fundamental design flaw in H5F__dest():
this function returns FAIL if one of multiple operations fails (in this
case H5AC_prep_for_file_close()), while it still proceeds to prepare the
close operation, free the 'shared' member in struct H5F_t and ultimately
deallocate the structure itself.
When H5F__dest() signals FAIL back to the caller, the caller itself
(H5F_try_close() in this case) will fail. This failure is signalled
up the stack, thus the file will not be considered closed and another
attempt will be made to close it, at the latest in the exit handler.
That next attempt, however, will use the already deallocated H5F_t
structure and the H5T_shared_t structure in its 'shared' member.
This fix papers over the failure of H5AC_prep_for_file_close() by not
changing the return status of H5F__dest() to FAIL. There are numerous
other places where the same pattern can occur.
This may call for a more fundamental solution.
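
An illustrative standalone sketch (not HDF5 source) of the failure
mode: a destroy routine that frees its object yet reports failure
invites the caller to retry the close on freed memory.

#include <stdlib.h>

struct shared_info;
struct file { struct shared_info *shared; };

static int file_dest(struct file *f)
{
    int ret = 0;
    /* ... a sub-operation fails; setting ret = -1 here is the flaw ... */
    free(f->shared); /* teardown proceeds regardless of that failure */
    f->shared = NULL;
    free(f);
    return ret; /* reporting failure would trigger a second close */
}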

Signed-off-by: Egbert Eich <e...@suse.com>
Signed-off-by: Egbert Eich <e...@suse.de>
---
 src/H5Fint.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/H5Fint.c b/src/H5Fint.c
index 9b5613972f..01faf33495 100644
--- a/src/H5Fint.c
+++ b/src/H5Fint.c
@@ -1413,7 +1413,7 @@ H5F__dest(H5F_t *f, hbool_t flush)
          */
         if (H5AC_prep_for_file_close(f) < 0)
             /* Push error, but keep going */
-            HDONE_ERROR(H5E_FILE, H5E_CANTFLUSH, FAIL, "metadata cache prep for close failed")
+            HDONE_ERROR(H5E_FILE, H5E_CANTFLUSH, ret_value, "metadata cache prep for close failed")
 
         /* Flush at this point since the file will be closed (phase 2).
          * Only try to flush the file if it was opened with write access, and if

++++++ Make-sure-info-block-for-external-links-has-at-least-3-bytes.patch ++++++
From: Egbert Eich <e...@suse.com>
Date: Sun Oct 9 08:07:23 2022 +0200
Subject: Make sure info block for external links has at least 3 bytes
Patch-mainline: Not yet
Git-repo: ssh://eich@192.168.122.1:/home/eich/sources/HPC/hdf5
Git-commit: 082bfe392b04b1137da9eabd1ecac76c212ab385
References: 

According to the specification, the information block for external links
contains 1 byte of version/flag information and two 0-terminated
strings, one for the object linked to and one for the full path.
Although not very useful, the minimum length of each string would be
one byte: just the terminator.

This fixes CVE-2018-16438.
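
As a sketch, the smallest well-formed information block under that
reading is three bytes (byte values illustrative):

const unsigned char min_udata[3] = {
    0x00, /* 1 byte of version/flag information */
    0x00, /* first 0-terminated string: empty, terminator only */
    0x00, /* second 0-terminated string: empty, terminator only */
};
/* Hence any H5L_TYPE_EXTERNAL link with len < 3 is malformed. */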

Signed-off-by: Egbert Eich <e...@suse.com>
Signed-off-by: Egbert Eich <e...@suse.de>
---
 src/H5Olink.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/src/H5Olink.c b/src/H5Olink.c
index 51c44a36b0..074744b022 100644
--- a/src/H5Olink.c
+++ b/src/H5Olink.c
@@ -241,6 +241,8 @@ H5O__link_decode(H5F_t *f, H5O_t H5_ATTR_UNUSED *open_oh, unsigned H5_ATTR_UNUSE
             /* A UD link.  Get the user-supplied data */
             UINT16DECODE(p, len)
             lnk->u.ud.size = len;
+            if (lnk->type == H5L_TYPE_EXTERNAL && len < 3)
+                HGOTO_ERROR(H5E_OHDR, H5E_OVERFLOW, NULL, "external link information length < 3")
             if (len > 0) {
                 /* Make sure that length doesn't exceed buffer size, which could
                    occur when the file is corrupted */

++++++ Pass-compact-chunk-size-info-to-ensure-requested-elements-are-within-bounds.patch ++++++
From: Egbert Eich <e...@suse.com>
Date: Sat Oct 1 15:13:52 2022 +0200
Subject: Pass compact chunk size info to ensure requested elements are within bounds
Patch-mainline: Not yet
Git-repo: ssh://eich@192.168.122.1:/home/eich/sources/HPC/hdf5
Git-commit: 18300944261a9fa8f0087f99d9176f3757b1ec38
References: 

To avoid reading/writing elements out of bounds of a compact chunk, pass
size info and check whether all elements are within the size before attempting
to read/write these elements. Such accesses can occur when accessing malformed
hdf5 files.

This fixes CVE-2018-11205.
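
The bounds check added to the readvv/writevv paths below boils down to
this sketch (hypothetical helper; assumes the offset/size arrays are
ordered so the final sequence ends last, as in the patch):

#include <stddef.h>

/* Reject I/O whose last sequence ends beyond the compact buffer. */
static int compact_request_in_bounds(size_t buf_size, const size_t *offsets,
                                     const size_t *sizes, size_t nseq)
{
    return nseq > 0 && buf_size >= offsets[nseq - 1] + sizes[nseq - 1];
}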

Signed-off-by: Egbert Eich <e...@suse.com>
Signed-off-by: Egbert Eich <e...@suse.de>
---
 src/H5Dchunk.c   | 34 +++++++++++++++++++++++++++-------
 src/H5Dcompact.c |  5 +++++
 src/H5Dpkg.h     |  1 +
 3 files changed, 33 insertions(+), 7 deletions(-)
diff --git a/src/H5Dchunk.c b/src/H5Dchunk.c
index e6bf26ce89..94ad392cb7 100644
--- a/src/H5Dchunk.c
+++ b/src/H5Dchunk.c
@@ -128,6 +128,7 @@ typedef struct H5D_rdcc_ent_t {
     H5F_block_t            chunk_block;              /*offset/length of chunk in file        */
     hsize_t                chunk_idx;                /*index of chunk in dataset             */
     uint8_t *              chunk;                    /*the unfiltered chunk data        */
+    size_t                 size;                     /*size of chunk        */
     unsigned               idx;                      /*index in hash table        */
     struct H5D_rdcc_ent_t *next;                     /*next item in doubly-linked list    */
     struct H5D_rdcc_ent_t *prev;                     /*previous item in doubly-linked list    */
@@ -303,7 +304,7 @@ static unsigned H5D__chunk_hash_val(const H5D_shared_t *shared, const hsize_t *s
 static herr_t   H5D__chunk_flush_entry(const H5D_t *dset, H5D_rdcc_ent_t *ent, hbool_t reset);
 static herr_t   H5D__chunk_cache_evict(const H5D_t *dset, H5D_rdcc_ent_t *ent, hbool_t flush);
 static void *   H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t relax,
-                                hbool_t prev_unfilt_chunk);
+                                hbool_t prev_unfilt_chunk, size_t *ret_size);
 static herr_t   H5D__chunk_unlock(const H5D_io_info_t *io_info, const H5D_chunk_ud_t *udata, hbool_t dirty,
                                   void *chunk, uint32_t naccessed);
 static herr_t   H5D__chunk_cache_prune(const H5D_t *dset, size_t size);
@@ -2480,6 +2481,7 @@ H5D__chunk_read(H5D_io_info_t *io_info, const H5D_type_info_t *type_info, hsize_
     uint32_t      src_accessed_bytes  = 0;       /* Total accessed size in a chunk */
     hbool_t       skip_missing_chunks = FALSE;   /* Whether to skip missing chunks */
     herr_t        ret_value           = SUCCEED; /*return value        */
+    size_t        chunk_size = 0;
 
     FUNC_ENTER_STATIC
 
@@ -2565,11 +2567,12 @@ H5D__chunk_read(H5D_io_info_t *io_info, const H5D_type_info_t *type_info, hsize_
                 src_accessed_bytes = chunk_info->chunk_points * (uint32_t)type_info->src_type_size;
 
                 /* Lock the chunk into the cache */
-                if (NULL == (chunk = H5D__chunk_lock(io_info, &udata, FALSE, FALSE)))
+                if (NULL == (chunk = H5D__chunk_lock(io_info, &udata, FALSE, FALSE, &chunk_size)))
                     HGOTO_ERROR(H5E_IO, H5E_READERROR, FAIL, "unable to read raw data chunk")
 
                 /* Set up the storage buffer information for this chunk */
                 cpt_store.compact.buf = chunk;
+                cpt_store.compact.size = chunk_size;
 
                 /* Point I/O info at contiguous I/O info for this chunk */
                 chk_io_info = &cpt_io_info;
@@ -2629,6 +2632,7 @@ H5D__chunk_write(H5D_io_info_t *io_info, const H5D_type_info_t *type_info, hsize
     hbool_t       cpt_dirty;                    /* Temporary placeholder for compact storage "dirty" flag */
     uint32_t      dst_accessed_bytes = 0;       /* Total accessed size in a chunk */
     herr_t        ret_value          = SUCCEED; /* Return value        */
+    size_t        chunk_size;
 
     FUNC_ENTER_STATIC
 
@@ -2699,11 +2703,12 @@ H5D__chunk_write(H5D_io_info_t *io_info, const H5D_type_info_t *type_info, hsize
                 entire_chunk = FALSE;
 
             /* Lock the chunk into the cache */
-            if (NULL == (chunk = H5D__chunk_lock(io_info, &udata, entire_chunk, FALSE)))
+            if (NULL == (chunk = H5D__chunk_lock(io_info, &udata, entire_chunk, FALSE, &chunk_size)))
                 HGOTO_ERROR(H5E_IO, H5E_READERROR, FAIL, "unable to read raw data chunk")
 
             /* Set up the storage buffer information for this chunk */
             cpt_store.compact.buf = chunk;
+            cpt_store.compact.size = chunk_size;
 
             /* Point I/O info at main I/O info for this chunk */
             chk_io_info = &cpt_io_info;
@@ -3714,7 +3719,7 @@ done:
  *-------------------------------------------------------------------------
  */
 static void *
-H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t relax, hbool_t prev_unfilt_chunk)
+H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t relax, hbool_t prev_unfilt_chunk, size_t *ret_size)
 {
     const H5D_t *      dset = io_info->dset; /* Local pointer to the dataset info */
     const H5O_pline_t *pline =
@@ -3731,6 +3736,7 @@ H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t rel
     hbool_t             disable_filters = FALSE; /* Whether to disable filters (when adding to cache) */
     void *              chunk           = NULL;  /*the file chunk    */
     void *              ret_value       = NULL;  /* Return value         */
+    size_t              chunk_size_ret = 0;
 
     FUNC_ENTER_STATIC
 
@@ -3796,6 +3802,7 @@ H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t rel
                 ent->chunk = (uint8_t *)H5D__chunk_mem_xfree(ent->chunk, old_pline);
                 ent->chunk = (uint8_t *)chunk;
                 chunk      = NULL;
+                ent->size = chunk_size;
 
                 /* Mark the chunk as having filters disabled as well as "newly
                  * disabled" so it is inserted on flush */
@@ -3823,6 +3830,7 @@ H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t rel
                 ent->chunk = (uint8_t *)H5D__chunk_mem_xfree(ent->chunk, old_pline);
                 ent->chunk = (uint8_t *)chunk;
                 chunk      = NULL;
+                ent->size = chunk_size;
 
                 /* Mark the chunk as having filters enabled */
                 ent->edge_chunk_state &= ~(H5D_RDCC_DISABLE_FILTERS | H5D_RDCC_NEWLY_DISABLED_FILTERS);
@@ -3902,6 +3910,7 @@ H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t rel
             /* In the case that some dataset functions look through this data,
              * clear it to all 0s. */
             HDmemset(chunk, 0, chunk_size);
+            chunk_size_ret = chunk_size;
         } /* end if */
         else {
             /*
@@ -3924,6 +3933,7 @@ H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t rel
                                           my_chunk_alloc, chunk) < 0)
                     HGOTO_ERROR(H5E_IO, H5E_READERROR, NULL, "unable to read raw data chunk")
 
+                chunk_size_ret = my_chunk_alloc;
                 if (old_pline && old_pline->nused) {
                     H5Z_EDC_t err_detect; /* Error detection info */
                     H5Z_cb_t  filter_cb;  /* I/O filter callback function */
@@ -3937,6 +3947,7 @@ H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t rel
                     if (H5Z_pipeline(old_pline, H5Z_FLAG_REVERSE, &(udata->filter_mask), err_detect,
                                      filter_cb, &my_chunk_alloc, &buf_alloc, &chunk) < 0)
                         HGOTO_ERROR(H5E_DATASET, H5E_CANTFILTER, NULL, "data pipeline read failed")
+                    chunk_size_ret = buf_alloc;
 
                     /* Reallocate chunk if necessary */
                     if (udata->new_unfilt_chunk) {
@@ -3947,6 +3958,7 @@ H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t rel
                             HGOTO_ERROR(H5E_RESOURCE, H5E_NOSPACE, NULL,
                                         "memory allocation failed for raw data chunk")
                         } /* end if */
+                        chunk_size_ret = my_chunk_alloc;
                         H5MM_memcpy(chunk, tmp_chunk, chunk_size);
                         (void)H5D__chunk_mem_xfree(tmp_chunk, old_pline);
                     } /* end if */
@@ -3967,6 +3979,7 @@ H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t rel
                     HGOTO_ERROR(H5E_RESOURCE, H5E_NOSPACE, NULL,
                                 "memory allocation failed for raw data chunk")
 
+                chunk_size_ret = chunk_size;
                 if (H5P_is_fill_value_defined(fill, &fill_status) < 0)
                     HGOTO_ERROR(H5E_PLIST, H5E_CANTGET, NULL, "can't tell if fill value defined")
 
@@ -4032,6 +4045,7 @@ H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t rel
                 H5_CHECKED_ASSIGN(ent->rd_count, uint32_t, chunk_size, size_t);
                 H5_CHECKED_ASSIGN(ent->wr_count, uint32_t, chunk_size, size_t);
                 ent->chunk = (uint8_t *)chunk;
+                ent->size = chunk_size_ret;
 
                 /* Add it to the cache */
                 HDassert(NULL == rdcc->slot[udata->idx_hint]);
@@ -4065,6 +4079,7 @@ H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t rel
         HDassert(!ent->locked);
         ent->locked = TRUE;
         chunk       = ent->chunk;
+        chunk_size_ret = ent->size;
     } /* end if */
     else
         /*
@@ -4076,6 +4091,8 @@ H5D__chunk_lock(const H5D_io_info_t *io_info, H5D_chunk_ud_t *udata, hbool_t rel
 
     /* Set return value */
     ret_value = chunk;
+    if (ret_size != NULL)
+        *ret_size = chunk_size_ret;
 
 done:
     /* Release the fill buffer info, if it's been initialized */
@@ -4084,8 +4101,11 @@ done:
 
     /* Release the chunk allocated, on error */
     if (!ret_value)
-        if (chunk)
+        if (chunk) {
             chunk = H5D__chunk_mem_xfree(chunk, pline);
+            if (ret_size != NULL)
+                *ret_size = 0;
+        }
 
     FUNC_LEAVE_NOAPI(ret_value)
 } /* end H5D__chunk_lock() */
@@ -4884,7 +4904,7 @@ H5D__chunk_update_old_edge_chunks(H5D_t *dset, hsize_t old_dim[])
             if (H5F_addr_defined(chk_udata.chunk_block.offset) || (UINT_MAX != chk_udata.idx_hint)) {
                 /* Lock the chunk into cache.  H5D__chunk_lock will take care of
                  * updating the chunk to no longer be an edge chunk. */
-                if (NULL == (chunk = (void *)H5D__chunk_lock(&chk_io_info, &chk_udata, FALSE, TRUE)))
+                if (NULL == (chunk = (void *)H5D__chunk_lock(&chk_io_info, &chk_udata, FALSE, TRUE, NULL)))
                     HGOTO_ERROR(H5E_DATASET, H5E_READERROR, FAIL, "unable to lock raw data chunk")
 
                 /* Unlock the chunk */
@@ -5274,7 +5294,7 @@ H5D__chunk_prune_fill(H5D_chunk_it_ud1_t *udata, hbool_t new_unfilt_chunk)
         HGOTO_ERROR(H5E_DATASET, H5E_CANTSELECT, FAIL, "unable to select hyperslab")
 
     /* Lock the chunk into the cache, to get a pointer to the chunk buffer */
-    if (NULL == (chunk = (void *)H5D__chunk_lock(io_info, &chk_udata, FALSE, FALSE)))
+    if (NULL == (chunk = (void *)H5D__chunk_lock(io_info, &chk_udata, FALSE, FALSE, NULL)))
         HGOTO_ERROR(H5E_DATASET, H5E_READERROR, FAIL, "unable to lock raw data chunk")
 
     /* Fill the selection in the memory buffer */
diff --git a/src/H5Dcompact.c b/src/H5Dcompact.c
index b78693660d..21c37e8a08 100644
--- a/src/H5Dcompact.c
+++ b/src/H5Dcompact.c
@@ -245,6 +245,7 @@ H5D__compact_io_init(const H5D_io_info_t *io_info, const H5D_type_info_t H5_ATTR
     FUNC_ENTER_STATIC_NOERR
 
     io_info->store->compact.buf   = io_info->dset->shared->layout.storage.u.compact.buf;
+    io_info->store->compact.size = io_info->dset->shared->layout.storage.u.compact.size;
     io_info->store->compact.dirty = &io_info->dset->shared->layout.storage.u.compact.dirty;
 
     FUNC_LEAVE_NOAPI(SUCCEED)
@@ -278,6 +279,8 @@ H5D__compact_readvv(const H5D_io_info_t *io_info, size_t dset_max_nseq, size_t *
     FUNC_ENTER_STATIC
 
     HDassert(io_info);
+    if (io_info->store->compact.size < *(dset_offset_arr + dset_max_nseq - 1) + *(dset_size_arr + dset_max_nseq - 1))
+        HGOTO_ERROR(H5E_IO, H5E_WRITEERROR, FAIL, "source size less than requested data")
 
     /* Use the vectorized memory copy routine to do actual work */
     if ((ret_value = H5VM_memcpyvv(io_info->u.rbuf, mem_max_nseq, mem_curr_seq, mem_size_arr, mem_offset_arr,
@@ -320,6 +323,8 @@ H5D__compact_writevv(const H5D_io_info_t *io_info, size_t dset_max_nseq, size_t
     FUNC_ENTER_STATIC
 
     HDassert(io_info);
+    if (io_info->store->compact.size < *(dset_offset_arr + dset_max_nseq - 1) + *(dset_size_arr + dset_max_nseq - 1))
+        HGOTO_ERROR(H5E_IO, H5E_WRITEERROR, FAIL, "source size less than requested data")
 
     /* Use the vectorized memory copy routine to do actual work */
     if ((ret_value = H5VM_memcpyvv(io_info->store->compact.buf, dset_max_nseq, dset_curr_seq, dset_size_arr,
diff --git a/src/H5Dpkg.h b/src/H5Dpkg.h
index 64692c5d1d..8a4acd62e3 100644
--- a/src/H5Dpkg.h
+++ b/src/H5Dpkg.h
@@ -196,6 +196,7 @@ typedef struct {
 typedef struct {
     void *   buf;   /* Buffer for compact dataset */
     hbool_t *dirty; /* Pointer to dirty flag to mark */
+    size_t size;    /* Buffer size for compact dataset */
 } H5D_compact_storage_t;
 
 typedef union H5D_storage_t {

++++++ Report-error-if-dimensions-of-chunked-storage-in-data-layout-2.patch ++++++
From: Egbert Eich <e...@suse.com>
Date: Wed Sep 28 19:11:16 2022 +0200
Subject: Report error if dimensions of chunked storage in data layout < 2
Patch-mainline: Not yet
Git-repo: ssh://eich@192.168.122.1:/home/eich/sources/HPC/hdf5
Git-commit: 34b621424504265cff3c33cf634a70efb52db180
References: 

For Data Layout Messages version 1 & 2 the specification states
that the value stored in the data field is 1 greater than the
number of dimensions in the dataspace. For version 3 this is
not explicitly stated, but the implementation suggests it to be
the case.
Thus the stored value needs to be at least 2. For dimensionality
< 2 an out-of-bounds access occurs, as in CVE-2021-45833.

This fixes CVE-2021-45833.
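
A simplified sketch of the underflow (illustrative only, not the HDF5
layout code): with the stored count being rank + 1, a value < 2
implies rank < 1, and rank-based index arithmetic wraps around on the
unsigned index.

#include <stdint.h>

static uint32_t last_dataspace_dim(const uint32_t *dim, unsigned ndims)
{
    /* callers must enforce ndims >= 2, as the patch now does */
    return dim[ndims - 2]; /* ndims == 1 would index dim[UINT_MAX] */
}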

Signed-off-by: Egbert Eich <e...@suse.com>
Signed-off-by: Egbert Eich <e...@suse.de>
---
 src/H5Olayout.c | 7 +++++++
 1 file changed, 7 insertions(+)
diff --git a/src/H5Olayout.c b/src/H5Olayout.c
index c939e72744..9fa9e36e8c 100644
--- a/src/H5Olayout.c
+++ b/src/H5Olayout.c
@@ -168,6 +168,10 @@ H5O__layout_decode(H5F_t *f, H5O_t H5_ATTR_UNUSED *open_oh, unsigned H5_ATTR_UNU
             p += ndims * 4; /* Skip over dimension sizes (32-bit quantities) */
         }                   /* end if */
         else {
+            if (ndims < 2)
+                HGOTO_ERROR(H5E_OHDR, H5E_CANTLOAD, NULL,
+                            "bad dimensions for chunked storage")
+
             mesg->u.chunk.ndims = ndims;
             for (u = 0; u < ndims; u++)
                 UINT32DECODE(p, mesg->u.chunk.dim[u]);
@@ -241,6 +245,9 @@ H5O__layout_decode(H5F_t *f, H5O_t H5_ATTR_UNUSED *open_oh, unsigned H5_ATTR_UNU
                     mesg->u.chunk.ndims = *p++;
                     if (mesg->u.chunk.ndims > H5O_LAYOUT_NDIMS)
                         HGOTO_ERROR(H5E_OHDR, H5E_CANTLOAD, NULL, "dimensionality is too large")
+                    if (mesg->u.chunk.ndims < 2)
+                        HGOTO_ERROR(H5E_OHDR, H5E_CANTLOAD, NULL,
+                                    "bad dimensions for chunked storage")
 
                     /* B-tree address */
                     H5F_addr_decode(f, &p, &(mesg->storage.u.chunk.idx_addr));

++++++ Validate-location-offset-of-the-accumulated-metadata-when-comparing.patch ++++++
From: Egbert Eich <e...@suse.com>
Date: Mon Oct 10 08:43:44 2022 +0200
Subject: Validate location (offset) of the accumulated metadata when comparing
Patch-mainline: Not yet
Git-repo: ssh://eich@192.168.122.1:/home/eich/sources/HPC/hdf5
Git-commit: 2cf9918ae66f023a2b6d44eb591ee2ac479a6e53
References: 

Initially, the accumulated metadata location is initialized to
HADDR_UNDEF, the highest available address. Bogus input files may
provide a location or size matching this value. Comparing this address
against such bogus values may produce false positives. This change makes
sure the value has been initialized, or fails the comparison early and
lets other parts of the code deal with the bogus address/size.
Note: To avoid unnecessary checks, we have assumed that if the 'dirty'
member in the same structure is true, the location is valid.

This fixes CVE-2018-13867.
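
A compilable sketch of the guard (the H5F_LOC_VALID macro below, with
the types defined locally for illustration): without it, a crafted
addr/size pair with addr + size == HADDR_UNDEF would falsely "adjoin"
an accumulator that holds no data yet.

#include <stdint.h>
#include <stddef.h>

typedef uint64_t haddr_t;
#define HADDR_UNDEF ((haddr_t)-1) /* sentinel: no address assigned */
#define H5F_LOC_VALID(x) ((x) != HADDR_UNDEF)

static int adjoins(haddr_t addr, size_t size, haddr_t loc, size_t accum_size)
{
    return H5F_LOC_VALID(loc) &&
           (addr + size == loc || loc + accum_size == addr);
}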

Signed-off-by: Egbert Eich <e...@suse.com>
Signed-off-by: Egbert Eich <e...@suse.de>
---
 src/H5Faccum.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/src/H5Faccum.c b/src/H5Faccum.c
index aed5812e63..73bd4b811e 100644
--- a/src/H5Faccum.c
+++ b/src/H5Faccum.c
@@ -48,6 +48,7 @@
 #define H5F_ACCUM_THROTTLE  8
 #define H5F_ACCUM_THRESHOLD 2048
 #define H5F_ACCUM_MAX_SIZE  (1024 * 1024) /* Max. accum. buf size (max. I/Os will be 1/2 this size) */
+#define H5F_LOC_VALID(x) (x != HADDR_UNDEF)
 
 /******************/
 /* Local Typedefs */
@@ -126,8 +127,9 @@ H5F__accum_read(H5F_shared_t *f_sh, H5FD_mem_t map_type, haddr_t addr, size_t si
             HDassert(!accum->buf || (accum->alloc_size >= accum->size));
 
             /* Current read adjoins or overlaps with metadata accumulator */
-            if (H5F_addr_overlap(addr, size, accum->loc, accum->size) || ((addr + size) == accum->loc) ||
-                (accum->loc + accum->size) == addr) {
+            if (H5F_LOC_VALID(accum->loc) &&
+                (H5F_addr_overlap(addr, size, accum->loc, accum->size) || ((addr + size) == accum->loc) ||
+                (accum->loc + accum->size) == addr)) {
                 size_t  amount_before; /* Amount to read before current accumulator */
                 haddr_t new_addr;      /* New address of the accumulator buffer */
                 size_t  new_size;      /* New size of the accumulator buffer */
@@ -439,7 +441,8 @@ H5F__accum_write(H5F_shared_t *f_sh, H5FD_mem_t map_type, haddr_t addr, size_t s
             /* Check if there is already metadata in the accumulator */
             if (accum->size > 0) {
                 /* Check if the new metadata adjoins the beginning of the current accumulator */
-                if ((addr + size) == accum->loc) {
+                if (H5F_LOC_VALID(accum->loc)
+                    && (addr + size) == accum->loc) {
                     /* Check if we need to adjust accumulator size */
                     if (H5F__accum_adjust(accum, file, H5F_ACCUM_PREPEND, size) < 0)
                         HGOTO_ERROR(H5E_IO, H5E_CANTRESIZE, FAIL, "can't adjust metadata accumulator")
@@ -464,7 +467,8 @@ H5F__accum_write(H5F_shared_t *f_sh, H5FD_mem_t map_type, haddr_t addr, size_t s
                     accum->dirty_off = 0;
                 } /* end if */
                 /* Check if the new metadata adjoins the end of the current accumulator */
-                else if (addr == (accum->loc + accum->size)) {
+                else if (H5F_LOC_VALID(accum->loc) &&
+                         addr == (accum->loc + accum->size)) {
                     /* Check if we need to adjust accumulator size */
                     if (H5F__accum_adjust(accum, file, H5F_ACCUM_APPEND, size) < 0)
                         HGOTO_ERROR(H5E_IO, H5E_CANTRESIZE, FAIL, "can't adjust metadata accumulator")
@@ -485,7 +489,8 @@ H5F__accum_write(H5F_shared_t *f_sh, H5FD_mem_t map_type, haddr_t addr, size_t s
                     accum->size += size;
                 } /* end if */
                 /* Check if the piece of metadata being written overlaps the metadata accumulator */
-                else if (H5F_addr_overlap(addr, size, accum->loc, accum->size)) {
+                else if (H5F_LOC_VALID(accum->loc) &&
+                         H5F_addr_overlap(addr, size, accum->loc, accum->size)) {
                     size_t add_size; /* New size of the accumulator buffer */
 
                     /* Check if the new metadata is entirely within the current accumulator */
@@ -745,7 +750,8 @@ H5F__accum_write(H5F_shared_t *f_sh, H5FD_mem_t map_type, haddr_t addr, size_t s
             /* (Note that this could be improved by updating the accumulator
              *  with [some of] the information just read in. -QAK)
              */
-            if (H5F_addr_overlap(addr, size, accum->loc, accum->size)) {
+            if (H5F_LOC_VALID(accum->loc) &&
+                H5F_addr_overlap(addr, size, accum->loc, accum->size)) {
                 /* Check for write starting before beginning of accumulator */
                 if (H5F_addr_le(addr, accum->loc)) {
                     /* Check for write ending within accumulator */
@@ -868,6 +874,7 @@ H5F__accum_free(H5F_shared_t *f_sh, H5FD_mem_t H5_ATTR_UNUSED type, haddr_t addr
 
     /* Adjust the metadata accumulator to remove the freed block, if it overlaps */
     if ((f_sh->feature_flags & H5FD_FEAT_ACCUMULATE_METADATA) &&
+        H5F_LOC_VALID(accum->loc) &&
         H5F_addr_overlap(addr, size, accum->loc, accum->size)) {
         size_t overlap_size; /* Size of overlap with accumulator */
 

++++++ When-evicting-driver-info-block-NULL-the-corresponding-entry.patch ++++++
From: Egbert Eich <e...@suse.com>
Date: Thu Sep 29 13:47:30 2022 +0200
Subject: When evicting driver info block, NULL the corresponding entry
Patch-mainline: Not yet
Git-repo: ssh://eich@192.168.122.1:/home/eich/sources/HPC/hdf5
Git-commit: 6d5496f17ed5aa65cbb0498e0bf70b0d599dc336
References: 

This prevents another attempt to unpin the block in H5F__dest(), which
may happen with malformed hdf5 files and leads to a segfault.

This fixes CVE-2021-46242.
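
This is the classic clear-after-free idiom, sketched standalone (not
HDF5 source; names are illustrative):

#include <stddef.h>

struct cache_entry;
struct shared_info { struct cache_entry *drvinfo; };

static void evict_drvinfo(struct shared_info *s)
{
    /* ... expunge the entry from the metadata cache ... */
    s->drvinfo = NULL; /* so a later teardown pass cannot unpin it again */
}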

Signed-off-by: Egbert Eich <e...@suse.com>
Signed-off-by: Egbert Eich <e...@suse.de>
---
 src/H5Fsuper.c | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/src/H5Fsuper.c b/src/H5Fsuper.c
index 60b045ae29..1283790c57 100644
--- a/src/H5Fsuper.c
+++ b/src/H5Fsuper.c
@@ -1044,6 +1044,8 @@ done:
             /* Evict the driver info block from the cache */
             if (sblock && H5AC_expunge_entry(f, H5AC_DRVRINFO, sblock->driver_addr, H5AC__NO_FLAGS_SET) < 0)
                 HDONE_ERROR(H5E_FILE, H5E_CANTEXPUNGE, FAIL, "unable to expunge driver info block")
+
+            f->shared->drvinfo = NULL;
         } /* end if */
 
         /* Unpin & discard superblock */
