Hello community,

Here is the log from the commit of package sparsehash for openSUSE:Factory, checked in at 2020-08-15 21:18:01
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/sparsehash (Old)
 and      /work/SRC/openSUSE:Factory/.sparsehash.new.3399 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "sparsehash"

Sat Aug 15 21:18:01 2020 rev:7 rq:826201 version:2.0.4

Changes:
--------
--- /work/SRC/openSUSE:Factory/sparsehash/sparsehash.changes    2019-03-01 20:29:42.634007159 +0100
+++ /work/SRC/openSUSE:Factory/.sparsehash.new.3399/sparsehash.changes  2020-08-15 21:18:17.867552019 +0200
@@ -1,0 +2,8 @@
+Thu Aug 13 07:37:31 UTC 2020 - Jan Engelhardt <jeng...@inai.de>
+
+- Update to release 2.0.4
+  * Corrected the memory usage claims to take into account
+    allocator overhead.
+  * Cleared some compiler warnings.
+
+-------------------------------------------------------------------

Old:
----
  sparsehash-2.0.3.tar.gz

New:
----
  sparsehash-2.0.4.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ sparsehash.spec ++++++
--- /var/tmp/diff_new_pack.MoF6Tp/_old  2020-08-15 21:18:18.451552353 +0200
+++ /var/tmp/diff_new_pack.MoF6Tp/_new  2020-08-15 21:18:18.455552355 +0200
@@ -1,7 +1,7 @@
 #
 # spec file for package sparsehash
 #
-# Copyright (c) 2019 SUSE LINUX GmbH, Nuernberg, Germany.
+# Copyright (c) 2020 SUSE LLC
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -17,16 +17,15 @@
 
 
 Name:           sparsehash
-Version:        2.0.3
+Version:        2.0.4
 Release:        0
 Summary:        Memory-efficient hash_map implementation
 License:        BSD-3-Clause
 Group:          Development/Libraries/C and C++
-Url:            https://github.com/sparsehash/sparsehash
+URL:            https://github.com/sparsehash/sparsehash
 Source:         https://github.com/sparsehash/sparsehash/archive/sparsehash-%{version}.tar.gz
 BuildRequires:  gcc-c++
 BuildRequires:  pkg-config
-BuildRoot:      %{_tmppath}/%{name}-%{version}-build
 
 %description
 The Google SparseHash project contains several C++ template hash-map
@@ -45,7 +44,7 @@
 speed.
 
 %prep
-%setup -q -n %{name}-%{name}-%{version}
+%autosetup -p1 -n %{name}-%{name}-%{version}
 
 %build
 %configure
@@ -58,7 +57,6 @@
 rm %{buildroot}%{_datadir}/doc/%{name}-2.0.2/README_windows.txt
 
 %files devel
-%defattr(-,root,root)
 %doc %{_datadir}/doc/%{name}-2.0.2/
 %{_includedir}/google/
 %{_includedir}/sparsehash/

++++++ sparsehash-2.0.3.tar.gz -> sparsehash-2.0.4.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sparsehash-sparsehash-2.0.3/README new/sparsehash-sparsehash-2.0.4/README
--- old/sparsehash-sparsehash-2.0.3/README      2015-10-12 23:13:52.000000000 +0200
+++ new/sparsehash-sparsehash-2.0.4/README      2019-07-18 12:57:37.000000000 +0200
@@ -106,20 +106,24 @@
 In addition to the hash-map and hash-set classes, this package also
 provides sparsetable.h, an array implementation that uses space
 proportional to the number of elements in the array, rather than the
-maximum element index.  It uses very little space overhead: 1 bit per
-entry.  See doc/sparsetable.html for the API.
+maximum element index.  It uses very little space overhead: 2 to 5
+bits per entry.  See doc/sparsetable.html for the API.
 
 RESOURCE USAGE
 --------------
-* sparse_hash_map has memory overhead of about 2 bits per hash-map
-  entry.
+* sparse_hash_map has memory overhead of about 4 to 10 bits per 
+  hash-map entry, assuming a typical average occupancy of 50%.
 * dense_hash_map has a factor of 2-3 memory overhead: if your
   hashtable data takes X bytes, dense_hash_map will use 3X-4X memory
   total.
 
 Hashtables tend to double in size when resizing, creating an
 additional 50% space overhead.  dense_hash_map does in fact have a
-significant "high water mark" memory use requirement.
+significant "high water mark" memory use requirement, which is 6 times
+the size of hash entries in the table when resizing (when reaching 
+50% occupancy, the table resizes to double the previous size, and the 
+old table (2x) is copied to the new table (4x)).
+
 sparse_hash_map, however, is written to need very little space
 overhead when resizing: only a few bits per hashtable entry.
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sparsehash-sparsehash-2.0.3/doc/implementation.html new/sparsehash-sparsehash-2.0.4/doc/implementation.html
--- old/sparsehash-sparsehash-2.0.3/doc/implementation.html     2015-10-12 23:13:52.000000000 +0200
+++ new/sparsehash-sparsehash-2.0.4/doc/implementation.html     2019-07-18 12:57:37.000000000 +0200
@@ -66,7 +66,7 @@
 replaces vector[3] with the new value.  If the lookup fails, then the
 code must insert a new entry into the middle of the vector.  Again, to
 insert at position i, the code must count all the bitmap entries &lt;= i
-that are set to i.  This indicates the position to insert into the
+that are set to 1.  This indicates the position to insert into the
 vector.  All vector entries above that position must be moved to make
 room for the new entry.  This takes time, but still constant time
 since the vector has size at most M.</p>
@@ -131,6 +131,66 @@
 entry -- but take longer for inserts, deletes, and lookups.  A smaller
 M would use more overhead but make operations somewhat faster.</p>
 
+The numbers above assume that the allocator used doesn't require extra 
+memory. The default allocator (using malloc/free) typically has some overhead 
+for each allocation. If we assume 16 byte overhead per allocation, the 
+overhead becomes 4.6 bit per array entry (32 bit pointers) or 5.3 bit per 
+array entry (64 bit pointers) 
+
+<p>Each sparsegroup has:</p>
+
+<table>
+<thead>
+<tr>
+<th>member</th>
+<th>32 bit</th>
+<th>64 bit</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>pointer</td>
+<td>4 bytes</td>
+<td>8 bytes</td>
+</tr>
+<tr>
+<td>num_buckets</td>
+<td>2 bytes</td>
+<td>2 bytes</td>
+</tr>
+<tr>
+<td>bitmap</td>
+<td>6 bytes</td>
+<td>6 bytes</td>
+</tr>
+<tr>
+<td>total</td>
+<td>12 bytes = 96 bits</td>
+<td>16 bytes = 128 bits</td>
+</tr>
+<tr>
+<td>because this is the overhead for each sparsegroup (48 entries), we divide by 48</td>
+<td></td>
+<td></td>
+</tr>
+<tr>
+<td>overhead / entry</td>
+<td>96 / 48 = 2  bits</td>
+<td>128 / 48 = 2.67  bits</td>
+</tr>
+<tr>
+<td rowspan=3>additional overhead per allocation up to 16 bytes =  128 bits</td>
+<td></td>
+<td></td>
+</tr>
+<tr>
+<td>max overhead / entry</td>
+<td>(96 + 128) / 48 = 4.67 bits</td>
+<td>(128 + 128) / 48 = 5.33 bits</td>
+</tr>
+</tbody>
+</table>
+
 <p>You can also look at some specific <A
 HREF="performance.html">performance numbers</A>.</p>
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sparsehash-sparsehash-2.0.3/src/hashtable_test.cc new/sparsehash-sparsehash-2.0.4/src/hashtable_test.cc
--- old/sparsehash-sparsehash-2.0.3/src/hashtable_test.cc       2015-10-12 23:13:52.000000000 +0200
+++ new/sparsehash-sparsehash-2.0.4/src/hashtable_test.cc       2019-07-18 12:57:37.000000000 +0200
@@ -1908,6 +1908,36 @@
   dense_hash_map<int, DenseIntMap<int>, Hasher, Hasher> ht3copy = ht3;
 }
 
+TEST(HashtableTest, ResizeWithoutShrink) {
+  const size_t N = 1000000L;
+  const size_t max_entries = 40;
+#define KEY(i, j)  (i * 4 + j) * 28 + 11
+
+  dense_hash_map<size_t, int> ht;
+  ht.set_empty_key(0);
+  ht.set_deleted_key(1);
+  ht.min_load_factor(0);
+  ht.max_load_factor(0.2);
+
+  for (size_t i = 0; i < N; ++i) {
+    for (size_t j = 0; j < max_entries; ++j) {
+      size_t key = KEY(i, j);
+      ht[key] = 0;
+    }
+    for (size_t j = 0; j < max_entries / 2; ++j) {
+      size_t key = KEY(i, j);
+      ht.erase(key);
+      ht[key + 1] = 0;
+    }
+    for (size_t j = 0; j < max_entries; ++j) {
+      size_t key = KEY(i, j);
+      ht.erase(key);
+      ht.erase(key + (j < max_entries / 2));
+    }
+    EXPECT_LT(ht.bucket_count(), 4096);
+  }
+}
+
 TEST(HashtableDeathTest, ResizeOverflow) {
   dense_hash_map<int, int> ht;
   EXPECT_DEATH(ht.resize(static_cast<size_t>(-1)),
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sparsehash-sparsehash-2.0.3/src/sparsehash/internal/densehashtable.h new/sparsehash-sparsehash-2.0.4/src/sparsehash/internal/densehashtable.h
--- old/sparsehash-sparsehash-2.0.3/src/sparsehash/internal/densehashtable.h    2015-10-12 23:13:52.000000000 +0200
+++ new/sparsehash-sparsehash-2.0.4/src/sparsehash/internal/densehashtable.h    2019-07-18 12:57:37.000000000 +0200
@@ -588,13 +588,19 @@
     // are currently taking up room).  But later, when we decide what
     // size to resize to, *don't* count deleted buckets, since they
     // get discarded during the resize.
-    const size_type needed_size = settings.min_buckets(num_elements + delta, 0);
+    size_type needed_size = settings.min_buckets(num_elements + delta, 0);
     if ( needed_size <= bucket_count() )      // we have enough buckets
       return did_resize;
 
     size_type resize_to =
       settings.min_buckets(num_elements - num_deleted + delta, bucket_count());
 
+    // When num_deleted is large, we may still grow but we do not want to
+    // over expand.  So we reduce needed_size by a portion of num_deleted
+    // (the exact portion does not matter).  This is especially helpful
+    // when min_load_factor is zero (no shrink at all) to avoid doubling
+    // the bucket count to infinity.  See also test ResizeWithoutShrink.
+    needed_size = settings.min_buckets(num_elements - num_deleted / 4 + delta, 0);
     if (resize_to < needed_size &&    // may double resize_to
         resize_to < (std::numeric_limits<size_type>::max)() / 2) {
       // This situation means that we have enough deleted elements,
@@ -1195,8 +1201,10 @@
     pointer realloc_or_die(pointer ptr, size_type n) {
       pointer retval = this->reallocate(ptr, n);
       if (retval == NULL) {
-        fprintf(stderr, "sparsehash: FATAL ERROR: failed to reallocate "
-                "%lu elements for ptr %p", static_cast<unsigned long>(n), ptr);
+        fprintf(stderr,
+                "sparsehash: FATAL ERROR: failed to reallocate "
+                "%lu elements for ptr %p",
+                static_cast<unsigned long>(n), static_cast<void*>(ptr));
         exit(1);
       }
       return retval;
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sparsehash-sparsehash-2.0.3/src/sparsehash/internal/libc_allocator_with_realloc.h new/sparsehash-sparsehash-2.0.4/src/sparsehash/internal/libc_allocator_with_realloc.h
--- old/sparsehash-sparsehash-2.0.3/src/sparsehash/internal/libc_allocator_with_realloc.h        2015-10-12 23:13:52.000000000 +0200
+++ new/sparsehash-sparsehash-2.0.4/src/sparsehash/internal/libc_allocator_with_realloc.h        2019-07-18 12:57:37.000000000 +0200
@@ -65,7 +65,10 @@
     free(p);
   }
   pointer reallocate(pointer p, size_type n) {
-    return static_cast<pointer>(realloc(p, n * sizeof(value_type)));
+    // p points to a storage array whose objects have already been destroyed
+    // cast to void* to prevent compiler warnings about calling realloc() on
+    // an object which cannot be relocated in memory
+    return static_cast<pointer>(realloc(static_cast<void*>(p), n * sizeof(value_type)));
   }
 
   size_type max_size() const  {
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sparsehash-sparsehash-2.0.3/src/sparsehash/sparsetable new/sparsehash-sparsehash-2.0.4/src/sparsehash/sparsetable
--- old/sparsehash-sparsehash-2.0.3/src/sparsehash/sparsetable  2015-10-12 23:13:52.000000000 +0200
+++ new/sparsehash-sparsehash-2.0.4/src/sparsehash/sparsetable  2019-07-18 12:57:37.000000000 +0200
@@ -1088,7 +1088,9 @@
     // This is equivalent to memmove(), but faster on my Intel P4,
     // at least with gcc4.1 -O2 / glibc 2.3.6.
     for (size_type i = settings.num_buckets; i > offset; --i)
-      memcpy(group + i, group + i-1, sizeof(*group));
+      // cast to void* to prevent compiler warnings about writing to an object
+      // with no trivial copy-assignment
+      memcpy(static_cast<void*>(group + i), group + i-1, sizeof(*group));
   }
 
   // Create space at group[offset], without special assumptions about value_type
@@ -1154,7 +1156,10 @@
     // at lesat with gcc4.1 -O2 / glibc 2.3.6.
     assert(settings.num_buckets > 0);
     for (size_type i = offset; i < settings.num_buckets-1; ++i)
-      memcpy(group + i, group + i+1, sizeof(*group));  // hopefully inlined!
+      // cast to void* to prevent compiler warnings about writing to an object
+      // with no trivial copy-assignment
+      // hopefully inlined!
+      memcpy(static_cast<void*>(group + i), group + i+1, sizeof(*group));
     group = settings.realloc_or_die(group, settings.num_buckets-1);
   }
 
@@ -1591,7 +1596,7 @@
   }
 
   // And the reverse transformation.
-  size_type get_pos(const const_nonempty_iterator it) const {
+  size_type get_pos(const const_nonempty_iterator& it) const {
     difference_type current_row = it.row_current - it.row_begin;
     difference_type current_col = (it.col_current -
                                    groups[current_row].nonempty_begin());
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sparsehash-sparsehash-2.0.3/src/time_hash_map.cc new/sparsehash-sparsehash-2.0.4/src/time_hash_map.cc
--- old/sparsehash-sparsehash-2.0.3/src/time_hash_map.cc        2015-10-12 23:13:52.000000000 +0200
+++ new/sparsehash-sparsehash-2.0.4/src/time_hash_map.cc        2019-07-18 12:57:37.000000000 +0200
@@ -331,6 +331,8 @@
 };
 
 inline void Rusage::Reset() {
+  g_num_copies = 0;
+  g_num_hashes = 0;
 #if defined HAVE_SYS_RESOURCE_H
   getrusage(RUSAGE_SELF, &start);
 #elif defined HAVE_WINDOWS_H
@@ -721,7 +723,7 @@
   if (FLAGS_test_4_bytes)  test_all_maps< HashObject<4,4> >(4, iters/1);
   if (FLAGS_test_8_bytes)  test_all_maps< HashObject<8,8> >(8, iters/2);
   if (FLAGS_test_16_bytes)  test_all_maps< HashObject<16,16> >(16, iters/4);
-  if (FLAGS_test_256_bytes)  test_all_maps< HashObject<256,256> >(256, iters/32);
+  if (FLAGS_test_256_bytes)  test_all_maps< HashObject<256,32> >(256, iters/32);
 
   return 0;
 }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sparsehash-sparsehash-2.0.3/src/windows/config.h new/sparsehash-sparsehash-2.0.4/src/windows/config.h
--- old/sparsehash-sparsehash-2.0.3/src/windows/config.h        2015-10-12 23:13:52.000000000 +0200
+++ new/sparsehash-sparsehash-2.0.4/src/windows/config.h        2019-07-18 12:57:37.000000000 +0200
@@ -6,27 +6,54 @@
 /* Namespace for Google classes */
 #define GOOGLE_NAMESPACE  ::google
 
+#if (_MSC_VER >= 1800 )
+
+/* the location of the header defining hash functions */
+#define HASH_FUN_H  <unordered_map>
+
+/* the location of <unordered_map> or <hash_map> */
+#define HASH_MAP_H  <unordered_map>
+
+/* the location of <unordered_set> or <hash_set> */
+#define HASH_SET_H  <unordered_set>
+
+/* define if the compiler has hash_map */
+#define HAVE_HASH_MAP  0
+
+/* define if the compiler has hash_set */
+#define HAVE_HASH_SET  0
+
+/* define if the compiler supports unordered_{map,set} */
+#define HAVE_UNORDERED_MAP 1
+
+#else /* Earlier than VSC++ 2013 */ 
+
 /* the location of the header defining hash functions */
 #define HASH_FUN_H  <hash_map>
 
 /* the location of <unordered_map> or <hash_map> */
 #define HASH_MAP_H  <hash_map>
 
-/* the namespace of the hash<> function */
-#define HASH_NAMESPACE  stdext
-
 /* the location of <unordered_set> or <hash_set> */
 #define HASH_SET_H  <hash_set>
 
-/* Define to 1 if you have the <google/malloc_extension.h> header file. */
-#undef HAVE_GOOGLE_MALLOC_EXTENSION_H
-
 /* define if the compiler has hash_map */
 #define HAVE_HASH_MAP  1
 
 /* define if the compiler has hash_set */
 #define HAVE_HASH_SET  1
 
+/* define if the compiler supports unordered_{map,set} */
+#undef HAVE_UNORDERED_MAP
+
+#endif
+
+/* the namespace of the hash<> function */
+#define HASH_NAMESPACE  stdext
+
+/* Define to 1 if you have the <google/malloc_extension.h> header file. */
+#undef HAVE_GOOGLE_MALLOC_EXTENSION_H
+
 /* Define to 1 if you have the <inttypes.h> header file. */
 #undef HAVE_INTTYPES_H
 
@@ -81,9 +108,6 @@
 /* Define to 1 if you have the <unistd.h> header file. */
 #undef HAVE_UNISTD_H
 
-/* define if the compiler supports unordered_{map,set} */
-#undef HAVE_UNORDERED_MAP
-
 /* Define to 1 if the system has the type `u_int16_t'. */
 #undef HAVE_U_INT16_T
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/sparsehash-sparsehash-2.0.3/src/windows/sparsehash/internal/sparseconfig.h new/sparsehash-sparsehash-2.0.4/src/windows/sparsehash/internal/sparseconfig.h
--- old/sparsehash-sparsehash-2.0.3/src/windows/sparsehash/internal/sparseconfig.h      2015-10-12 23:13:52.000000000 +0200
+++ new/sparsehash-sparsehash-2.0.4/src/windows/sparsehash/internal/sparseconfig.h      2019-07-18 12:57:37.000000000 +0200
@@ -6,8 +6,17 @@
 /* Namespace for Google classes */
 #define GOOGLE_NAMESPACE  ::google
 
+#if (_MSC_VER >= 1800 )
+
+/* the location of the header defining hash functions */
+#define HASH_FUN_H  <unordered_map>
+
+#else /* Earlier than VSC++ 2013 */ 
+
 /* the location of the header defining hash functions */
 #define HASH_FUN_H  <hash_map>
+ 
+#endif
 
 /* the namespace of the hash<> function */
 #define HASH_NAMESPACE  stdext

