[clang] [Serialization] Don't read all declaration id eagerly when merge the tables (PR #95506)

2024-06-24 Thread Chuanqi Xu via cfe-commits

ChuanqiXu9 wrote:

> The increased complexity does not seem like a good trade-off if it does not
> significantly improve performance. However, I don't feel too strongly about
> it, so I suggest you get a second opinion if you feel strongly that we
> should land this.

Yeah, I don't have a strong opinion either.

https://github.com/llvm/llvm-project/pull/95506
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[clang] [Serialization] Don't read all declaration id eagerly when merge the tables (PR #95506)

2024-06-24 Thread Ilya Biryukov via cfe-commits

ilya-biryukov wrote:

The increased complexity does not seem like a good trade-off if it does not
significantly improve performance.
However, I don't feel too strongly about it, so I suggest you get a second
opinion if you feel strongly that we should land this.

https://github.com/llvm/llvm-project/pull/95506


[clang] [Serialization] Don't read all declaration id eagerly when merge the tables (PR #95506)

2024-06-20 Thread Chuanqi Xu via cfe-commits

ChuanqiXu9 wrote:

> > Yeah, I think this patch may be conceptually good except for the extra
> > memory usage. As for benchmarking, I tried it locally but didn't find
> > observable effects. So I'd like to land this after 19's branching to give
> > it more baking time.
> 
> Do you mean that there are no large increases in memory use, or that things
> are not getting faster either? If it's the latter, I would vouch for keeping
> the code as-is to avoid adding extra complexity. If this does lead to
> significant performance wins, though, it's a different story.

Yeah, it is the latter. But it might not be too complex, since all its uses are
limited to this file. My point is, by contrast, that if it doesn't lead to a
significant regression, we should do it: it is rare for a single patch to
deliver heroic optimizations. Sometimes the performance gains come from the
long tail.

https://github.com/llvm/llvm-project/pull/95506


[clang] [Serialization] Don't read all declaration id eagerly when merge the tables (PR #95506)

2024-06-20 Thread Ilya Biryukov via cfe-commits

ilya-biryukov wrote:

> Yeah, I think this patch may be conceptually good except for the extra
> memory usage. As for benchmarking, I tried it locally but didn't find
> observable effects. So I'd like to land this after 19's branching to give it
> more baking time.

Do you mean that there are no large increases in memory use, or that things
are not getting faster either?
If it's the latter, I would vouch for keeping the code as-is to avoid adding
extra complexity.
If this does lead to significant performance wins, though, it's a different
story.

https://github.com/llvm/llvm-project/pull/95506


[clang] [Serialization] Don't read all declaration id eagerly when merge the tables (PR #95506)

2024-06-17 Thread Chuanqi Xu via cfe-commits

ChuanqiXu9 wrote:

> @ChuanqiXu9 are you still planning to chase this given that fixing the 
> hashing function would fix performance for the aforementioned patch?
> 
> The trade-off we are making here is hard to assess without benchmarks that 
> show how much latency we win and how much more memory we spend. I am not sure 
> if doing the benchmarks would be high on anyone's priority list, but maybe 
> I'm wrong?

Yeah, I think this patch may be conceptually good except for the extra memory
usage. As for benchmarking, I tried it locally but didn't find observable
effects. So I'd like to land this after 19's branching to give it more baking
time.

https://github.com/llvm/llvm-project/pull/95506


[clang] [Serialization] Don't read all declaration id eagerly when merge the tables (PR #95506)

2024-06-17 Thread Ilya Biryukov via cfe-commits

ilya-biryukov wrote:

@ChuanqiXu9 are you still planning to chase this given that fixing the hashing 
function would fix performance for the aforementioned patch?

The trade-off we are making here is hard to assess without benchmarks that show
how much latency we win and how much more memory we spend. I am not sure if
doing the benchmarks would be high on anyone's priority list, but maybe I'm
wrong?

https://github.com/llvm/llvm-project/pull/95506


[clang] [Serialization] Don't read all declaration id eagerly when merge the tables (PR #95506)

2024-06-13 Thread via cfe-commits

llvmbot wrote:

@llvm/pr-subscribers-clang

Author: Chuanqi Xu (ChuanqiXu9)


Changes

See the post commit message in
https://github.com/llvm/llvm-project/pull/92083 for rationale.

Previously, when we merged the lookup tables, we read them completely to get
the data. But the page above shows that this can be problematic in specific
cases where there are a lot of DeclIDs to read.

In this patch, we mitigate the problem by not eagerly reading the contents of
the merged tables, at the cost of some additional memory usage.
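The deferred-decoding idea above can be sketched in a simplified, self-contained form: the merge step records only the raw byte span of each entry, and deserialization happens on first lookup, with the decoded result cached. This is a hypothetical model for illustration, not Clang's actual `MultiOnDiskHashTable`; the names (`LazyMergedTable`, `Span`, `decodeInto`) and the toy byte-level "deserialization" are assumptions of the sketch.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// A raw, still-encoded run of bytes contributed by one on-disk table.
struct Span {
  const uint8_t *Start;
  unsigned Len;
};

class LazyMergedTable {
  std::map<std::string, std::vector<int>> Data;     // decoded values
  std::map<std::string, std::vector<Span>> Unread;  // pending raw spans

  // Toy stand-in for deserialization (ReadDataInto in the real patch).
  static void decodeInto(const Span &S, std::vector<int> &Out) {
    for (unsigned I = 0; I != S.Len; ++I)
      Out.push_back(S.Start[I]);
  }

public:
  // Merge step: O(1) per entry, no decoding performed.
  void addRaw(const std::string &Key, const uint8_t *Start, unsigned Len) {
    (void)Data[Key];  // ensure the key is enumerable even before decoding
    Unread[Key].push_back({Start, Len});
  }

  // Lookup: decode this key's spans on first access, then serve the cache.
  const std::vector<int> &find(const std::string &Key) {
    auto It = Unread.find(Key);
    if (It != Unread.end()) {
      for (const Span &S : It->second)
        decodeInto(S, Data[Key]);
      Unread.erase(It);
    }
    return Data[Key];
  }

  // Full enumeration (the readAll() path): force-decode everything left.
  void readAll() {
    for (auto &KV : Unread)
      for (const Span &S : KV.second)
        decodeInto(S, Data[KV.first]);
    Unread.clear();
  }
};
```

A second `find` on the same key serves the cached vector without touching the raw bytes again, which is exactly the latency-versus-memory trade-off discussed in this thread: keys that are never looked up are never decoded, but their raw spans must stay resident.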

---
Full diff: https://github.com/llvm/llvm-project/pull/95506.diff


1 Files Affected:

- (modified) clang/lib/Serialization/MultiOnDiskHashTable.h (+54-10) 


```diff
diff --git a/clang/lib/Serialization/MultiOnDiskHashTable.h b/clang/lib/Serialization/MultiOnDiskHashTable.h
index a0d75ec3a9e76..f98394e904549 100644
--- a/clang/lib/Serialization/MultiOnDiskHashTable.h
+++ b/clang/lib/Serialization/MultiOnDiskHashTable.h
@@ -73,6 +73,51 @@ template <typename Info> class MultiOnDiskHashTable {
   struct MergedTable {
     std::vector<file_type> Files;
     llvm::DenseMap<internal_key_type, data_type> Data;
+
+    struct UnreadDataInfo {
+      Info *InfoObj;
+      const unsigned char *start;
+      unsigned DataLen;
+    };
+
+    llvm::DenseMap<internal_key_type, llvm::SmallVector<UnreadDataInfo, 16>>
+        UnReadData;
+
+    std::vector<OnDiskTable *> MergedOnDiskTables;
+
+    void clear() {
+      for (OnDiskTable *ODT : MergedOnDiskTables)
+        delete ODT;
+
+      MergedOnDiskTables.clear();
+    }
+
+    void readAll() {
+      for (const auto &Iter : UnReadData) {
+        internal_key_type Key = Iter.first;
+        data_type_builder ValueBuilder(Data[Key]);
+
+        for (const UnreadDataInfo &I : Iter.second)
+          I.InfoObj->ReadDataInto(Key, I.start, I.DataLen, ValueBuilder);
+      }
+
+      UnReadData.clear();
+      clear();
+    }
+
+    data_type find(internal_key_type Key) {
+      auto UnreadIter = UnReadData.find(Key);
+      if (UnreadIter != UnReadData.end()) {
+        data_type_builder ValueBuilder(Data[Key]);
+        for (auto &I : UnreadIter->second)
+          I.InfoObj->ReadDataInto(Key, I.start, I.DataLen, ValueBuilder);
+        UnReadData.erase(UnreadIter);
+      }
+
+      return Data[Key];
+    }
+
+    ~MergedTable() { clear(); }
   };
 
   using Table = llvm::PointerUnion<OnDiskTable *, MergedTable *>;
@@ -159,13 +204,14 @@ template <typename Info> class MultiOnDiskHashTable {
         // FIXME: Don't rely on the OnDiskHashTable format here.
         auto L = InfoObj.ReadKeyDataLength(LocalPtr);
         const internal_key_type &Key = InfoObj.ReadKey(LocalPtr, L.first);
-        data_type_builder ValueBuilder(Merged->Data[Key]);
-        InfoObj.ReadDataInto(Key, LocalPtr + L.first, L.second,
-                             ValueBuilder);
+        // Make sure Merged->Data contains every key.
+        (void)Merged->Data[Key];
+        Merged->UnReadData[Key].push_back(
+            {&InfoObj, LocalPtr + L.first, L.second});
       }
 
       Merged->Files.push_back(ODT->File);
-      delete ODT;
+      Merged->MergedOnDiskTables.push_back(ODT);
     }
 
     Tables.clear();
@@ -239,11 +285,8 @@ template <typename Info> class MultiOnDiskHashTable {
     internal_key_type Key = Info::GetInternalKey(EKey);
     auto KeyHash = Info::ComputeHash(Key);
 
-    if (MergedTable *M = getMergedTable()) {
-      auto It = M->Data.find(Key);
-      if (It != M->Data.end())
-        Result = It->second;
-    }
+    if (MergedTable *M = getMergedTable())
+      Result = M->find(Key);
 
     data_type_builder ResultBuilder(Result);
 
@@ -268,6 +311,7 @@ template <typename Info> class MultiOnDiskHashTable {
       removeOverriddenTables();
 
     if (MergedTable *M = getMergedTable()) {
+      M->readAll();
       for (auto &KV : M->Data)
         Info::MergeDataInto(KV.second, ResultBuilder);
     }
@@ -327,7 +371,7 @@ class MultiOnDiskHashTableGenerator {
       // Add all merged entries from Base to the generator.
       for (auto &KV : Merged->Data) {
         if (!Gen.contains(KV.first, Info))
-          Gen.insert(KV.first, Info.ImportData(KV.second), Info);
+          Gen.insert(KV.first, Info.ImportData(Merged->find(KV.first)), Info);
       }
     } else {
       Writer.write(0);
```




https://github.com/llvm/llvm-project/pull/95506


[clang] [Serialization] Don't read all declaration id eagerly when merge the tables (PR #95506)

2024-06-13 Thread Chuanqi Xu via cfe-commits

https://github.com/ChuanqiXu9 created 
https://github.com/llvm/llvm-project/pull/95506

See the post commit message in
https://github.com/llvm/llvm-project/pull/92083 for rationale.

Previously, when we merged the lookup tables, we read them completely to get
the data. But the page above shows that this can be problematic in specific
cases where there are a lot of DeclIDs to read.

In this patch, we mitigate the problem by not eagerly reading the contents of
the merged tables, at the cost of some additional memory usage.

>From 5adb69ef86de078f2b6446a16501941dfbdf4f57 Mon Sep 17 00:00:00 2001
From: Chuanqi Xu 
Date: Fri, 14 Jun 2024 13:05:05 +0800
Subject: [PATCH] [Serialization] Don't read all declaration id eagerly when
 merge the tables

See the post commit message in
https://github.com/llvm/llvm-project/pull/92083 for rationale.

Previously, when we merged the lookup tables, we read them completely
to get the data. But the page above shows that this can be problematic
in specific cases where there are a lot of DeclIDs to read.

In this patch, we mitigate the problem by not eagerly reading the
contents of the merged tables, at the cost of some additional memory
usage.
---
 .../lib/Serialization/MultiOnDiskHashTable.h  | 64 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 54 insertions(+), 10 deletions(-)

diff --git a/clang/lib/Serialization/MultiOnDiskHashTable.h b/clang/lib/Serialization/MultiOnDiskHashTable.h
index a0d75ec3a9e76..f98394e904549 100644
--- a/clang/lib/Serialization/MultiOnDiskHashTable.h
+++ b/clang/lib/Serialization/MultiOnDiskHashTable.h
@@ -73,6 +73,51 @@ template <typename Info> class MultiOnDiskHashTable {
   struct MergedTable {
     std::vector<file_type> Files;
     llvm::DenseMap<internal_key_type, data_type> Data;
+
+    struct UnreadDataInfo {
+      Info *InfoObj;
+      const unsigned char *start;
+      unsigned DataLen;
+    };
+
+    llvm::DenseMap<internal_key_type, llvm::SmallVector<UnreadDataInfo, 16>>
+        UnReadData;
+
+    std::vector<OnDiskTable *> MergedOnDiskTables;
+
+    void clear() {
+      for (OnDiskTable *ODT : MergedOnDiskTables)
+        delete ODT;
+
+      MergedOnDiskTables.clear();
+    }
+
+    void readAll() {
+      for (const auto &Iter : UnReadData) {
+        internal_key_type Key = Iter.first;
+        data_type_builder ValueBuilder(Data[Key]);
+
+        for (const UnreadDataInfo &I : Iter.second)
+          I.InfoObj->ReadDataInto(Key, I.start, I.DataLen, ValueBuilder);
+      }
+
+      UnReadData.clear();
+      clear();
+    }
+
+    data_type find(internal_key_type Key) {
+      auto UnreadIter = UnReadData.find(Key);
+      if (UnreadIter != UnReadData.end()) {
+        data_type_builder ValueBuilder(Data[Key]);
+        for (auto &I : UnreadIter->second)
+          I.InfoObj->ReadDataInto(Key, I.start, I.DataLen, ValueBuilder);
+        UnReadData.erase(UnreadIter);
+      }
+
+      return Data[Key];
+    }
+
+    ~MergedTable() { clear(); }
   };
 
   using Table = llvm::PointerUnion<OnDiskTable *, MergedTable *>;
@@ -159,13 +204,14 @@ template <typename Info> class MultiOnDiskHashTable {
         // FIXME: Don't rely on the OnDiskHashTable format here.
         auto L = InfoObj.ReadKeyDataLength(LocalPtr);
         const internal_key_type &Key = InfoObj.ReadKey(LocalPtr, L.first);
-        data_type_builder ValueBuilder(Merged->Data[Key]);
-        InfoObj.ReadDataInto(Key, LocalPtr + L.first, L.second,
-                             ValueBuilder);
+        // Make sure Merged->Data contains every key.
+        (void)Merged->Data[Key];
+        Merged->UnReadData[Key].push_back(
+            {&InfoObj, LocalPtr + L.first, L.second});
       }
 
       Merged->Files.push_back(ODT->File);
-      delete ODT;
+      Merged->MergedOnDiskTables.push_back(ODT);
     }
 
     Tables.clear();
@@ -239,11 +285,8 @@ template <typename Info> class MultiOnDiskHashTable {
     internal_key_type Key = Info::GetInternalKey(EKey);
     auto KeyHash = Info::ComputeHash(Key);
 
-    if (MergedTable *M = getMergedTable()) {
-      auto It = M->Data.find(Key);
-      if (It != M->Data.end())
-        Result = It->second;
-    }
+    if (MergedTable *M = getMergedTable())
+      Result = M->find(Key);
 
     data_type_builder ResultBuilder(Result);
 
@@ -268,6 +311,7 @@ template <typename Info> class MultiOnDiskHashTable {
       removeOverriddenTables();
 
     if (MergedTable *M = getMergedTable()) {
+      M->readAll();
      for (auto &KV : M->Data)
         Info::MergeDataInto(KV.second, ResultBuilder);
     }
@@ -327,7 +371,7 @@ class MultiOnDiskHashTableGenerator {
       // Add all merged entries from Base to the generator.
       for (auto &KV : Merged->Data) {
         if (!Gen.contains(KV.first, Info))
-          Gen.insert(KV.first, Info.ImportData(KV.second), Info);
+          Gen.insert(KV.first, Info.ImportData(Merged->find(KV.first)), Info);
       }
     } else {
       Writer.write(0);