vitalybuka wrote:
This failure is caused by the patch
https://lab.llvm.org/buildbot/#/builders/239/builds/6363/steps/10/logs/stdio
cc @hctim
https://github.com/llvm/llvm-project/pull/65742
urnathan wrote:
Be aware of bug #87227 and PR #87238, which addresses it.
https://github.com/llvm/llvm-project/pull/65742
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
urnathan wrote:
Committed, expect tailclipping cleanup PR soon
https://github.com/urnathan closed
AtariDreams wrote:
Ping?
https://github.com/rjmccall approved this pull request.
Okay. Thanks for putting up with such a protracted review! I really like
where we've ended up.
LGTM. Please wait a few days before merging to give other reviewers a chance
to give final feedback.
github-actions[bot] wrote:
:white_check_mark: With the latest revision this PR passed the Python code
formatter.
urnathan wrote:
@rjmccall thanks, here is an update with those changes. I've collapsed it to
the 3 commits mentioned earlier
1) Tests marked up for the current behaviour
2) TargetInfo strict/flexible alignment load predicate
3) The new algorithm
@@ -439,82 +444,247 @@
CGRecordLowering::accumulateBitFields(RecordDecl::field_iterator Field,
Members.push_back(MemberInfo(bitsToCharUnits(StartBitOffset),
MemberInfo::Field, nullptr, *Field));
}
-return;
+return Field;
https://github.com/rjmccall edited
https://github.com/rjmccall commented:
This looks great, thanks! I have a few editorial nits, and there's an
assertion that seems off, but otherwise this looks ready to go.
urnathan wrote:
Sorry to push another update, but I realized the LimitOffset computation could
be sunk to the point of use, and therefore not computed in all cases.
rjmccall wrote:
Yeah, we don't need to care about the actual bit offset of the zero-width
bit-field as long as we honor the non-interference properties across it. I'll
take a look at the patch, thanks.
urnathan wrote:
@rjmccall here is a rebase and update, which I think addresses all your
comments. I did make some material changes though:
1) I removed the Volatile handling. I was always a little uncomfortable with it
because it didn't affect the access units of a non-volatile bitfield that e
@@ -439,82 +444,194 @@
CGRecordLowering::accumulateBitFields(RecordDecl::field_iterator Field,
Members.push_back(MemberInfo(bitsToCharUnits(StartBitOffset),
MemberInfo::Field, nullptr, *Field));
}
-return;
+return Field;
wzssyqa wrote:
> For MIPSr6, it is just like AArch64: some microarchitectures don't support
> misaligned accesses well at the hardware level, so we need an option to
> disable it; the kernel may need it.
>
> For GCC, we have `-mno-unaligned-access`. We also need to add this option to
> clang.
https://github.com/rjmccall commented:
Thank you, the structure looks great! Mostly style and clarity comments from
here.
urnathan wrote:
@rjmccall here's a refactoring along the lines you suggested. Once one stops
worrying about rescanning when the next access unit fails to accumulate into
the current one, things get simpler. The compiler should be able to turn the
boolean conditional flow within the loop body i
urnathan wrote:
This update
* moves the creation and installation of the BitFieldAccessUnit objects into
CGRecordLowering.
* adds aarch64 and arm darwin target tests, as those have different layout
rules (let me know if I've missed something here).
* Simplifies the checking of barriers -- an e
@@ -394,33 +412,155 @@ void CGRecordLowering::accumulateFields() {
: getStorageType(*Field),
*Field));
++Field;
-} else {
- ++Field;
}
}
}
-void
-CGRecordLowering::accumulateBitFields(RecordDecl::field_iterator Field,
-
@@ -858,6 +861,10 @@ class TargetInfo : public TransferrableTargetInfo,
return PointerWidth;
}
+ /// Return true, iff unaligned accesses are a single instruction (rather than
rjmccall wrote:
```suggestion
/// Return true iff unaligned accesses are a
```
https://github.com/rjmccall commented:
I think using the existing information as a default is fine, but please
continue to call the hook something like `hasCheapUnalignedAccess()` to
preserve the intent, even if it always just calls the other hook.
urnathan wrote:
> Thinking further, several (AArch64, ARM & Loongson) targets have a
> 'HasUnaligned' bool in their TargetInfo objects. Perhaps the way forwards is
> to promote that to the base TargetInfo and just use that, rather than the
> bitfield-specific field I added (which just
github-actions[bot] wrote:
:warning: C/C++ code formatter, clang-format found issues in your code.
:warning:
You can test this locally with the following command:
```bash
git-clang-format --diff 8cfb71613c452dd45a84a74affe8464bfd33de02 b4cb8945224633fef27b69f0eed34f505032f76e --
```
urnathan wrote:
> Yeah, LLVM supports changing subtarget options on a per-function basis. We
> would presumably make the query based on the global setting.
>
> Anyway, getting this information from LLVM doesn't seem tractable in the
> short-to-medium term; it's just unfortunate.
Thinking furt
@@ -132,6 +132,7 @@ class LLVM_LIBRARY_VISIBILITY LoongArch64TargetInfo
: LoongArchTargetInfo(Triple, Opts) {
LongWidth = LongAlign = PointerWidth = PointerAlign = 64;
IntMaxType = Int64Type = SignedLong;
+HasCheapUnalignedBitfieldAccess = true;
--
rjmccall wrote:
> > On the target hook, it's a shame we can't easily get this information from
> > LLVM. I believe it's already there — `TargetLowering` has an
> > `allowsMisalignedMemoryAccesses` method that includes some approximation of
> > how fast a particular access would be. In practice
urnathan wrote:
> On the target hook, it's a shame we can't easily get this information from
> LLVM. I believe it's already there — `TargetLowering` has an
> `allowsMisalignedMemoryAccesses` method that includes some approximation of
> how fast a particular access would be. In practice, it see
https://github.com/wzssyqa edited
wzssyqa wrote:
> `-mno-unaligned-access` has been added since clang 17:
> https://reviews.llvm.org/D149946
This option is for LoongArch instead of MIPSr6.
https://github.com/llvm/llvm-project/pull/65742
___
cfe-commits mailing list
cfe-commits@lists.l
SixWeining wrote:
> For GCC, we have `-mno-unaligned-access`. We also need to add this option to
> clang.
`-mno-unaligned-access` has been added since clang 17:
https://reviews.llvm.org/D149946
wzssyqa wrote:
For MIPSr6, it is just like AArch64: some microarchitectures don't support
misaligned accesses well at the hardware level, so we need an option to
disable it; the kernel may need it.
For GCC, we have `-mno-unaligned-access`. We also need to add this option to clang.
rjmccall wrote:
> @rjmccall thanks for your comments. Here's a reimplementation using a single
> loop -- the downside is that we may end up repeating the initial grouping
> scan, if we search more than 1 Access Unit ahead and fail to find a 'good'
> access unit. This update changes the algorit
urnathan wrote:
@rjmccall thanks for your comments. Here's a reimplementation using a single
loop -- the downside is that we may end up repeating the initial grouping scan,
if we search more than 1 Access Unit ahead and fail to find a 'good' access
unit. This update changes the algorithm sli
@@ -376,33 +377,41 @@ void CGRecordLowering::lowerUnion(bool isNoUniqueAddress)
{
}
void CGRecordLowering::accumulateFields() {
- for (RecordDecl::field_iterator Field = D->field_begin(),
- FieldEnd = D->field_end();
-Field != FieldEnd;)
@@ -442,79 +455,235 @@
CGRecordLowering::accumulateBitFields(RecordDecl::field_iterator Field,
return;
}
- // Check if OffsetInRecord (the size in bits of the current run) is better
- // as a single field run. When OffsetInRecord has legal integer width, and
- // its
@@ -415,12 +424,16 @@
CGRecordLowering::accumulateBitFields(RecordDecl::field_iterator Field,
uint64_t StartBitOffset, Tail = 0;
if (isDiscreteBitFieldABI()) {
for (; Field != FieldEnd; ++Field) {
- uint64_t BitOffset = getFieldBitOffset(*Field);
+ if (!Field
urnathan wrote:
Can someone please review this?
urnathan wrote:
ping?
urnathan wrote:
Rebased
llvmbot wrote:
@llvm/pr-subscribers-backend-webassembly
@llvm/pr-subscribers-clang
Author: Nathan Sidwell (urnathan)
Changes
Reimplement the SysV (non-Microsoft) bitfield access unit computation. For
avoidance of doubt, this is not an ABI change, but the memory accesses Clang
emits to ac
urnathan wrote:
> I'm planning to take a closer look at this when I have more time. Sorry I
> haven't been more responsive here.
When might that be?
urnathan wrote:
You knew this ping was coming, right?
urnathan wrote:
thanks @yonghong-song!
Taking the opportunity for a ping :)
yonghong-song wrote:
With this pull request, I tried to build linux kernel and kernel bpf selftest
and run the selftest, and didn't find any issues. So presumably the
implementation is correct.
I tried a particular kernel bpf selftest:
https://github.com/torvalds/linux/blob/master/tools/tes
urnathan wrote:
> One very brief note: in the comments in the code, you might want to
> distinguish between the semantic width of a bitfield (i.e. the C standard
> notion of a "memory location", which has ABI significance), vs. the accesses
> we choose to generate (we don't need to generate n
efriedma-quic wrote:
I'm planning to take a closer look at this when I have more time. Sorry I
haven't been more responsive here.
One very brief note: in the comments in the code, you might want to distinguish
between the semantic width of a bitfield (i.e. the C standard notion of a
"memory
nigelp-xmos wrote:
XCore: yes, the current setting of the new flag (the default false value) is
right for XCore. Misaligned loads can generate a function call. We do not have
benchmarks as such, but comparing the code generated for the test files touched
here, there is an improvement. In partic
urnathan wrote:
time for another ping, I think ...
urnathan wrote:
Ping? Anything more I can do to progress this?
https://github.com/urnathan edited
https://github.com/urnathan ready_for_review
urnathan wrote:
thanks for your comments,
> When I refer to CSE/DSE, I'm mostly talking about keeping values in registers
> for longer. They don't know anything about individual fields in bitfields. If
> we split bitfields too narrowly, we end up with extra memory accesses for
> accessing mul
efriedma-quic wrote:
> Hm, I've been thinking of it the opposite way round -- merging bitfields is
> throwing away information (about where cuts might be).
In essence, late optimizations can make accesses narrower, but it can't make
them wider. The information about cuts is generally recovera
urnathan wrote:
> The advantage of exposing the wide accesses to the optimizer is that it
> allows memory optimizations, like CSE or DSE, to reason about the entire
> bitfield as a single unit, and easily eliminate accesses. Reducing the size
> of bitfield accesses in the frontend is basically
urnathan wrote:
> > > Note that changing the memory accesses performed by clang (load or store)
> > > _is_ an ABI change at IR level because of UB.
> >
> >
> > Could you clarify why UB is a consideration here?
>
> There's some weirdness with poison values; see
> https://reviews.llvm.org/D129
urnathan wrote:
> Also, please have a look at the existing `-ffine-grained-bitfield-accesses`
> flag and the discussions around it and IPO.
That flag's behaviour is unchanged here -- it continues to not merge access
units. My contention here is that (a) merging units can be advantageous, but
efriedma-quic wrote:
> > Note that changing the memory accesses performed by clang (load or store)
> > _is_ an ABI change at IR level because of UB.
>
> Could you clarify why UB is a consideration here?
There's some weirdness with poison values; see https://reviews.llvm.org/D129541
etc. This
urnathan wrote:
> Note that changing the memory accesses performed by clang (load or store)
> _is_ an ABI change at IR level because of UB.
Could you clarify why UB is a consideration here?