[meta-intel] [PATCH 4/4] linux-intel: enable Intel NPU config

2024-05-30 Thread Naveen Saini
Enables the Intel NPU (present on 14th-generation Intel CPUs (Meteor Lake) and newer),
a CPU-integrated inference accelerator for
Computer Vision and Deep Learning applications.

Signed-off-by: Naveen Saini 
---
 recipes-kernel/linux/linux-intel_6.6.bb | 3 ++-
 recipes-kernel/linux/linux-intel_6.8.bb | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/recipes-kernel/linux/linux-intel_6.6.bb b/recipes-kernel/linux/linux-intel_6.6.bb
index fa2c9053..5c262f5e 100644
--- a/recipes-kernel/linux/linux-intel_6.6.bb
+++ b/recipes-kernel/linux/linux-intel_6.6.bb
@@ -17,6 +17,7 @@ SRCREV_meta ?= "66bebb6789d02e775d4c93d7ca4bf79c2ead4b28"
 
 # Functionality flags
 KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc \
-features/security/security.scc"
+features/security/security.scc \
+features/intel-npu/intel-npu.scc"
 
 UPSTREAM_CHECK_GITTAGREGEX = "^lts-(?P<pver>v6.6.(\d+)-linux-(\d+)T(\d+)Z)$"
diff --git a/recipes-kernel/linux/linux-intel_6.8.bb b/recipes-kernel/linux/linux-intel_6.8.bb
index a28c79ae..30343357 100644
--- a/recipes-kernel/linux/linux-intel_6.8.bb
+++ b/recipes-kernel/linux/linux-intel_6.8.bb
@@ -16,6 +16,7 @@ SRCREV_meta ?= "d6379f226f25136d9292f09cd7c11921f0bbcd9b"
 
 # Functionality flags
 KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc \
-features/security/security.scc"
+features/security/security.scc \
+features/intel-npu/intel-npu.scc"
 
 UPSTREAM_CHECK_GITTAGREGEX = "^mainline-tracking-v6.7-rc3-linux-(?P<pver>(\d+)T(\d+)Z)$"
-- 
2.43.0


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#8344): 
https://lists.yoctoproject.org/g/meta-intel/message/8344
Mute This Topic: https://lists.yoctoproject.org/mt/106384848/21656
Group Owner: meta-intel+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/meta-intel/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[meta-intel] [PATCH 3/4] linux-intel-rt/6.6: update to tag lts-v6.6.30-rt30-preempt-rt-240520T163730Z

2024-05-30 Thread Naveen Saini
Update kernel-cache too.

Signed-off-by: Naveen Saini 
---
 recipes-kernel/linux/linux-intel-rt_6.6.bb | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/recipes-kernel/linux/linux-intel-rt_6.6.bb b/recipes-kernel/linux/linux-intel-rt_6.6.bb
index 342679eb..319918cf 100644
--- a/recipes-kernel/linux/linux-intel-rt_6.6.bb
+++ b/recipes-kernel/linux/linux-intel-rt_6.6.bb
@@ -21,9 +21,9 @@ DEPENDS += "elfutils-native openssl-native util-linux-native"
 
 LINUX_VERSION_EXTENSION ??= "-intel-pk-${LINUX_KERNEL_TYPE}"
 
-LINUX_VERSION ?= "6.6.25"
-SRCREV_machine ?= "f8939454cf9bb7277239bb44e90c99474c599f37"
-SRCREV_meta ?= "c3d1322fb6ff68cdcf4d7a3c1140d81bfdc1320a"
+LINUX_VERSION ?= "6.6.30"
+SRCREV_machine ?= "ffb1894c2ca4fcb0f5a6b59ddb4e25a1124158cc"
+SRCREV_meta ?= "66bebb6789d02e775d4c93d7ca4bf79c2ead4b28"
 
 LINUX_KERNEL_TYPE = "preempt-rt"
 
-- 
2.43.0





[meta-intel] [PATCH 1/4] linux-intel/6.8: update to tag mainline-tracking-v6.8-linux-240509T064507Z

2024-05-30 Thread Naveen Saini
No need to enable IOMMU explicitly [1]

[1] https://git.yoctoproject.org/yocto-kernel-cache/commit/?id=c4e3facab8b3be91a10c99ac66e8c3a4c7696075

Signed-off-by: Naveen Saini 
---
 recipes-kernel/linux/linux-intel_6.8.bb | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/recipes-kernel/linux/linux-intel_6.8.bb b/recipes-kernel/linux/linux-intel_6.8.bb
index f2212250..a28c79ae 100644
--- a/recipes-kernel/linux/linux-intel_6.8.bb
+++ b/recipes-kernel/linux/linux-intel_6.8.bb
@@ -11,12 +11,11 @@ DEPENDS += "elfutils-native openssl-native util-linux-native"
 LINUX_VERSION_EXTENSION ??= "-mainline-tracking-${LINUX_KERNEL_TYPE}"
 
 LINUX_VERSION ?= "6.8"
-SRCREV_machine ?= "efbae83db36adb946d4f7bbdfda174107cd2"
-SRCREV_meta ?= "27907f391a4fc508da21358b13419c6e86926c34"
+SRCREV_machine ?= "4b78f19d1c451c3738b10d489e67977e97036a7f"
+SRCREV_meta ?= "d6379f226f25136d9292f09cd7c11921f0bbcd9b"
 
 # Functionality flags
 KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc \
-features/security/security.scc \
-features/iommu/iommu.scc"
+features/security/security.scc"
 
 UPSTREAM_CHECK_GITTAGREGEX = "^mainline-tracking-v6.7-rc3-linux-(?P<pver>(\d+)T(\d+)Z)$"
-- 
2.43.0





[meta-intel] [PATCH 2/4] linux-intel/6.6: update to tag lts-v6.6.30-linux-240517T123905Z

2024-05-30 Thread Naveen Saini
No need to enable IOMMU explicitly [1]

[1] https://git.yoctoproject.org/yocto-kernel-cache/commit/?h=yocto-6.6&id=49698cadd79745fa26aa7ef507c16902250c1750

Signed-off-by: Naveen Saini 
---
 recipes-kernel/linux/linux-intel_6.6.bb | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/recipes-kernel/linux/linux-intel_6.6.bb b/recipes-kernel/linux/linux-intel_6.6.bb
index 6c7aab17..fa2c9053 100644
--- a/recipes-kernel/linux/linux-intel_6.6.bb
+++ b/recipes-kernel/linux/linux-intel_6.6.bb
@@ -11,13 +11,12 @@ DEPENDS += "elfutils-native openssl-native util-linux-native"
 
 LINUX_VERSION_EXTENSION ??= "-intel-pk-${LINUX_KERNEL_TYPE}"
 
-LINUX_VERSION ?= "6.6.25"
-SRCREV_machine ?= "lts-v6.6.25-linux-240415T215440Z"
-SRCREV_meta ?= "c3d1322fb6ff68cdcf4d7a3c1140d81bfdc1320a"
+LINUX_VERSION ?= "6.6.30"
+SRCREV_machine ?= "86a43fc66c95e24b7cc9e3adf2f4874b589bf9d5"
+SRCREV_meta ?= "66bebb6789d02e775d4c93d7ca4bf79c2ead4b28"
 
 # Functionality flags
 KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc \
-features/security/security.scc \
-features/iommu/iommu.scc"
+features/security/security.scc"
 
 UPSTREAM_CHECK_GITTAGREGEX = "^lts-(?P<pver>v6.6.(\d+)-linux-(\d+)T(\d+)Z)$"
-- 
2.43.0





[jira] [Created] (HIVE-28286) Add filtering support for get_table_metas API in Hive metastore

2024-05-29 Thread Naveen Gangam (Jira)
Naveen Gangam created HIVE-28286:


 Summary: Add filtering support for get_table_metas API in Hive metastore
 Key: HIVE-28286
 URL: https://issues.apache.org/jira/browse/HIVE-28286
 Project: Hive
  Issue Type: Bug
  Components: Standalone Metastore
Affects Versions: 4.0.0
Reporter: Naveen Gangam
Assignee: Naveen Gangam


Hive Metastore supports filtering objects through the plugin authorizer for
some APIs, such as getTables(), getDatabases(), and getDataConnectors(). However,
the same should be done for the get_table_metas() API call.
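A minimal, hypothetical sketch of the intended behavior (the `TableMeta` record and `isAuthorized` predicate below are simplified stand-ins, not Hive classes; the real change would route the get_table_metas() result through the configured authorizer plugin, just as getTables() and getDatabases() already do):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: TableMeta stands in for Hive's table-metadata
// record, and isAuthorized() stands in for the pluggable authorizer decision.
public class TableMetaFilterSketch {
    record TableMeta(String dbName, String tableName) {}

    // Stand-in authorizer: deny everything in the "secret" database.
    static boolean isAuthorized(TableMeta meta) {
        return !meta.dbName().equals("secret");
    }

    // Filter the raw metastore result before returning it to the client,
    // mirroring what the already-filtered listing APIs do via the plugin hook.
    static List<TableMeta> filterTableMetas(List<TableMeta> metas) {
        List<TableMeta> allowed = new ArrayList<>();
        for (TableMeta m : metas) {
            if (isAuthorized(m)) {
                allowed.add(m);
            }
        }
        return allowed;
    }

    public static void main(String[] args) {
        List<TableMeta> raw = List.of(
            new TableMeta("sales", "orders"),
            new TableMeta("secret", "salaries"));
        System.out.println(filterTableMetas(raw).size()); // prints 1
    }
}
```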



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Node requires maintenance, non-empty set of maintenance tasks is found - node is not coming up

2024-05-29 Thread Naveen Kumar
Thanks very much for your prompt response, Gianluca.

Just for the community: I was able to solve this by running control.sh with
reset_lost_partitions for the individual caches. That resolved the partition
issue, and I suppose there shouldn't be any data loss since all our caches are
configured with 2 replicas.

As for the node that was not joining the cluster earlier: I removed it from the
baseline --> cleared all its persistence store --> brought the node up -->
added it back to the baseline. This also seems to have worked fine.

Thanks


On Wed, May 29, 2024 at 5:13 PM Gianluca Bonetti wrote:

> Hello Naveen
>
> Apache Ignite 2.13 is more than 2 years old, 25 months old in actual fact.
> Three bugfix releases had been rolled out over time up to 2.16 release.
>
> It seems you are restarting your cluster on a regular basis, so you'd
> better upgrade to 2.16 as soon as possible.
> Otherwise it will also be very difficult for people on a community based
> mailing list, on volunteer time, to work out a solution with a 2 years old
> version running.
>
> Besides that, you are not providing very much information about your
> cluster setup.
> How many nodes, what infrastructure, how many caches, overall data size.
> One could only guess you have more than 1 node running, with at least 1
> cache, and non-empty dataset. :)
>
> This document from GridGain may be helpful but I don't see the same for
> Ignite, it may still be worth checking it out.
>
> https://www.gridgain.com/docs/latest/perf-troubleshooting-guide/maintenance-mode
>
> On the other hand you should also check your failing node.
> If it is always the same node failing, then there should be some root
> cause apart from Ignite.
> Indeed if the nodes configuration is the same across all nodes, and just
> this one fails, you should also consider some network issues (check
> connectivity and network latency between nodes) and hardware related issues
> (faulty disks, faulty memory)
> In the end, one option might be to replace the faulty machine with a brand
> new one.
> In cloud environments this is actually quite cheap and easy to do.
>
> Cheers
> Gianluca
>
> On Wed, 29 May 2024 at 08:43, Naveen Kumar 
> wrote:
>
>> Hello All
>>
>> We are using Ignite 2.13.0
>>
>> After a cluster restart, one of the node is not coming up and in node
>> logs are seeing this error - Node requires maintenance, non-empty set of
>> maintainance  tasks is found - node is not coming up
>>
>> we are getting errors like time out is reached before computation is
>> completed error in other nodes as well.
>>
>> I could see that, we have control.sh script to backup and clean up the
>> corrupted files, but when I run the command, it fails.
>>
>> I have removed the node from baseline and tried to run as well, still its
>> failing
>>
>> what could be the solution for this, cluster is functioning,
>> however there are requests failing
>>
>> Is there anyway we can start ignite node in  maintenance mode and try
>> running clean corrupted commands
>>
>> Thanks
>> Naveen
>>
>>
>>

-- 
Thanks & Regards,
Naveen Bandaru


Node requires maintenance, non-empty set of maintenance tasks is found - node is not coming up

2024-05-29 Thread Naveen Kumar
Hello All

We are using Ignite 2.13.0.

After a cluster restart, one of the nodes is not coming up, and in its logs we
are seeing this error: "Node requires maintenance, non-empty set of
maintenance tasks is found" - the node is not coming up.

We are also getting errors like "timeout is reached before computation is
completed" on the other nodes.

I see that we have the control.sh script to back up and clean up the
corrupted files, but when I run the command, it fails.

I have removed the node from the baseline and tried running the command again,
but it still fails.

What could be the solution for this? The cluster is functioning, however some
requests are failing.

Is there any way we can start the Ignite node in maintenance mode and try
running the clean-corrupted-files commands?

Thanks
Naveen


[meta-intel] [PATCH] linux-npu-driver: add recipe

2024-05-28 Thread Naveen Saini
This recipe adds the user-mode driver for the Intel® NPU device.
The Intel® NPU is an AI inference accelerator integrated
with Intel client CPUs, starting from the Intel® Core™ Ultra generation
of CPUs (formerly known as Meteor Lake).
It enables energy-efficient execution of artificial neural network tasks.

https://github.com/intel/linux-npu-driver

Signed-off-by: Naveen Saini 
---
 ...ilation-warning-when-using-gcc-13-25.patch |  99 
 ...-Fix-compilation-failure-with-GCC-14.patch | 110 ++
 .../linux-npu-driver_1.2.0.bb |  33 ++
 3 files changed, 242 insertions(+)
 create mode 100644 dynamic-layers/openembedded-layer/recipes-core/linux-npu-driver/linux-npu-driver/0001-Fix-the-compilation-warning-when-using-gcc-13-25.patch
 create mode 100644 dynamic-layers/openembedded-layer/recipes-core/linux-npu-driver/linux-npu-driver/0002-Fix-compilation-failure-with-GCC-14.patch
 create mode 100644 dynamic-layers/openembedded-layer/recipes-core/linux-npu-driver/linux-npu-driver_1.2.0.bb

diff --git a/dynamic-layers/openembedded-layer/recipes-core/linux-npu-driver/linux-npu-driver/0001-Fix-the-compilation-warning-when-using-gcc-13-25.patch b/dynamic-layers/openembedded-layer/recipes-core/linux-npu-driver/linux-npu-driver/0001-Fix-the-compilation-warning-when-using-gcc-13-25.patch
new file mode 100644
index 00000000..2748d7ab
--- /dev/null
+++ b/dynamic-layers/openembedded-layer/recipes-core/linux-npu-driver/linux-npu-driver/0001-Fix-the-compilation-warning-when-using-gcc-13-25.patch
@@ -0,0 +1,99 @@
+From b57297c14d94dac9bdef7570b7b33d70b10171f3 Mon Sep 17 00:00:00 2001
+From: Jozef Wludzik 
+Date: Tue, 26 Mar 2024 14:43:29 +0100
+Subject: [PATCH 1/2] Fix the compilation warning when using gcc-13 (#25)
+
+Added missing headers. Fixed compilation error about casting from
+unsigned to signed int.
+
+Upstream-Status: Backport [https://github.com/intel/linux-npu-driver/commit/4bcbf2abe94eb4d9c083bd616b58e309a82d008a]
+
+Signed-off-by: Jozef Wludzik 
+Signed-off-by: Naveen Saini 
+---
+ umd/level_zero_driver/ext/source/graph/vcl_symbols.hpp | 7 ---
+ umd/vpu_driver/include/umd_common.hpp  | 1 +
+ validation/umd-test/umd_prime_buffers.h| 9 +++--
+ validation/umd-test/utilities/data_handle.h| 1 +
+ 4 files changed, 13 insertions(+), 5 deletions(-)
+
+diff --git a/umd/level_zero_driver/ext/source/graph/vcl_symbols.hpp b/umd/level_zero_driver/ext/source/graph/vcl_symbols.hpp
+index f206ebe..682e5b4 100644
+--- a/umd/level_zero_driver/ext/source/graph/vcl_symbols.hpp
++++ b/umd/level_zero_driver/ext/source/graph/vcl_symbols.hpp
+@@ -5,12 +5,13 @@
+  *
+  */
+ 
+-#include 
+-#include 
+-
+ #include "vpux_driver_compiler.h"
+ #include "vpu_driver/source/utilities/log.hpp"
+ 
++#include 
++#include 
++#include 
++
+ class Vcl {
+   public:
+ static Vcl () {
+diff --git a/umd/vpu_driver/include/umd_common.hpp b/umd/vpu_driver/include/umd_common.hpp
+index 0c874a3..5ad9be2 100644
+--- a/umd/vpu_driver/include/umd_common.hpp
++++ b/umd/vpu_driver/include/umd_common.hpp
+@@ -7,6 +7,7 @@
+ 
+ #pragma once
+ 
++#include 
+ #include 
+ #include 
+ #include 
+diff --git a/validation/umd-test/umd_prime_buffers.h b/validation/umd-test/umd_prime_buffers.h
+index 6f7c7de..ab4814c 100644
+--- a/validation/umd-test/umd_prime_buffers.h
++++ b/validation/umd-test/umd_prime_buffers.h
+@@ -6,12 +6,17 @@
+  */
+ 
+ #pragma once
++
++#include "umd_test.h"
++
+ #include 
+-#include 
+ #include 
+ #include 
++#include 
++#include 
+ #include 
+ #include 
++#include 
+ 
+ #define ALLIGN_TO_PAGE(x) __ALIGN_KERNEL((x), (UmdTest::PAGE_SIZE))
+ 
+@@ -60,7 +65,7 @@ class PrimeBufferHelper {
+ return false;
+ 
+ bufferFd = heapAlloc.fd;
+-buffers.insert({heapAlloc.fd, {size, nullptr}});
++buffers.insert({static_cast(heapAlloc.fd), {size, nullptr}});
+ return true;
+ }
+ 
+diff --git a/validation/umd-test/utilities/data_handle.h b/validation/umd-test/utilities/data_handle.h
+index d6e0ec0..5d937b2 100644
+--- a/validation/umd-test/utilities/data_handle.h
++++ b/validation/umd-test/utilities/data_handle.h
+@@ -6,6 +6,7 @@
+  */
+ 
+ #include 
++#include 
+ #include 
+ #include 
+ 
+-- 
+2.43.0
+
diff --git a/dynamic-layers/openembedded-layer/recipes-core/linux-npu-driver/linux-npu-driver/0002-Fix-compilation-failure-with-GCC-14.patch b/dynamic-layers/openembedded-layer/recipes-core/linux-npu-driver/linux-npu-driver/0002-Fix-compilation-failure-with-GCC-14.patch
new file mode 100644
index 00000000..9fb97354
--- /dev/null
+++ b/dynamic-layers/openembedded-layer/recipes-core/linux-npu-driver/linux-npu-driver/0002-Fix-compilation-failure-with-GCC-14.patch
@@ -0,0 +1,110 @@
+From a9f51fd88effb7d324609e692ca7da576d6dad2e Mon Sep 17 00:00:00 2001
+From: Naveen Saini 
+Date: Tue, 28 May 2024 10:23:42 +0800
+Subject: [PATCH 2/2] Fix compilation failure with GCC-14
+
+umd/level

Re: [linux-yocto] [kernel-cache][PATCH] features/intel-npu: introduce Intel NPU fragment

2024-05-26 Thread Naveen Saini
Hi Bruce.  I missed your reply.

Please merge it into both 6.6 and master.

Regards,
Naveen

> -Original Message-
> From: linux-yocto@lists.yoctoproject.org  yo...@lists.yoctoproject.org> On Behalf Of Bruce Ashfield
> Sent: Thursday, May 16, 2024 10:23 PM
> To: Saini, Naveen Kumar 
> Cc: linux-yocto@lists.yoctoproject.org
> Subject: Re: [linux-yocto] [kernel-cache][PATCH] features/intel-npu:
> introduce Intel NPU fragment
> 
> Which branches were you looking at for this?
> 
> 6.6 and master ?
> 
> just master ?
> 
> Bruce
> 
> In message: [kernel-cache][PATCH] features/intel-npu: introduce Intel NPU
> fragment on 15/05/2024 Naveen Saini wrote:
> 
> > Add config fragment for a system with a 14th-generation Intel CPU
> > (Meteor Lake) or newer. It will allow users to enable Intel NPU
> > (formerly called Intel VPU) which is a CPU-integrated inference
> > accelerator for Computer Vision and Deep Learning applications.
> >
> > Signed-off-by: Naveen Saini 
> > ---
> >  features/intel-npu/intel-npu.cfg | 3 +++
> > features/intel-npu/intel-npu.scc | 4 
> >  kern-features.rc | 1 +
> >  3 files changed, 8 insertions(+)
> >  create mode 100644 features/intel-npu/intel-npu.cfg  create mode
> > 100644 features/intel-npu/intel-npu.scc
> >
> > diff --git a/features/intel-npu/intel-npu.cfg
> > b/features/intel-npu/intel-npu.cfg
> > new file mode 100644
> > index ..6b7ced30
> > --- /dev/null
> > +++ b/features/intel-npu/intel-npu.cfg
> > @@ -0,0 +1,3 @@
> > +# SPDX-License-Identifier: MIT
> > +CONFIG_DRM_ACCEL=y
> > +CONFIG_DRM_ACCEL_IVPU=m
> > diff --git a/features/intel-npu/intel-npu.scc
> > b/features/intel-npu/intel-npu.scc
> > new file mode 100644
> > index ..782c8499
> > --- /dev/null
> > +++ b/features/intel-npu/intel-npu.scc
> > @@ -0,0 +1,4 @@
> > +# SPDX-License-Identifier: MIT
> > +define KFEATURE_DESCRIPTION "Enable Intel NPU for Computer Vision
> and Deep Learning applications"
> > +
> > +kconf hardware intel-npu.cfg
> > diff --git a/kern-features.rc b/kern-features.rc index
> > 0e83053c..14381cc8 100644
> > --- a/kern-features.rc
> > +++ b/kern-features.rc
> > @@ -72,6 +72,7 @@
> > config = features/lxc/lxc-enable.scc
> > config = features/inline/inline.scc
> > config = features/intel-tco/intel-tco.scc
> > +   config = features/intel-npu/intel-npu.scc
> > config = features/ftrace/ftrace-function-tracer-disable.scc
> > config = features/ftrace/ftrace.scc
> > config = features/vxlan/vxlan-enable.scc
> > --
> > 2.37.3
> >




View Support for Hive Catalog

2024-05-24 Thread Naveen Kumar
Hi Everyone,

As part of issue-8698 <https://github.com/apache/iceberg/issues/8698>,
I have been working on this PR <https://github.com/apache/iceberg/pull/9852>.
Since Hive has different complexity compared to other catalogs, it would
be really helpful if someone familiar with Hive looked into this. In
the earlier discussion, Peter and Szehon have already shared their thoughts.

Please take a look and let me know your comments.

Issue:https://github.com/apache/iceberg/issues/8698
PR: https://github.com/apache/iceberg/pull/9852

Regards,
Naveen Kumar


Re: [Discuss] Heap pressure with RewriteFiles APIs

2024-05-24 Thread Naveen Kumar
Hi Amogh,

Thanks for your feedback.
It really sounds like a good idea to me. For heavy operations like
compaction and GC, it can reduce heap pressure.

However, can't we do something where we don't need to hold all the
dataFiles in memory? Especially for the rewrite cases, what would be the harm
in flushing to a manifest as soon as possible? At commit time we would only
need to look at the created manifests and save them into the new snapshot.

Please share your thoughts.

Thanks,
Naveen Kumar
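
The early-flush idea above can be illustrated with a small, self-contained sketch (the names `ManifestSpillSketch`, `writeManifest`, and `spillInBatches` are invented for illustration and are not Iceberg APIs): instead of accumulating every rewritten file entry until commit, entries are spilled to disk in fixed-size batches, so memory holds at most one batch plus the much smaller list of spill-file paths.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of "flush to manifest as soon as possible", not Iceberg's
// actual implementation. Each full batch is written out as a stand-in
// "manifest" file; the commit step then only needs the manifest paths.
public class ManifestSpillSketch {

    // Write one batch of entries to a temp "manifest" file and return its path.
    static Path writeManifest(List<String> entries) {
        try {
            Path manifest = Files.createTempFile("manifest-", ".txt");
            Files.write(manifest, entries);
            return manifest;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // At most `batchSize` entries are buffered in memory at any point.
    static List<Path> spillInBatches(List<String> filePaths, int batchSize) {
        List<Path> manifests = new ArrayList<>();
        List<String> buffer = new ArrayList<>();
        for (String p : filePaths) {
            buffer.add(p);
            if (buffer.size() == batchSize) {
                manifests.add(writeManifest(buffer));
                buffer = new ArrayList<>(); // drop the flushed batch
            }
        }
        if (!buffer.isEmpty()) {
            manifests.add(writeManifest(buffer));
        }
        return manifests;
    }

    public static void main(String[] args) {
        List<String> paths = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            paths.add("s3://bucket/data/file-" + i + ".parquet");
        }
        // 10 entries with a batch size of 4 -> 3 manifest files (4 + 4 + 2).
        System.out.println(spillInBatches(paths, 4).size()); // prints 3
    }
}
```

With this shape, peak memory scales with the batch size rather than with the total number of rewritten files.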



On Wed, May 22, 2024 at 9:51 PM Amogh Jahagirdar  wrote:

> I'd think chunking the work as much as possible, and disabling metrics for
> columns where they're not helpful probably goes far but perhaps may be
> insufficient for extreme cases.
> I've also been thinking about if there are better space-efficient data
> structures for maintaining file paths which exploit the fact that there's a
> common location prefix for the files. Specifically, I was thinking of radix
> trees (compressed tries) https://en.wikipedia.org/wiki/Radix_tree.
>
> For example if the file paths are all
> "s3:/data//file.parquet";
> a normal hashset when the set is very large is just going to have
> many files, many of which repeat the same prefix. With a radix tree, file
> paths should (in theory) consume significantly less memory, because there'd
> be a single representation of
> "s3:/data/"
> instead of a million times in the case of a million data files.
>
> In Iceberg, we probably wouldn't really be leveraging the efficient prefix
> lookups that this data structure provides since we don't really need that
> operation, but it's lookups on keys should be as good as a hashset with the
> additional benefit of the reduced memory consumption due to exploiting the
> nature of these file paths. I've played around with this idea in the remove
> orphan files procedure https://github.com/apache/iceberg/pull/10229, but
> still need to collect data points on the benefits. I also plan on writing a
> benchmark which will generate a bunch of these files and use
> instrumentation to see the memory consumption.
>
> Thanks,
>
> Amogh Jahagirdar
>
>
>
> On Wed, May 22, 2024 at 1:15 AM Naveen Kumar  wrote:
>
>> Hi Szehon,
>>
>> Thanks for your email.
>>
>> I agree configuring metadata metrics per column will create a smaller
>> manifest file with lower and upper bounds per content entry. Assuming your 
>> patch
>> <https://github.com/apache/iceberg/pull/2608>is merged, it will works as
>> following:
>>
>>1. A user should identify all the columns on which pruning is not
>>needed.
>>2. Updated the table properties and disabled metrics on those columns.
>>3. Run repair manifests <https://github.com/apache/iceberg/pull/2608>.
>>4. Run compaction now.
>>
>> However this will still not solve the original problem where *Set<DataFile>,
>> Set<DeleteFile>* has grown to a significantly big number (say 1M). This
>> might be very rare but I have seen examples where a user never ran
>> compaction and after adding a new partition column they are trying to
>> compact the entire table.
>>
>> Is this a valid use case? WDYT?
>>
>> Regards,
>> Naveen Kumar
>>
>>
>>
>> On Tue, May 21, 2024 at 10:47 PM Szehon Ho 
>> wrote:
>>
>>> Hi Naveen
>>>
>>> Yes it sounds like it will help to disable metrics for those columns?
>>> Iirc, by default it manifest entries have metrics at 'truncate(16)' level
>>> for 100 columns, which as you see can be quite memory intensive.  A
>>> potential improvement later also is to have the ability to remove counts by
>>> config, though need to confirm if that is feasible.
>>>
>>> Unfortunately today the new metrics config will only apply to new data
>>> files  (you have to rewrite them all, or otherwise phase old data files
>>> out).  I had a patch awhile back to add support for rewriting just manifest
>>> with new metric config but was not merged yet, if any reviewer has time to
>>> review, I can work on it again.
>>> https://github.com/apache/iceberg/pull/2608
>>> <https://github.com/apache/iceberg/pull/2608>
>>>
>>> Thanks
>>> Szehon
>>>
>>> On Tue, May 21, 2024 at 1:43 AM Naveen Kumar  wrote:
>>>
>>>> Hi Everyone,
>>>>
>>>> I am looking into RewriteFiles
>>>> <https://github.com/apache/iceberg/blob/8d6bee736884575da7368e0963268d1cbe362d90/api/src/main/java/org/apache/iceberg/RewriteFiles.java>
>>>> APIs and its implementation BaseRewriteFiles
>>>> <https://github.com/ap

Re: [meta-intel] Upgrading from #kirkstone to #scarthgap image-installer.wks.in not working

2024-05-22 Thread Naveen Saini
Have you tried this change?
https://git.yoctoproject.org/meta-intel/commit/?id=91ff1977d641a054b44d0c9a40a400fecb777bd6

With this change, we do not see any build break.

Regards,
Naveen

From: meta-intel@lists.yoctoproject.org On Behalf Of Michael Lynch
Sent: Thursday, May 23, 2024 5:13 AM
To: meta-intel@lists.yoctoproject.org
Subject: [meta-intel] Upgrading from #kirkstone to #scarthgap 
image-installer.wks.in not working

Hello all, I hope this is the right place to ask this.  I'm in the process of 
upgrading from kirkstone to scarthgap and everything was going smoothly until I 
got to image (genericx86-64 based) creation.  I'm creating a self installer 
boot image using image-installer.wks.in (more precisely a copy of it) from 
meta-intel and it is failing with the following error:
ERROR: _exec_cmd: install -m 0644 -D 
/home/mlynch/SGTest/build/SynergyII/Controller/intel/tmp/work/genericx86_64-poky-linux/synergy2-image/1.0/deploy-synergy2-image-image-complete/synergy2-image-genericx86-64.ext4
 
/home/mlynch/SGTest/build/SynergyII/Controller/intel/tmp/work/genericx86_64-poky-linux/synergy2-image/1.0/tmp-wic/boot.2/rootfs.img
 returned '1' instead of 0
output: install: cannot stat 
'/home/mlynch/SGTest/build/SynergyII/Controller/intel/tmp/work/genericx86_64-poky-linux/synergy2-image/1.0/deploy-synergy2-image-image-complete/synergy2-image-genericx86-64.ext4':
 No such file or directory

When I check, the file it is complaining about does not exist.  I've tracked 
this down to being caused by the highlighted portion of the line below from 
image-installer.wks.in:
part /boot --source bootimg-efi 
--sourceparams="loader=${EFI_PROVIDER},title=install,label=install-efi,initrd=microcode.cpio;${INITRD_IMAGE_LIVE}-${MACHINE}.${INITRAMFS_FSTYPES}"
 --ondisk sda --label install --active --align 1024 --use-uuid

If I remove the semicolon and either microcode.cpio or
${INITRD_IMAGE_LIVE}-${MACHINE}.${INITRAMFS_FSTYPES}, the missing file gets
created and a WIC file is produced (but it won't boot). For example, if I
change the line in either of the two ways shown below, the error does not occur
and a WIC file is produced, but it will not boot.

As I stated, I am upgrading from kirkstone to scarthgap, and the above line
works with the kirkstone branch. Note that I am working from a fresh install of
poky (scarthgap) and using the same machine I use for the kirkstone branch.
Given that the image-installer.wks.in file from meta-intel hasn't really
changed but no longer appears to work, I'm assuming something outside of the
layer changed and is responsible for the issue. It appears that the semicolon
in the initrd= portion of the part command is causing the issue. With the
working version my grub.cfg has a line that reads "initrd /microcode.cpio
/synergy2-image-initramfs-genericx86-64.cpio.gz", and in the failing case it
does not. Thanks in advance for any guidance anyone can offer, even if that's
just pointing me in a direction to look.

part /boot --source bootimg-efi 
--sourceparams="loader=${EFI_PROVIDER},title=install,label=install-efi,initrd=${INITRD_IMAGE_LIVE}-${MACHINE}.${INITRAMFS_FSTYPES}"
 --ondisk sda --label install --active --align 1024 --use-uuid

part /boot --source bootimg-efi 
--sourceparams="loader=${EFI_PROVIDER},title=install,label=install-efi,initrd=microcode.cpio"
 --ondisk sda --label install --active --align 1024 --use-uuid
-- Mike




Re: [Discuss] Heap pressure with RewriteFiles APIs

2024-05-22 Thread Naveen Kumar
Hi Szehon,

Thanks for your email.

I agree that configuring metadata metrics per column will create a smaller
manifest file with lower and upper bounds per content entry. Assuming
your patch
<https://github.com/apache/iceberg/pull/2608>is merged, it will work as
follows:

   1. Identify all the columns on which pruning is not needed.
   2. Update the table properties and disable metrics on those columns.
   3. Run repair manifests <https://github.com/apache/iceberg/pull/2608>.
   4. Run compaction.

However this will still not solve the original problem where *Set<DataFile>,
Set<DeleteFile>* has grown to a significantly big number (say 1M). This
might be very rare but I have seen examples where a user never ran
compaction and after adding a new partition column they are trying to
compact the entire table.

Is this a valid use case? WDYT?

Regards,
Naveen Kumar



On Tue, May 21, 2024 at 10:47 PM Szehon Ho  wrote:

> Hi Naveen
>
> Yes it sounds like it will help to disable metrics for those columns?
> Iirc, by default it manifest entries have metrics at 'truncate(16)' level
> for 100 columns, which as you see can be quite memory intensive.  A
> potential improvement later also is to have the ability to remove counts by
> config, though need to confirm if that is feasible.
>
> Unfortunately today the new metrics config will only apply to new data
> files  (you have to rewrite them all, or otherwise phase old data files
> out).  I had a patch awhile back to add support for rewriting just manifest
> with new metric config but was not merged yet, if any reviewer has time to
> review, I can work on it again.
> https://github.com/apache/iceberg/pull/2608
> <https://github.com/apache/iceberg/pull/2608>
>
> Thanks
> Szehon
>
> On Tue, May 21, 2024 at 1:43 AM Naveen Kumar  wrote:
>
>> Hi Everyone,
>>
>> I am looking into RewriteFiles
>> <https://github.com/apache/iceberg/blob/8d6bee736884575da7368e0963268d1cbe362d90/api/src/main/java/org/apache/iceberg/RewriteFiles.java>
>> APIs and its implementation BaseRewriteFiles
>> <https://github.com/apache/iceberg/blob/8d6bee736884575da7368e0963268d1cbe362d90/core/src/main/java/org/apache/iceberg/BaseRewriteFiles.java>.
>> Currently this works as following:
>>
>>1. It accumulates all the files for addition and deletions.
>>2. At time of commit, it creates a new snapshot after adding all the
>>entries to corresponding manifest files.
>>
>> It has been observed that if the accumulated file objects are of huge
>> size it takes a lot of memory.
>> *eg*: Each dataFile object is of size *1KB*. Total accumulated(additions
>> or deletions) size is *1 million. *
>> Total memory consumed by *RewriteFiles* will be around *1GB*.
>>
>> Such dataset can happen with following reasons:
>>
>>1. Table is very wide with say 1000 columns.
>>2. Most of the columns are of String data type, which can take more
>>space to store lower bound and upper bound.
>>3. Table has billions of records with millions of data files.
>>4. It is running data compaction procedures/jobs for the first time.
>>    5. Or, the table was un-partitioned and later evolved with new partition
>>    columns.
>>    6. Now it is trying to compact the table.
>>
>> Attaching a heap dump from one of the datasets, taken while using this API:
>>
>>> RewriteFiles rewriteFiles(
>>> Set<DataFile> removedDataFiles,
>>> Set<DeleteFile> removedDeleteFiles,
>>> Set<DataFile> addedDataFiles,
>>> Set<DeleteFile> addedDeleteFiles)
>>>
>>>
>> [image: Screenshot 2024-01-11 at 10.01.54 PM.png]
>> We do have properties like PARTIAL_PROGRESS_ENABLED_DEFAULT
>> <https://github.com/apache/iceberg/blob/8d6bee736884575da7368e0963268d1cbe362d90/api/src/main/java/org/apache/iceberg/actions/RewriteDataFiles.java#L45C11-L45C43>,
>> which helps create smaller groups and multiple commits with configuration
>> PARTIAL_PROGRESS_MAX_COMMITS_DEFAULT
>> <https://github.com/apache/iceberg/blob/8d6bee736884575da7368e0963268d1cbe362d90/api/src/main/java/org/apache/iceberg/actions/RewriteDataFiles.java#L53C7-L53C43>.
>> Currently, engines like Spark can follow this strategy. But since Spark
>> runs all the compaction jobs concurrently, many jobs can land on the same
>> machines and accumulate high memory usage.
>>
>> My question is: can we make these implementations
>> <https://github.com/apache/iceberg/blob/8d6bee736884575da7368e0963268d1cbe362d90/api/src/main/java/org/apache/iceberg/actions/RewriteDataFiles.java#L53C7-L53C43>
>> better, to avoid heap pressure? Also, has anyone encountered similar
>> issues, and if so, how did they fix them?
>>
>> Regards,
>> Naveen Kumar
>>
>>
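For reference, the per-column metrics tuning Szehon mentions is controlled through Iceberg table properties in the `write.metadata.metrics.*` namespace; a minimal sketch is below (the column name `wide_col` is hypothetical, used only for illustration):

```
# Collect only value/null counts by default, instead of full min/max bounds
write.metadata.metrics.default=counts
# Drop metrics entirely for a specific wide, low-value column
write.metadata.metrics.column.wide_col=none
```

As noted above, changing these properties only affects newly written data files; existing files keep the metrics they were written with.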


[Discuss] Heap pressure with RewriteFiles APIs

2024-05-21 Thread Naveen Kumar
Hi Everyone,

I am looking into RewriteFiles
<https://github.com/apache/iceberg/blob/8d6bee736884575da7368e0963268d1cbe362d90/api/src/main/java/org/apache/iceberg/RewriteFiles.java>
APIs and its implementation BaseRewriteFiles
<https://github.com/apache/iceberg/blob/8d6bee736884575da7368e0963268d1cbe362d90/core/src/main/java/org/apache/iceberg/BaseRewriteFiles.java>.
Currently, this works as follows:

   1. It accumulates all the files for addition and deletions.
   2. At time of commit, it creates a new snapshot after adding all the
   entries to corresponding manifest files.

It has been observed that if the accumulated file objects are large in
aggregate, they take a lot of memory.
*e.g.*: each DataFile object is of size *1 KB*, and the total number of
accumulated (added or deleted) files is *1 million*.
Total memory consumed by *RewriteFiles* will then be around *1 GB*.

Such a dataset can arise for the following reasons:

   1. Table is very wide with say 1000 columns.
   2. Most of the columns are of String data type, which can take more
   space to store lower bound and upper bound.
   3. Table has billions of records with millions of data files.
   4. It is running data compaction procedures/jobs for the first time.
   5. Or, the table was un-partitioned and later evolved with new partition
   columns.
   6. Now it is trying to compact the table.

Attaching a heap dump from one of the datasets, taken while using this API:

> RewriteFiles rewriteFiles(
> Set<DataFile> removedDataFiles,
> Set<DeleteFile> removedDeleteFiles,
> Set<DataFile> addedDataFiles,
> Set<DeleteFile> addedDeleteFiles)
>
>
[image: Screenshot 2024-01-11 at 10.01.54 PM.png]
We do have properties like PARTIAL_PROGRESS_ENABLED_DEFAULT
<https://github.com/apache/iceberg/blob/8d6bee736884575da7368e0963268d1cbe362d90/api/src/main/java/org/apache/iceberg/actions/RewriteDataFiles.java#L45C11-L45C43>,
which helps create smaller groups and multiple commits with configuration
PARTIAL_PROGRESS_MAX_COMMITS_DEFAULT
<https://github.com/apache/iceberg/blob/8d6bee736884575da7368e0963268d1cbe362d90/api/src/main/java/org/apache/iceberg/actions/RewriteDataFiles.java#L53C7-L53C43>.
Currently, engines like Spark can follow this strategy. But since Spark
runs all the compaction jobs concurrently, many jobs can land on the same
machines and accumulate high memory usage.

My question is: can we make these implementations
<https://github.com/apache/iceberg/blob/8d6bee736884575da7368e0963268d1cbe362d90/api/src/main/java/org/apache/iceberg/actions/RewriteDataFiles.java#L53C7-L53C43>
better, to avoid heap pressure? Also, has anyone encountered similar
issues, and if so, how did they fix them?

Regards,
Naveen Kumar


Re: [Intel-wired-lan] [PATCH iwl-net] ice: implement AQ download pkg retry

2024-05-17 Thread Naveen Mamindlapalli


> -Original Message-
> From: Wojciech Drewek 
> Sent: Thursday, May 16, 2024 7:34 PM
> To: net...@vger.kernel.org
> Cc: intel-wired-...@lists.osuosl.org
> Subject: [PATCH iwl-net] ice: implement AQ download pkg retry
> 
> The ice_aqc_opc_download_pkg (0x0C40) AQ command sporadically returns an
> error due to a FW issue. Fix this by retrying up to five times before
> moving to Safe Mode.
> 
> Reviewed-by: Michal Swiatkowski 
> Signed-off-by: Wojciech Drewek 
> ---
>  drivers/net/ethernet/intel/ice/ice_ddp.c | 19 +--
>  1 file changed, 17 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.c
> b/drivers/net/ethernet/intel/ice/ice_ddp.c
> index ce5034ed2b24..19e2111fcf08 100644
> --- a/drivers/net/ethernet/intel/ice/ice_ddp.c
> +++ b/drivers/net/ethernet/intel/ice/ice_ddp.c
> @@ -1339,6 +1339,7 @@ ice_dwnld_cfg_bufs_no_lock(struct ice_hw *hw, struct
> ice_buf *bufs, u32 start,
> 
>   for (i = 0; i < count; i++) {
>   bool last = false;
> + int try_cnt = 0;
>   int status;
> 
>   bh = (struct ice_buf_hdr *)(bufs + start + i);
> @@ -1346,8 +1347,22 @@ ice_dwnld_cfg_bufs_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 start,
>   if (indicate_last)
>   last = ice_is_last_download_buffer(bh, i, count);
> 
> - status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, last,
> -  &offset, &info, NULL);
> + while (try_cnt < 5) {
> + status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE,
> +  last, &offset, &info, NULL);
> + if (hw->adminq.sq_last_status != ICE_AQ_RC_ENOSEC &&
> + hw->adminq.sq_last_status != ICE_AQ_RC_EBADSIG)
> + break;
> +
> + try_cnt++;
> + msleep(20);
> + }
> +
> + if (try_cnt)
> + dev_dbg(ice_hw_to_dev(hw),
> + "ice_aq_download_pkg failed, number of retries: %d\n",
> + try_cnt);

Do you really need this dbg statement when try_cnt < 5? Isn't it misleading
in the success case (with retries)?

Thanks,
Naveen

> 
>   /* Save AQ status from download package */
>   if (status) {
> --
> 2.40.1
> 



[meta-intel] [PATCH] intel-microcode: upgrade 20240312 -> 20240514

2024-05-16 Thread Naveen Saini
Release notes:
https://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files/releases/tag/microcode-20240514

Fixes CVEs:
CVE-2023-45733 
[https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-01051.html]
CVE-2023-46103 
[https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-01052.html]
CVE-2023-45745,CVE-2023-47855 
[https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-01036.html]

Signed-off-by: Naveen Saini 
---
 ...{intel-microcode_20240312.bb => intel-microcode_20240514.bb} | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
 rename recipes-core/microcode/{intel-microcode_20240312.bb => 
intel-microcode_20240514.bb} (97%)

diff --git a/recipes-core/microcode/intel-microcode_20240312.bb 
b/recipes-core/microcode/intel-microcode_20240514.bb
similarity index 97%
rename from recipes-core/microcode/intel-microcode_20240312.bb
rename to recipes-core/microcode/intel-microcode_20240514.bb
index 00b18231..d73b892c 100644
--- a/recipes-core/microcode/intel-microcode_20240312.bb
+++ b/recipes-core/microcode/intel-microcode_20240514.bb
@@ -16,7 +16,7 @@ LIC_FILES_CHKSUM = 
"file://license;md5=d8405101ec6e90c1d84b082b0c40c721"
 SRC_URI = 
"git://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files.git;protocol=https;branch=main
 \
"
 
-SRCREV = "41af34500598418150aa298bb04e7edacc547897"
+SRCREV = "27ace91db4c06b251f6935343c31a5fa4a65cf22"
 
 DEPENDS = "iucode-tool-native"
 S = "${WORKDIR}/git"
-- 
2.43.0


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#8327): 
https://lists.yoctoproject.org/g/meta-intel/message/8327
Mute This Topic: https://lists.yoctoproject.org/mt/106147602/21656
Group Owner: meta-intel+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/meta-intel/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [VOTE] Mark Hive 2.x EOL

2024-05-15 Thread Naveen Gangam
+1 Totally

On Tue, May 14, 2024 at 3:30 AM Zoltán Rátkai 
wrote:

> +1 (non-binding)
>
> Regards,
>
> Zoltan Ratkai
>
> On Tue, May 14, 2024 at 5:42 AM Sourabh Badhya
>  wrote:
>
>> +1 (non-binding)
>>
>> Regards,
>> Sourabh Badhya
>>
>> On Mon, May 13, 2024 at 10:31 PM Krisztian Kasa
>>  wrote:
>>
>>> +1 (binding)
>>>
>>> On Mon, May 13, 2024 at 4:55 PM Okumin  wrote:
>>>
 +1 (non-binding)

 I appreciate the community's efforts in maintaining 2.x for so long.

 Thanks,
 Okumin

 On Sat, May 11, 2024 at 1:57 AM Abhishek Gupta 
 wrote:
 >
 > Unsubscribe
 >
 > On Fri, 10 May 2024 at 10:26 PM, Aman Sinha 
 wrote:
 >>
 >> +1 (non-binding)
 >>
 >> On Fri, May 10, 2024 at 7:57 AM Mahesh Raju Somalaraju <
 maheshra...@cloudera.com.invalid> wrote:
 >>>
 >>> +1(non-binding)
 >>>
 >>> Thanks
 >>> Mahesh Raju S
 >>>
 >>> On Fri, 10 May 2024, 06:15 Ayush Saxena, 
 wrote:
 
  Hi All,
  Following the discussion at [1]. Starting the official vote thread
 to
  mark Hive 2.x release line as EOL.
 
  Marking a release lines as EOL means there won't be any further
  release made for that release line
 
  I will start with my +1
 
  -Ayush
 
 
  [1]
 https://lists.apache.org/thread/91wk3oy1qo953md7941ojg2q97ofsl2d

>>>


[linux-yocto] [kernel-cache][PATCH] features/intel-npu: introduce Intel NPU fragment

2024-05-15 Thread Naveen Saini
Add a config fragment for systems with a 14th generation
Intel CPU (Meteor Lake) or newer. It allows users to
enable the Intel NPU (formerly called Intel VPU),
which is a CPU-integrated inference accelerator for
Computer Vision and Deep Learning applications.

Signed-off-by: Naveen Saini 
---
 features/intel-npu/intel-npu.cfg | 3 +++
 features/intel-npu/intel-npu.scc | 4 
 kern-features.rc | 1 +
 3 files changed, 8 insertions(+)
 create mode 100644 features/intel-npu/intel-npu.cfg
 create mode 100644 features/intel-npu/intel-npu.scc

diff --git a/features/intel-npu/intel-npu.cfg b/features/intel-npu/intel-npu.cfg
new file mode 100644
index ..6b7ced30
--- /dev/null
+++ b/features/intel-npu/intel-npu.cfg
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: MIT
+CONFIG_DRM_ACCEL=y
+CONFIG_DRM_ACCEL_IVPU=m
diff --git a/features/intel-npu/intel-npu.scc b/features/intel-npu/intel-npu.scc
new file mode 100644
index ..782c8499
--- /dev/null
+++ b/features/intel-npu/intel-npu.scc
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: MIT
+define KFEATURE_DESCRIPTION "Enable Intel NPU for Computer Vision and Deep 
Learning applications"
+
+kconf hardware intel-npu.cfg
diff --git a/kern-features.rc b/kern-features.rc
index 0e83053c..14381cc8 100644
--- a/kern-features.rc
+++ b/kern-features.rc
@@ -72,6 +72,7 @@
config = features/lxc/lxc-enable.scc
config = features/inline/inline.scc
config = features/intel-tco/intel-tco.scc
+   config = features/intel-npu/intel-npu.scc
config = features/ftrace/ftrace-function-tracer-disable.scc
config = features/ftrace/ftrace.scc
config = features/vxlan/vxlan-enable.scc
-- 
2.37.3


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#13937): 
https://lists.yoctoproject.org/g/linux-yocto/message/13937
Mute This Topic: https://lists.yoctoproject.org/mt/106109710/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[OE-core] [PATCH] gstreamer1.0-plugins-bad: rename onevpl-intel-gpu -> vpl-gpu-rt

2024-05-15 Thread Naveen Saini
Upstream has been renamed to vpl-gpu-rt.

Signed-off-by: Naveen Saini 
---
 .../gstreamer/gstreamer1.0-plugins-bad_1.22.11.bb   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git 
a/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.22.11.bb 
b/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.22.11.bb
index 523ee7a5ae..49a3a79b2d 100644
--- a/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.22.11.bb
+++ b/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.22.11.bb
@@ -60,7 +60,7 @@ PACKAGECONFIG[libde265]= 
"-Dlibde265=enabled,-Dlibde265=disabled,libde26
 PACKAGECONFIG[libssh2] = 
"-Dcurl-ssh2=enabled,-Dcurl-ssh2=disabled,libssh2"
 PACKAGECONFIG[lcms2]   = 
"-Dcolormanagement=enabled,-Dcolormanagement=disabled,lcms"
 PACKAGECONFIG[modplug] = 
"-Dmodplug=enabled,-Dmodplug=disabled,libmodplug"
-PACKAGECONFIG[msdk]= "-Dmsdk=enabled 
-Dmfx_api=oneVPL,-Dmsdk=disabled,onevpl-intel-gpu"
+PACKAGECONFIG[msdk]= "-Dmsdk=enabled 
-Dmfx_api=oneVPL,-Dmsdk=disabled,vpl-gpu-rt"
 PACKAGECONFIG[neon]= "-Dneon=enabled,-Dneon=disabled,neon"
 PACKAGECONFIG[openal]  = 
"-Dopenal=enabled,-Dopenal=disabled,openal-soft"
 PACKAGECONFIG[opencv]  = "-Dopencv=enabled,-Dopencv=disabled,opencv"
-- 
2.37.3


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#199271): 
https://lists.openembedded.org/g/openembedded-core/message/199271
Mute This Topic: https://lists.openembedded.org/mt/106109503/21656
Group Owner: openembedded-core+ow...@lists.openembedded.org
Unsubscribe: https://lists.openembedded.org/g/openembedded-core/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [PATCH v2] arch/powerpc: Remove unused cede related functions

2024-05-14 Thread Naveen N Rao
On Tue, May 14, 2024 at 06:54:55PM GMT, Gautam Menghani wrote:
> Remove extended_cede_processor() and its helpers as
> extended_cede_processor() has no callers since
> commit 48f6e7f6d948("powerpc/pseries: remove cede offline state for CPUs")
> 
> Signed-off-by: Gautam Menghani 
> ---
> v1 -> v2:
> 1. Remove helpers of extended_cede_processor()

Acked-by: Naveen N Rao 

> 
>  arch/powerpc/include/asm/plpar_wrappers.h | 28 ---
>  1 file changed, 28 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/plpar_wrappers.h 
> b/arch/powerpc/include/asm/plpar_wrappers.h
> index b3ee44a40c2f..71648c126970 100644
> --- a/arch/powerpc/include/asm/plpar_wrappers.h
> +++ b/arch/powerpc/include/asm/plpar_wrappers.h
> @@ -18,16 +18,6 @@ static inline long poll_pending(void)
>   return plpar_hcall_norets(H_POLL_PENDING);
>  }
>  
> -static inline u8 get_cede_latency_hint(void)
> -{
> - return get_lppaca()->cede_latency_hint;
> -}
> -
> -static inline void set_cede_latency_hint(u8 latency_hint)
> -{
> - get_lppaca()->cede_latency_hint = latency_hint;
> -}
> -
>  static inline long cede_processor(void)
>  {
>   /*
> @@ -37,24 +27,6 @@ static inline long cede_processor(void)
>   return plpar_hcall_norets_notrace(H_CEDE);
>  }
>  
> -static inline long extended_cede_processor(unsigned long latency_hint)
> -{
> - long rc;
> - u8 old_latency_hint = get_cede_latency_hint();
> -
> - set_cede_latency_hint(latency_hint);
> -
> - rc = cede_processor();
> -
> - /* Ensure that H_CEDE returns with IRQs on */
> - if (WARN_ON(IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG) && !(mfmsr() & 
> MSR_EE)))
> - __hard_irq_enable();
> -
> - set_cede_latency_hint(old_latency_hint);
> -
> - return rc;
> -}
> -
>  static inline long vpa_call(unsigned long flags, unsigned long cpu,
>   unsigned long vpa)
>  {
> -- 
> 2.45.0
> 


Re: [PATCH] arch/powerpc: Remove the definition of unused cede function

2024-05-14 Thread Naveen N Rao
On Tue, May 14, 2024 at 03:35:03PM GMT, Gautam Menghani wrote:
> Remove extended_cede_processor() definition as it has no callers since
> commit 48f6e7f6d948("powerpc/pseries: remove cede offline state for CPUs")

extended_cede_processor() was added in commit 69ddb57cbea0 
("powerpc/pseries: Add extended_cede_processor() helper function."), 
which also added [get|set]_cede_latency_hint(). Those can also be 
removed if extended_cede_processor() is no longer needed.

- Naveen

> 
> Signed-off-by: Gautam Menghani 
> ---
>  arch/powerpc/include/asm/plpar_wrappers.h | 18 --
>  1 file changed, 18 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/plpar_wrappers.h 
> b/arch/powerpc/include/asm/plpar_wrappers.h
> index b3ee44a40c2f..6431fa1e1cb1 100644
> --- a/arch/powerpc/include/asm/plpar_wrappers.h
> +++ b/arch/powerpc/include/asm/plpar_wrappers.h
> @@ -37,24 +37,6 @@ static inline long cede_processor(void)
>   return plpar_hcall_norets_notrace(H_CEDE);
>  }
>  
> -static inline long extended_cede_processor(unsigned long latency_hint)
> -{
> - long rc;
> - u8 old_latency_hint = get_cede_latency_hint();
> -
> - set_cede_latency_hint(latency_hint);
> -
> - rc = cede_processor();
> -
> - /* Ensure that H_CEDE returns with IRQs on */
> - if (WARN_ON(IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG) && !(mfmsr() & 
> MSR_EE)))
> - __hard_irq_enable();
> -
> - set_cede_latency_hint(old_latency_hint);
> -
> - return rc;
> -}
> -
>  static inline long vpa_call(unsigned long flags, unsigned long cpu,
>   unsigned long vpa)
>  {
> -- 
> 2.45.0
> 


Re: [PATCH v3 3/5] powerpc/64: Convert patch_instruction() to patch_u32()

2024-05-14 Thread Naveen N Rao
On Tue, May 14, 2024 at 04:39:30AM GMT, Christophe Leroy wrote:
> 
> 
> Le 14/05/2024 à 04:59, Benjamin Gray a écrit :
> > On Tue, 2024-04-23 at 15:09 +0530, Naveen N Rao wrote:
> >> On Mon, Mar 25, 2024 at 04:53:00PM +1100, Benjamin Gray wrote:
> >>> This use of patch_instruction() is working on 32 bit data, and can
> >>> fail
> >>> if the data looks like a prefixed instruction and the extra write
> >>> crosses a page boundary. Use patch_u32() to fix the write size.
> >>>
> >>> Fixes: 8734b41b3efe ("powerpc/module_64: Fix livepatching for RO
> >>> modules")
> >>> Link: https://lore.kernel.org/all/20230203004649.1f59dbd4@yea/
> >>> Signed-off-by: Benjamin Gray 
> >>>
> >>> ---
> >>>
> >>> v2: * Added the fixes tag, it seems appropriate even if the subject
> >>> does
> >>>    mention a more robust solution being required.
> >>>
> >>> patch_u64() should be more efficient, but judging from the bug
> >>> report
> >>> it doesn't seem like the data is doubleword aligned.
> >>
> >> Asking again, is that still the case? It looks like at least the
> >> first
> >> fix below can be converted to patch_u64().
> >>
> >> - Naveen
> > 
> > Sorry, I think I forgot this question last time. Reading the commit
> > descriptions you linked, I don't see any mention of "entry->funcdata
> > will always be doubleword aligned because XYZ". If the patch makes it
> > doubleword aligned anyway, I wouldn't be confident asserting all
> > callers will always do this without looking into it a lot more.

No worries. I was asking primarily to check if you had noticed a 
specific issue with alignment.

As Christophe mentions, the structure is aligned. It is primarily
allocated in a separate stubs section for modules. Looking at it more
closely though, I wonder if we need the below:

diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
index cccb1f78e058..0226d73a0007 100644
--- a/arch/powerpc/kernel/module_64.c
+++ b/arch/powerpc/kernel/module_64.c
@@ -428,8 +428,11 @@ int module_frob_arch_sections(Elf64_Ehdr *hdr,
 
/* Find .toc and .stubs sections, symtab and strtab */
for (i = 1; i < hdr->e_shnum; i++) {
-   if (strcmp(secstrings + sechdrs[i].sh_name, ".stubs") == 0)
+   if (strcmp(secstrings + sechdrs[i].sh_name, ".stubs") == 0) {
me->arch.stubs_section = i;
+   if (sechdrs[i].sh_addralign < 8)
+   sechdrs[i].sh_addralign = 8;
+   }
 #ifdef CONFIG_PPC_KERNEL_PCREL
else if (strcmp(secstrings + sechdrs[i].sh_name, 
".data..percpu") == 0)
me->arch.pcpu_section = i;

> > 
> > Perhaps a separate series could optimise it with appropriate
> > justification/assertions to catch bad alignment. But I think leaving it
> > out of this series is fine because the original works in words, so it's
> > not regressing anything.

That should be fine.

> 
> As far as I can see, the struct is 64 bits aligned by definition so 
> funcdata field is aligned too as there are just 8x u32 before it:
> 
> struct ppc64_stub_entry {
>   /*
>* 28 byte jump instruction sequence (7 instructions) that can
>* hold ppc64_stub_insns or stub_insns. Must be 8-byte aligned
>* with PCREL kernels that use prefix instructions in the stub.
>*/
>   u32 jump[7];
>   /* Used by ftrace to identify stubs */
>   u32 magic;
>   /* Data for the above code */
>   func_desc_t funcdata;
> } __aligned(8);
> 

Thanks,
Naveen



Re: [linux-yocto] [kernel-cache][master][yocto-6.6][PATCH] bsp/intel-corei7-64: enable Intel IOMMU support

2024-05-13 Thread Naveen Saini
Thanks Bruce. I will fix the necessary configuration.

Regards,
Naveen

> -Original Message-
> From: Bruce Ashfield 
> Sent: Tuesday, May 14, 2024 10:14 AM
> To: Saini, Naveen Kumar 
> Cc: linux-yocto@lists.yoctoproject.org
> Subject: Re: [linux-yocto] [kernel-cache][master][yocto-6.6][PATCH]
> bsp/intel-corei7-64: enable Intel IOMMU support
> 
> both patches are merged.
> 
> I had to fixup the from field so I could push the patch, it is worth double
> checking your configuration (but gmail and groups.io have been mangling
> some patches regardless of configuration)
> 
> Bruce
> 
> 
> In message: [linux-yocto] [kernel-cache][master][yocto-6.6][PATCH]
> bsp/intel-corei7-64: enable Intel IOMMU support on 09/05/2024 Naveen
> Saini via lists.yoctoproject.org wrote:
> 
> > Enable Intel IOMMU driver for intel-corei7-64 machine.
> >
> > Signed-off-by: Naveen Saini 
> > ---
> >  bsp/intel-common/intel-corei7-64.scc | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/bsp/intel-common/intel-corei7-64.scc
> > b/bsp/intel-common/intel-corei7-64.scc
> > index ad9122c1..840c739f 100644
> > --- a/bsp/intel-common/intel-corei7-64.scc
> > +++ b/bsp/intel-common/intel-corei7-64.scc
> > @@ -25,6 +25,8 @@ include features/x2apic/x2apic.scc  #
> > CONFIG_INTEL_SPEED_SELECT_INTERFACE is 64-bit only  include
> > features/intel-sst/intel-sst.scc
> >
> > +include features/iommu/iommu.scc
> > +
> >  # This line comes last as it has the final word on  # CONFIG values.
> >  kconf hardware intel-corei7-64.cfg
> > --
> > 2.37.3
> >
> 
> >
> > 
> >


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#13935): 
https://lists.yoctoproject.org/g/linux-yocto/message/13935
Mute This Topic: https://lists.yoctoproject.org/mt/105996440/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[jira] [Resolved] (HIVE-28255) Upgrade JLine to 3.x as 2.x is EOL.

2024-05-13 Thread Naveen Gangam (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam resolved HIVE-28255.
--
Fix Version/s: Not Applicable
   Resolution: Duplicate

> Upgrade JLine to 3.x as 2.x is EOL.
> ---
>
> Key: HIVE-28255
> URL: https://issues.apache.org/jira/browse/HIVE-28255
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>    Reporter: Naveen Gangam
>Priority: Major
> Fix For: Not Applicable
>
>
> Hive's Beeline uses the JLine 2.14.x release, which is EOL. We need to move to
> the latest JLine version, or at least 3.25.1.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28255) Upgrade JLine to 3.x as 2.x is EOL.

2024-05-13 Thread Naveen Gangam (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845980#comment-17845980
 ] 

Naveen Gangam commented on HIVE-28255:
--

Yes, thank you. Closing this as a duplicate. It looks like the other issue has a
PR that has gone stale as well.

> Upgrade JLine to 3.x as 2.x is EOL.
> ---
>
> Key: HIVE-28255
> URL: https://issues.apache.org/jira/browse/HIVE-28255
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>    Reporter: Naveen Gangam
>Priority: Major
>
> Hive's Beeline uses the JLine 2.14.x release, which is EOL. We need to move to
> the latest JLine version, or at least 3.25.1.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PATCH bpf v3] powerpc/bpf: enforce full ordering for ATOMIC operations with BPF_FETCH

2024-05-13 Thread Naveen N Rao
On Mon, May 13, 2024 at 10:02:48AM GMT, Puranjay Mohan wrote:
> The Linux Kernel Memory Model [1][2] requires RMW operations that have a
> return value to be fully ordered.
> 
> BPF atomic operations with BPF_FETCH (including BPF_XCHG and
> BPF_CMPXCHG) return a value back so they need to be JITed to fully
> ordered operations. POWERPC currently emits relaxed operations for
> these.
> 
> We can show this by running the following litmus-test:
> 
> PPC SB+atomic_add+fetch
> 
> {
> 0:r0=x;  (* dst reg assuming offset is 0 *)
> 0:r1=2;  (* src reg *)
> 0:r2=1;
> 0:r4=y;  (* P0 writes to this, P1 reads this *)
> 0:r5=z;  (* P1 writes to this, P0 reads this *)
> 0:r6=0;
> 
> 1:r2=1;
> 1:r4=y;
> 1:r5=z;
> }
> 
> P0  | P1;
> stw r2, 0(r4)   | stw  r2,0(r5) ;
> |   ;
> loop:lwarx  r3, r6, r0  |   ;
> mr  r8, r3  |   ;
> add r3, r3, r1  | sync  ;
> stwcx.  r3, r6, r0  |   ;
> bne loop|   ;
> mr  r1, r8  |   ;
> |   ;
> lwa r7, 0(r5)   | lwa  r7,0(r4) ;
> 
> ~exists(0:r7=0 /\ 1:r7=0)
> 
> Witnesses
> Positive: 9 Negative: 3
> Condition ~exists (0:r7=0 /\ 1:r7=0)
> Observation SB+atomic_add+fetch Sometimes 3 9
> 
> This test shows that the older store in P0 is reordered with a newer
> load to a different address. Although there is a RMW operation with
> fetch between them. Adding a sync before and after RMW fixes the issue:
> 
> Witnesses
> Positive: 9 Negative: 0
> Condition ~exists (0:r7=0 /\ 1:r7=0)
> Observation SB+atomic_add+fetch Never 0 9
> 
> [1] https://www.kernel.org/doc/Documentation/memory-barriers.txt
> [2] https://www.kernel.org/doc/Documentation/atomic_t.txt
> 
> Fixes: 65112709115f ("powerpc/bpf/64: add support for BPF_ATOMIC bitwise 
> operations")

As I noted in v2, I think that is the wrong commit. This fixes the below 
four commits in mainline:
Fixes: aea7ef8a82c0 ("powerpc/bpf/32: add support for BPF_ATOMIC bitwise 
operations")
Fixes: 2d9206b22743 ("powerpc/bpf/32: Add instructions for atomic_[cmp]xchg")
Fixes: dbe6e2456fb0 ("powerpc/bpf/64: add support for atomic fetch operations")
Fixes: 1e82dfaa7819 ("powerpc/bpf/64: Add instructions for atomic_[cmp]xchg")

> Signed-off-by: Puranjay Mohan 
> Acked-by: Paul E. McKenney 

Cc: sta...@vger.kernel.org # v6.0+

I have tested this with test_bpf and test_progs.
Reviewed-by: Naveen N Rao 


- Naveen



[jira] [Created] (HIVE-28255) Upgrade JLine to 3.x as 2.x is EOL.

2024-05-13 Thread Naveen Gangam (Jira)
Naveen Gangam created HIVE-28255:


 Summary: Upgrade JLine to 3.x as 2.x is EOL.
 Key: HIVE-28255
 URL: https://issues.apache.org/jira/browse/HIVE-28255
 Project: Hive
  Issue Type: Improvement
Affects Versions: 4.0.0
Reporter: Naveen Gangam


Hive's Beeline uses the JLine 2.14.x release, which is EOL. We need to move to
the latest JLine version, or at least 3.25.1.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[meta-intel] [PATCH] linux-intel/6.x: enable Intel IOMMU driver

2024-05-12 Thread Naveen Saini
Enable support for Intel IOMMU using DMA Remapping (DMAR) Devices.

Signed-off-by: Naveen Saini 
---
 recipes-kernel/linux/linux-intel_6.6.bb | 4 +++-
 recipes-kernel/linux/linux-intel_6.8.bb | 4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/recipes-kernel/linux/linux-intel_6.6.bb 
b/recipes-kernel/linux/linux-intel_6.6.bb
index 3b917bfa..6c7aab17 100644
--- a/recipes-kernel/linux/linux-intel_6.6.bb
+++ b/recipes-kernel/linux/linux-intel_6.6.bb
@@ -16,6 +16,8 @@ SRCREV_machine ?= "lts-v6.6.25-linux-240415T215440Z"
 SRCREV_meta ?= "c3d1322fb6ff68cdcf4d7a3c1140d81bfdc1320a"
 
 # Functionality flags
-KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc 
features/security/security.scc"
+KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc \
+features/security/security.scc \
+features/iommu/iommu.scc"
 
 UPSTREAM_CHECK_GITTAGREGEX = "^lts-(?Pv6.6.(\d+)-linux-(\d+)T(\d+)Z)$"
diff --git a/recipes-kernel/linux/linux-intel_6.8.bb 
b/recipes-kernel/linux/linux-intel_6.8.bb
index 036879db..f2212250 100644
--- a/recipes-kernel/linux/linux-intel_6.8.bb
+++ b/recipes-kernel/linux/linux-intel_6.8.bb
@@ -15,6 +15,8 @@ SRCREV_machine ?= "efbae83db36adb946d4f7bbdfda174107cd2"
 SRCREV_meta ?= "27907f391a4fc508da21358b13419c6e86926c34"
 
 # Functionality flags
-KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc 
features/security/security.scc"
+KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc \
+features/security/security.scc \
+features/iommu/iommu.scc"
 
 UPSTREAM_CHECK_GITTAGREGEX = 
"^mainline-tracking-v6.7-rc3-linux-(?P(\d+)T(\d+)Z)$"
-- 
2.37.3


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#8317): 
https://lists.yoctoproject.org/g/meta-intel/message/8317
Mute This Topic: https://lists.yoctoproject.org/mt/106066848/21656
Group Owner: meta-intel+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/meta-intel/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[linux-yocto] [kernel-cache][master][yocto-6.6][PATCH] bsp/intel-corei7-64: enable Intel IOMMU support

2024-05-08 Thread Naveen Saini
Enable Intel IOMMU driver for intel-corei7-64 machine.

Signed-off-by: Naveen Saini 
---
 bsp/intel-common/intel-corei7-64.scc | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/bsp/intel-common/intel-corei7-64.scc 
b/bsp/intel-common/intel-corei7-64.scc
index ad9122c1..840c739f 100644
--- a/bsp/intel-common/intel-corei7-64.scc
+++ b/bsp/intel-common/intel-corei7-64.scc
@@ -25,6 +25,8 @@ include features/x2apic/x2apic.scc
 # CONFIG_INTEL_SPEED_SELECT_INTERFACE is 64-bit only
 include features/intel-sst/intel-sst.scc
 
+include features/iommu/iommu.scc
+
 # This line comes last as it has the final word on
 # CONFIG values.
 kconf hardware intel-corei7-64.cfg
-- 
2.37.3


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#13926): 
https://lists.yoctoproject.org/g/linux-yocto/message/13926
Mute This Topic: https://lists.yoctoproject.org/mt/105996440/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[linux-yocto] [kernel-cache][master][yocto-6.6][PATCH] features/intel-pinctrl: add pinctrl driver for Intel Meteor Lake

2024-05-08 Thread Naveen Saini
Signed-off-by: Naveen Saini 
---
 features/intel-pinctrl/intel-pinctrl.cfg | 1 +
 1 file changed, 1 insertion(+)

diff --git a/features/intel-pinctrl/intel-pinctrl.cfg 
b/features/intel-pinctrl/intel-pinctrl.cfg
index ca928504..28abf222 100644
--- a/features/intel-pinctrl/intel-pinctrl.cfg
+++ b/features/intel-pinctrl/intel-pinctrl.cfg
@@ -15,3 +15,4 @@ CONFIG_PINCTRL_LEWISBURG=y
 CONFIG_PINCTRL_LYNXPOINT=m
 CONFIG_PINCTRL_TIGERLAKE=y
 CONFIG_PINCTRL_ELKHARTLAKE=y
+CONFIG_PINCTRL_METEORLAKE=y
-- 
2.37.3


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#13925): 
https://lists.yoctoproject.org/g/linux-yocto/message/13925
Mute This Topic: https://lists.yoctoproject.org/mt/105996436/21656
Group Owner: linux-yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/linux-yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [PATCH bpf v2] powerpc/bpf: enforce full ordering for ATOMIC operations with BPF_FETCH

2024-05-08 Thread Naveen N Rao
  EMIT(PPC_RAW_MR(ret_reg, ax_reg));
>   if (!fp->aux->verifier_zext)
>   EMIT(PPC_RAW_LI(ret_reg - 1, 0)); /* 
> higher 32-bit */
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c 
> b/arch/powerpc/net/bpf_jit_comp64.c
> index 79f23974a320..9a077f8acf7b 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -804,6 +804,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, 
> u32 *fimage, struct code
>   /* Get offset into TMP_REG_1 */
>   EMIT(PPC_RAW_LI(tmp1_reg, off));
>   tmp_idx = ctx->idx * 4;
> + /*
> +  * Enforce full ordering for operations with BPF_FETCH 
> by emitting a 'sync'
> +  * before and after the operation.
> +  *
> +  * This is a requirement in the Linux Kernel Memory 
> Model.
> +  * See __cmpxchg_u64() in asm/cmpxchg.h as an example.
> +  */
> + if (imm & BPF_FETCH && IS_ENABLED(CONFIG_SMP))
> + EMIT(PPC_RAW_SYNC());

Same here.

I'll try and give this a test tomorrow.


- Naveen
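The guard being reviewed above, emitting 'sync' only for fetching atomics and only on SMP, boils down to a one-line predicate. A minimal standalone sketch (the BPF_FETCH value matches include/uapi/linux/bpf.h; the helper name is ours, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define BPF_FETCH 0x01   /* value from include/uapi/linux/bpf.h */

/* Fetching atomics (BPF_ADD | BPF_FETCH, BPF_XCHG, BPF_CMPXCHG) must act
 * as full barriers under the Linux Kernel Memory Model, so the JIT wraps
 * them in 'sync' instructions, but only when the kernel is built SMP. */
static bool needs_full_barrier(uint32_t imm, bool config_smp)
{
    return (imm & BPF_FETCH) && config_smp;
}
```

On !SMP builds the barriers would be pure overhead, which is why the hunk checks IS_ENABLED(CONFIG_SMP).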



Re: [PATCH v4 2/2] powerpc/bpf: enable kfunc call

2024-05-07 Thread Naveen N Rao
On Thu, May 02, 2024 at 11:02:05PM GMT, Hari Bathini wrote:
> Currently, bpf jit code on powerpc assumes all the bpf functions and
> helpers to be part of core kernel text. This is false for kfunc case,
> as function addresses may not be part of core kernel text area. So,
> add support for addresses that are not within core kernel text area
> too, to enable kfunc support. Emit instructions based on whether the
> function address is within core kernel text address or not, to retain
> optimized instruction sequence where possible.
> 
> In case of PCREL, as a bpf function that is not within core kernel
> text area is likely to go out of range with relative addressing on
> kernel base, use PC relative addressing. If that goes out of range,
> load the full address with PPC_LI64().
> 
> With addresses that are not within core kernel text area supported,
> override bpf_jit_supports_kfunc_call() to enable kfunc support. Also,
> override bpf_jit_supports_far_kfunc_call() to enable 64-bit pointers,
> as an address offset can be more than 32-bit long on PPC64.
> 
> Signed-off-by: Hari Bathini 
> ---
> 
> * Changes in v4:
>   - Use either kernelbase or PC for relative addressing. Also, fallback
> to PPC_LI64(), if both are out of range.
>   - Update r2 with kernel TOC for elfv1 too as elfv1 also uses the
> optimization sequence, that expects r2 to be kernel TOC, when
> function address is within core kernel text.
> 
> * Changes in v3:
>   - Retained optimized instruction sequence when function address is
> a core kernel address as suggested by Naveen.
>   - Used unoptimized instruction sequence for PCREL addressing to
> avoid out of range errors for core kernel function addresses.
>   - Folded patch that adds support for kfunc calls with patch that
> enables/advertises this support as suggested by Naveen.
> 
> 
>  arch/powerpc/net/bpf_jit_comp.c   | 10 +
>  arch/powerpc/net/bpf_jit_comp64.c | 61 ++-
>  2 files changed, 61 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 0f9a21783329..984655419da5 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -359,3 +359,13 @@ void bpf_jit_free(struct bpf_prog *fp)
>  
>   bpf_prog_unlock_free(fp);
>  }
> +
> +bool bpf_jit_supports_kfunc_call(void)
> +{
> + return true;
> +}
> +
> +bool bpf_jit_supports_far_kfunc_call(void)
> +{
> + return IS_ENABLED(CONFIG_PPC64);
> +}
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c 
> b/arch/powerpc/net/bpf_jit_comp64.c
> index 4de08e35e284..8afc14a4a125 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -208,17 +208,13 @@ bpf_jit_emit_func_call_hlp(u32 *image, u32 *fimage, 
> struct codegen_context *ctx,
>   unsigned long func_addr = func ? ppc_function_entry((void *)func) : 0;
>   long reladdr;
>  
> - if (WARN_ON_ONCE(!core_kernel_text(func_addr)))
> + if (WARN_ON_ONCE(!kernel_text_address(func_addr)))
>   return -EINVAL;
>  
> - if (IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) {
> - reladdr = func_addr - local_paca->kernelbase;
> +#ifdef CONFIG_PPC_KERNEL_PCREL

Would be good to retain use of IS_ENABLED().
Reviewed-by: Naveen N Rao 


- Naveen



Re: [PATCH v4 1/2] powerpc64/bpf: fix tail calls for PCREL addressing

2024-05-07 Thread Naveen N Rao
On Thu, May 02, 2024 at 11:02:04PM GMT, Hari Bathini wrote:
> With PCREL addressing, there is no kernel TOC. So, it is not setup in
> prologue when PCREL addressing is used. But the number of instructions
> to skip on a tail call was not adjusted accordingly. That resulted in
> not so obvious failures while using tailcalls. 'tailcalls' selftest
> crashed the system with the below call trace:
> 
>   bpf_test_run+0xe8/0x3cc (unreliable)
>   bpf_prog_test_run_skb+0x348/0x778
>   __sys_bpf+0xb04/0x2b00
>   sys_bpf+0x28/0x38
>   system_call_exception+0x168/0x340
>   system_call_vectored_common+0x15c/0x2ec
> 
> Also, as bpf programs are always module addresses and a bpf helper in
> general is a core kernel text address, using PC relative addressing
> often fails with "out of range of pcrel address" error. Switch to
> using kernel base for relative addressing to handle this better.
> 
> Fixes: 7e3a68be42e1 ("powerpc/64: vmlinux support building with PCREL 
> addresing")
> Cc: sta...@vger.kernel.org
> Signed-off-by: Hari Bathini 
> ---
> 
> * Changes in v4:
>   - Fix out of range errors by switching to kernelbase instead of PC
> for relative addressing.
> 
> * Changes in v3:
>   - New patch to fix tailcall issues with PCREL addressing.
> 
> 
>  arch/powerpc/net/bpf_jit_comp64.c | 30 --
>  1 file changed, 16 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c 
> b/arch/powerpc/net/bpf_jit_comp64.c
> index 79f23974a320..4de08e35e284 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -202,7 +202,8 @@ void bpf_jit_build_epilogue(u32 *image, struct 
> codegen_context *ctx)
>   EMIT(PPC_RAW_BLR());
>  }
>  
> -static int bpf_jit_emit_func_call_hlp(u32 *image, struct codegen_context 
> *ctx, u64 func)
> +static int
> +bpf_jit_emit_func_call_hlp(u32 *image, u32 *fimage, struct codegen_context 
> *ctx, u64 func)
>  {
>   unsigned long func_addr = func ? ppc_function_entry((void *)func) : 0;
>   long reladdr;
> @@ -211,19 +212,20 @@ static int bpf_jit_emit_func_call_hlp(u32 *image, 
> struct codegen_context *ctx, u
>   return -EINVAL;
>  
>   if (IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) {
> - reladdr = func_addr - CTX_NIA(ctx);
> + reladdr = func_addr - local_paca->kernelbase;
>  
>   if (reladdr >= (long)SZ_8G || reladdr < -(long)SZ_8G) {
> - pr_err("eBPF: address of %ps out of range of pcrel 
> address.\n",
> - (void *)func);
> + pr_err("eBPF: address of %ps out of range of 34-bit 
> relative address.\n",
> +(void *)func);
>   return -ERANGE;
>   }
> - /* pla r12,addr */
> - EMIT(PPC_PREFIX_MLS | __PPC_PRFX_R(1) | IMM_H18(reladdr));
> - EMIT(PPC_INST_PADDI | ___PPC_RT(_R12) | IMM_L(reladdr));
> - EMIT(PPC_RAW_MTCTR(_R12));
> - EMIT(PPC_RAW_BCTR());
> -
> + EMIT(PPC_RAW_LD(_R12, _R13, offsetof(struct paca_struct, 
> kernelbase)));
> + /* Align for subsequent prefix instruction */
> + if (!IS_ALIGNED((unsigned long)fimage + CTX_NIA(ctx), 8))
> + EMIT(PPC_RAW_NOP());

We don't need the prefix instruction to be aligned to a doubleword 
boundary - it just shouldn't cross a 64-byte boundary. Since we know the 
exact address of the instruction here, we should be able to check for 
that case.
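The boundary condition described above can be checked directly at JIT time. A sketch, assuming an 8-byte prefixed instruction whose absolute address is known (the helper name is ours):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Power ISA 3.1 only forbids a prefixed (8-byte) instruction from
 * crossing a 64-byte address boundary; it does not require doubleword
 * alignment.  The instruction crosses a boundary iff fewer than 8 bytes
 * remain in its 64-byte block. */
static bool prefixed_insn_crosses_64b(uintptr_t addr)
{
    return (addr & 63) > 64 - 8;
}
```

With this, the JIT would emit the padding nop only when the check fires (for 4-byte-aligned code, only at offset 60 within a block) instead of at every non-doubleword-aligned site.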

> + /* paddi r12,r12,addr */
> + EMIT(PPC_PREFIX_MLS | __PPC_PRFX_R(0) | IMM_H18(reladdr));
> + EMIT(PPC_INST_PADDI | ___PPC_RT(_R12) | ___PPC_RA(_R12) | 
> IMM_L(reladdr));
>   } else {
>   reladdr = func_addr - kernel_toc_addr();
>   if (reladdr > 0x7FFF || reladdr < -(0x8000L)) {
> @@ -233,9 +235,9 @@ static int bpf_jit_emit_func_call_hlp(u32 *image, struct 
> codegen_context *ctx, u
>  
>   EMIT(PPC_RAW_ADDIS(_R12, _R2, PPC_HA(reladdr)));
>   EMIT(PPC_RAW_ADDI(_R12, _R12, PPC_LO(reladdr)));
> -     EMIT(PPC_RAW_MTCTR(_R12));
> - EMIT(PPC_RAW_BCTRL());
>   }
> + EMIT(PPC_RAW_MTCTR(_R12));
> + EMIT(PPC_RAW_BCTRL());

This change shouldn't be necessary since these instructions are moved 
back into the conditional in the next patch.

Other than those minor comments:
Reviewed-by: Naveen N Rao 


- Naveen



Slack Invitation

2024-05-07 Thread Naveen Kumar
Hi all,

Kindly send me the invite link to join the ActiveMQ Slack channel.


Unable to configure SSL and authentication

2024-05-07 Thread Naveen Kumar
Hi all,

I'm using Apache ActiveMQ Classic version 5.16.7. My requirement is to
enable SSL and authentication for the JMX URL to prevent unauthenticated RMI access.

I have generated certificates using a keystore and added them in activemq/conf.
Later I added the below lines in activemq-admin.bat, as mentioned in the document
<https://activemq.apache.org/components/classic/documentation/how-do-i-use-ssl>,
but I am still unable to enable SSL. Need help regarding this.

set ACTIVEMQ_SUNJMX_START=-Dcom.sun.management.jmxremote.port=1616
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.ssl=true
set ACTIVEMQ_SSL_OPTS =
-Djavax.net.ssl.keyStore=%ACTIVEMQ_HOME%/conf/broker.ks
-Djavax.net.ssl.keyStorePassword=Admin@123
-Djavax.net.ssl.trustStore=%ACTIVEMQ_HOME%/conf/client.ts
-Djavax.net.ssl.trustStorePassword=Admin@123


Thanks,
Naveen.


Re: [PATCH v6] arch/powerpc/kvm: Add support for reading VPA counters for pseries guests

2024-05-07 Thread Naveen N Rao
> + for_each_possible_cpu(cpu) {
> + kvmhv_set_l2_counters_status(cpu, false);
> + }
> +}
> +
> +static void do_trace_nested_cs_time(struct kvm_vcpu *vcpu)
> +{
> + struct lppaca *lp = get_lppaca();
> + u64 l1_to_l2_ns, l2_to_l1_ns, l2_runtime_ns;
> +
> + l1_to_l2_ns = tb_to_ns(be64_to_cpu(lp->l1_to_l2_cs_tb));
> + l2_to_l1_ns = tb_to_ns(be64_to_cpu(lp->l2_to_l1_cs_tb));
> + l2_runtime_ns = tb_to_ns(be64_to_cpu(lp->l2_runtime_tb));
> + trace_kvmppc_vcpu_stats(vcpu, l1_to_l2_ns - local_paca->l1_to_l2_cs,
> +     l2_to_l1_ns - local_paca->l2_to_l1_cs,
> + l2_runtime_ns - 
> local_paca->l2_runtime_agg);

Depending on how the hypervisor works, if the vcpu was in l2 when the 
tracepoint is enabled, the counters may not be updated on exit and we 
may emit a trace with all values zero. If that is possible, it might be 
a good idea to only emit the trace if any of the counters are non-zero.

Otherwise, this looks good to me.
Acked-by: Naveen N Rao 


- Naveen

> + local_paca->l1_to_l2_cs = l1_to_l2_ns;
> + local_paca->l2_to_l1_cs = l2_to_l1_ns;
> + local_paca->l2_runtime_agg = l2_runtime_ns;
> +}
> +
>  static int kvmhv_vcpu_entry_nestedv2(struct kvm_vcpu *vcpu, u64 time_limit,
>unsigned long lpcr, u64 *tb)
>  {
> @@ -4156,6 +4204,10 @@ static int kvmhv_vcpu_entry_nestedv2(struct kvm_vcpu 
> *vcpu, u64 time_limit,
>  
>   timer_rearm_host_dec(*tb);
>  
> + /* Record context switch and guest_run_time data */
> + if (kvmhv_get_l2_counters_status())
> + do_trace_nested_cs_time(vcpu);
> +
>   return trap;
>  }
>  
> diff --git a/arch/powerpc/kvm/trace_hv.h b/arch/powerpc/kvm/trace_hv.h
> index 8d57c8428531..dc118ab88f23 100644
> --- a/arch/powerpc/kvm/trace_hv.h
> +++ b/arch/powerpc/kvm/trace_hv.h
> @@ -238,6 +238,9 @@
>   {H_MULTI_THREADS_ACTIVE,"H_MULTI_THREADS_ACTIVE"}, \
>   {H_OUTSTANDING_COP_OPS, "H_OUTSTANDING_COP_OPS"}
>  
> +int kmvhv_counters_tracepoint_regfunc(void);
> +void kmvhv_counters_tracepoint_unregfunc(void);
> +
>  TRACE_EVENT(kvm_guest_enter,
>   TP_PROTO(struct kvm_vcpu *vcpu),
>   TP_ARGS(vcpu),
> @@ -512,6 +515,30 @@ TRACE_EVENT(kvmppc_run_vcpu_exit,
>   __entry->vcpu_id, __entry->exit, __entry->ret)
>  );
>  
> +TRACE_EVENT_FN(kvmppc_vcpu_stats,
> + TP_PROTO(struct kvm_vcpu *vcpu, u64 l1_to_l2_cs, u64 l2_to_l1_cs, u64 
> l2_runtime),
> +
> + TP_ARGS(vcpu, l1_to_l2_cs, l2_to_l1_cs, l2_runtime),
> +
> + TP_STRUCT__entry(
> + __field(int,vcpu_id)
> + __field(u64,l1_to_l2_cs)
> + __field(u64,l2_to_l1_cs)
> + __field(u64,l2_runtime)
> + ),
> +
> + TP_fast_assign(
> + __entry->vcpu_id  = vcpu->vcpu_id;
> + __entry->l1_to_l2_cs = l1_to_l2_cs;
> + __entry->l2_to_l1_cs = l2_to_l1_cs;
> + __entry->l2_runtime = l2_runtime;
> + ),
> +
> + TP_printk("VCPU %d: l1_to_l2_cs_time=%llu ns l2_to_l1_cs_time=%llu ns 
> l2_runtime=%llu ns",
> + __entry->vcpu_id,  __entry->l1_to_l2_cs,
> + __entry->l2_to_l1_cs, __entry->l2_runtime),
> + kmvhv_counters_tracepoint_regfunc, kmvhv_counters_tracepoint_unregfunc
> +);
>  #endif /* _TRACE_KVM_HV_H */
>  
>  /* This part must be outside protection */
> -- 
> 2.44.0
> 


Re: [PATCH v5 RESEND] arch/powerpc/kvm: Add support for reading VPA counters for pseries guests

2024-04-25 Thread Naveen N Rao
On Wed, Apr 24, 2024 at 11:08:38AM +0530, Gautam Menghani wrote:
> On Mon, Apr 22, 2024 at 09:15:02PM +0530, Naveen N Rao wrote:
> > On Tue, Apr 02, 2024 at 12:36:54PM +0530, Gautam Menghani wrote:
> > >  static int kvmhv_vcpu_entry_nestedv2(struct kvm_vcpu *vcpu, u64 
> > >  time_limit,
> > >unsigned long lpcr, u64 *tb)
> > >  {
> > > @@ -4130,6 +4161,11 @@ static int kvmhv_vcpu_entry_nestedv2(struct 
> > > kvm_vcpu *vcpu, u64 time_limit,
> > >   kvmppc_gse_put_u64(io->vcpu_run_input, KVMPPC_GSID_LPCR, lpcr);
> > >  
> > >   accumulate_time(vcpu, &vcpu->arch.in_guest);
> > > +
> > > + /* Enable the guest host context switch time tracking */
> > > + if (unlikely(trace_kvmppc_vcpu_exit_cs_time_enabled()))
> > > + kvmhv_set_l2_accumul(1);
> > > +
> > >   rc = plpar_guest_run_vcpu(0, vcpu->kvm->arch.lpid, vcpu->vcpu_id,
> > > , );
> > >  
> > > @@ -4156,6 +4192,10 @@ static int kvmhv_vcpu_entry_nestedv2(struct 
> > > kvm_vcpu *vcpu, u64 time_limit,
> > >  
> > >   timer_rearm_host_dec(*tb);
> > >  
> > > + /* Record context switch and guest_run_time data */
> > > + if (kvmhv_get_l2_accumul())
> > > + do_trace_nested_cs_time(vcpu);
> > > +
> > >   return trap;
> > >  }
> > 
> > I'm assuming the counters in VPA are cumulative, since you are zero'ing 
> > them out on exit. If so, I think a better way to implement this is to 
> > use TRACE_EVENT_FN() and provide tracepoint registration and 
> > unregistration functions. You can then enable the counters once during 
> > registration and avoid repeated writes to the VPA area. With that, you 
> > also won't need to do anything before vcpu entry. If you maintain 
> > previous values, you can calculate the delta and emit the trace on vcpu 
> > exit. The values in VPA area can then serve as the cumulative values.
> > 
> 
> This approach will have a problem. The context switch times are reported
> in the L1 LPAR's CPU's VPA area. Consider the following scenario:
> 
> 1. L1 has 2 cpus, and L2 has 1 cpu
> 2. L2 runs on L1's cpu0 for a few seconds, and the counter values go to
> 1 million
> 3. We are maintaining a copy of values of VPA in separate variables, so
> those variables also have 1 million.
> 4. Now if L2's vcpu is migrated to another L1 cpu, that L1 cpu's VPA
> counters will start from 0, so if we try to get delta value, we will end
> up doing 0 - 1 million, which would be wrong.

I'm assuming you mean migrating the task. If we maintain the previous 
readings in paca, it should work I think.

> 
> The aggregation logic in this patch works as we zero out the VPA after
> every switch, and maintain aggregation in a vcpu->arch

Are the cumulative values of the VPA counters of no significance? We 
lose those with this approach. Not sure if we care.


- Naveen
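The migration scenario above can be handled when computing deltas from cumulative VPA counters: a reading lower than the saved one means the counter was reset (a fresh VPA on another L1 CPU), so the current value itself is the delta. A sketch under that assumption (the helper name is ours, not the patch's):

```c
#include <assert.h>
#include <stdint.h>

/* VPA context-switch counters are cumulative.  If the current reading is
 * below the previously saved one, the counter was reset (e.g. the vcpu
 * task moved to an L1 CPU whose VPA counters start from zero), so the
 * whole current value is the delta since that reset. */
static uint64_t vpa_counter_delta(uint64_t cur, uint64_t prev)
{
    return cur >= prev ? cur - prev : cur;
}
```

This keeps the cumulative values in the VPA intact while still producing usable per-exit deltas from saved previous readings.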



Re: Temporary queue in Artemis active MQ

2024-04-24 Thread Naveen kumar
Hi Team,

Any update please ?

Regards,
Naveen

> On 22 Apr 2024, at 5:25 PM, Naveen kumar  wrote:
> 
> Hi Team,
> 
> Any update on below please ?
> 
> Regards,
> Naveen
> 
>> On 16 Apr 2024, at 11:59 AM, Naveen kumar  wrote:
>> 
>> Hi Team,
>> 
>> We have the below question on temporary queues in Artemis MQ in eks . Could 
>> you please help us with answer for the below ones,
>> 
>> 1. When are temporary queues used ?
>> 2. How are temporary queues created ?
>> 3. Is there any API call to create temporary queue using JMX?
>> 4. How can granular access management for temporary queues be specified?
>> 
>> 
>> Regards,
>> Naveen


Re: [ovs-dev] [PATCH OVN v5 0/4] DHCP Relay Agent support for overlay subnets.

2024-04-24 Thread Naveen Yerramneni


> On 05-Apr-2024, at 9:08 PM, Numan Siddique  wrote:
> 
> 
> On Wed, Mar 20, 2024 at 10:40 AM Naveen Yerramneni 
>  wrote:
> >
> > This patch contains changes to enable DHCP Relay Agent support for 
> > overlay subnets.
> >
> > USE CASE:
> > --
> >   - Enable IP address assignment for overlay subnets from the 
> > centralized DHCP server present in the underlay network.
> >
> > PREREQUISITES
> > --
> >   - Logical Router Port IP should be assigned (statically) from the 
> > same overlay subnet which is managed by DHCP server.
> >   - LRP IP is used for the GIADDR field when relaying the DHCP packets, and 
> > the same IP needs to be configured as the default gateway for the overlay 
> > subnet.
> >   - Overlay subnets managed by external DHCP server are expected to be 
> > directly reachable from the underlay network.
> >
> > EXPECTED PACKET FLOW:
> > --
> > Following is the expected packet flow in order to support DHCP relay 
> > functionality in OVN.
> >   1. DHCP client originates DHCP discovery (broadcast).
> >   2. DHCP relay (running on the OVN) receives the broadcast and 
> > forwards the packet to the DHCP server by converting it to unicast.
> >  While forwarding the packet, it updates the GIADDR in DHCP header 
> > to its interface IP on which DHCP packet is received and increments hop 
> > count.
> >   3. DHCP server uses GIADDR field to decide the IP address pool from 
> > which IP has to be assigned and DHCP offer is sent to the same IP (GIADDR).
> >   4. DHCP relay agent forwards the offer to the client.
> >   5. DHCP client sends DHCP request (broadcast) packet.
> >   6. DHCP relay (running on the OVN) receives the broadcast and 
> > forwards the packet to the DHCP server by converting it to unicast.
> >  While forwarding the packet, it updates the GIADDR in DHCP header 
> > to its interface IP on which DHCP packet is received.
> >   7. DHCP Server sends the ACK packet.
> >   8. DHCP relay agent forwards the ACK packet to the client.
> >   9. All the future renew/release packets are directly exchanged 
> > between DHCP client and DHCP server.
> >
> > OVN DHCP RELAY PACKET FLOW:
> > 
> > To add DHCP Relay support on OVN, we need to replicate all the behavior 
> > described above using distributed logical switch and logical router.
> > At a high level, the packet flow is distributed among the Logical Switch and 
> > Logical Router on the source node (where the VM is deployed) and the 
> > redirect chassis (RC) node.
> >   1. Request packet gets processed on the source node where VM is 
> > deployed and relays the packet to DHCP server.
> >   2. Response packet is first processed on RC node (which first 
> > receives the packet from the underlay network). RC node forwards the packet to 
> > the right node by filling in the dest MAC and IP.
> >
> > OVN Packet flow with DHCP relay is explained below.
> >   1. DHCP client (VM) sends the DHCP discover packet (broadcast).
> >   2. Logical switch converts the packet to L2 unicast by setting the 
> > destination MAC to LRP's MAC
> >   3. Logical Router receives the packet and redirects it to the OVN 
> > controller.
> >   4. OVN controller updates the required information(GIADDR, HOP count) 
> > in the DHCP payload after doing the required checks. If any check fails, 
> > packet is dropped.
> >   5. Logical Router converts the packet to L3 unicast and forwards it 
> > to the server. This packets gets routed like any other packet (via RC node).
> >   6. Server replies with DHCP offer.
> >   7. RC node processes the DHCP offer and forwards it to the OVN 
> > controller.
> >   8. OVN controller does sanity checks and  updates the destination MAC 
> > (available in DHCP header), destination IP (available in DHCP header) and 
> > reinjects the packet to datapath.
> >  If any check fails, packet is dropped.
> >   9. Logical router updates the source IP and port and forwards the 
> > packet to logical switch.
> >   10. Logical switch delivers the packet to the DHCP client.
> >   11. Similar steps are performed for Request and Ack packets.
> >   12. All the future renew/release packets are directly exchanged 
> > between DHCP client and DHCP server
> >
> > NEW OVN ACTIONS
> > ---
>
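The GIADDR and hop-count handling in steps 2 and 6 of the relay flow above can be sketched on the fixed BOOTP/DHCP header (field layout per RFC 2131; the struct and helper below are illustrative, not OVN code, and byte-order handling is omitted for brevity):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* First part of the fixed BOOTP/DHCP header (RFC 2131);
 * chaddr/sname/file/options are omitted for brevity. */
struct dhcp_hdr {
    uint8_t  op, htype, hlen, hops;
    uint32_t xid;
    uint16_t secs, flags;
    uint32_t ciaddr, yiaddr, siaddr, giaddr;
};

/* Relay a client->server request: set GIADDR to the interface IP the
 * request arrived on (only if no earlier relay set it) and increment
 * the hop count, dropping obviously looping packets.  RFC 1542 caps
 * the hops field at 16. */
static bool relay_rewrite_request(struct dhcp_hdr *h, uint32_t relay_ip)
{
    if (h->hops >= 16)
        return false;           /* drop */
    h->hops++;
    if (h->giaddr == 0)
        h->giaddr = relay_ip;   /* the server replies to GIADDR */
    return true;
}
```

The server then uses GIADDR both as the reply destination and to pick the address pool, which is why the LRP IP must come from the relayed subnet.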

[ovs-dev] [PATCH OVN v6 3/3] northd, tests: DHCP Relay Agent support for overlay IPv4 subnets.

2024-04-24 Thread Naveen Yerramneni
NB SCHEMA CHANGES
-
  1. New DHCP_Relay table
  "DHCP_Relay": {
      "columns": {
          "name": {"type": "string"},
          "servers": {"type": {"key": "string",
                               "min": 0,
                               "max": 1}},
          "external_ids": {
              "type": {"key": "string", "value": "string",
                       "min": 0, "max": "unlimited"}},
          "options": {"type": {"key": "string", "value": "string",
                               "min": 0, "max": "unlimited"}}},
      "isRoot": true},
  2. New column in the Logical_Router_Port table
      "dhcp_relay": {"type": {"key": {"type": "uuid",
                                      "refTable": "DHCP_Relay",
                                      "refType": "strong"},
                              "min": 0,
                              "max": 1}},

NEW PIPELINE STAGES
---
The following stages are added for the DHCP relay feature.
Some of the flows are fitted into the existing pipeline stages.
  1. lr_in_dhcp_relay_req
   - This stage processes the DHCP request packets coming from DHCP clients.
   - DHCP request packets for which the dhcp_relay_req_chk action
     (which gets applied in the ip input stage) is successful are forwarded
     to the DHCP server.
   - DHCP request packets for which the dhcp_relay_req_chk action is
     unsuccessful get dropped.
  2. lr_in_dhcp_relay_resp_chk
   - This stage applies the dhcp_relay_resp_chk action to DHCP response
     packets coming from the DHCP server.
  3. lr_in_dhcp_relay_resp
   - DHCP response packets for which dhcp_relay_resp_chk is successful are
     forwarded to the DHCP clients.
   - DHCP response packets for which dhcp_relay_resp_chk is unsuccessful
     get dropped.

REGISTRY USAGE
---
  - reg9[7] : To store the result of dhcp_relay_req_chk action.
  - reg9[8] : To store the result of dhcp_relay_resp_chk action.
  - reg2 : To store the original dest ip for DHCP response packets.
   This is required to properly match the packets in
   lr_in_dhcp_relay_resp stage since dhcp_relay_resp_chk action
   changes the dest ip.

FLOWS
-

Following are the flows added when DHCP Relay is configured on one overlay
subnet; one additional flow is added in the ls_in_l2_lkup table for each VM
that is part of the subnet.

  1. table=27(ls_in_l2_lkup  ), priority=100  , match=(inport ==  
&& eth.src ==  && ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && 
udp.src == 68 && udp.dst == 67),
 action=(eth.dst=;outport=;next;/* DHCP_RELAY_REQ */)
  2. table=3 (lr_in_ip_input ), priority=110  , match=(inport ==  && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && ip.frag == 0 && udp.src == 
68 && udp.dst == 67),
 action=(reg9[7] = dhcp_relay_req_chk(, );next; /* 
DHCP_RELAY_REQ */)
  3. table=3 (lr_in_ip_input ), priority=110  , match=(ip4.src == 
 && ip4.dst ==  && udp.src == 67 && udp.dst == 67), 
action=(next;/* DHCP_RELAY_RESP */)
  4. table=4 (lr_in_dhcp_relay_req), priority=100  , match=(inport == "lrp1" && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[7]),
 action=(ip4.src=;ip4.dst=;udp.src=67;next; /* 
DHCP_RELAY_REQ */)
  5. table=4 (lr_in_dhcp_relay_req), priority=1, match=(inport ==  && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[7] == 0),
 action=(drop; /* DHCP_RELAY_REQ */)
  6. table=18(lr_in_dhcp_relay_resp_chk), priority=100  , match=(ip4.src == 
 && ip4.dst ==  && ip.frag == 0 && udp.src == 67 && udp.dst 
== 67),
 action=(reg2 = ip4.dst;reg9[8] = dhcp_relay_resp_chk(, 
);next;/* DHCP_RELAY_RESP */)
  7. table=19(lr_in_dhcp_relay_resp), priority=100  , match=(ip4.src == 
 && reg2 ==  && udp.src == 67 && udp.dst == 67 && reg9[8]),
 action=(ip4.src=;udp.dst=68;outport=;output; /* DHCP_RELAY_RESP 
*/)
  8. table=19(lr_in_dhcp_relay_resp), priority=1, match=(ip4.src == 
 && reg2 ==  && udp.src == 67 && udp.dst == 67 && reg9[8] 
== 0), action=(drop; /* DHCP_RELAY_RESP */)

Commands to enable the feature
--
  ovn-nbctl create DHCP_Relay name= servers=
  ovn-nbctl set Logical_Router_port  dhcp_relay=
  ovn-nbc

[ovs-dev] [PATCH OVN v6 2/3] controller: DHCP Relay Agent support for overlay IPv4 subnets.

2024-04-24 Thread Naveen Yerramneni
Added changes in pinctrl to process DHCP Relay opcodes:
  - ACTION_OPCODE_DHCP_RELAY_REQ_CHK: For request packets
  - ACTION_OPCODE_DHCP_RELAY_RESP_CHK: For response packet

Signed-off-by: Naveen Yerramneni 
---
 controller/pinctrl.c | 597 ++-
 lib/ovn-l7.h |   2 +
 2 files changed, 530 insertions(+), 69 deletions(-)

diff --git a/controller/pinctrl.c b/controller/pinctrl.c
index aa73facbf..50e090cd2 100644
--- a/controller/pinctrl.c
+++ b/controller/pinctrl.c
@@ -1993,6 +1993,515 @@ is_dhcp_flags_broadcast(ovs_be16 flags)
 return flags & htons(DHCP_BROADCAST_FLAG);
 }
 
+static const char *dhcp_msg_str[] = {
+[0] = "INVALID",
+[DHCP_MSG_DISCOVER] = "DISCOVER",
+[DHCP_MSG_OFFER] = "OFFER",
+[DHCP_MSG_REQUEST] = "REQUEST",
+[OVN_DHCP_MSG_DECLINE] = "DECLINE",
+[DHCP_MSG_ACK] = "ACK",
+[DHCP_MSG_NAK] = "NAK",
+[OVN_DHCP_MSG_RELEASE] = "RELEASE",
+[OVN_DHCP_MSG_INFORM] = "INFORM"
+};
+
+static bool
+dhcp_relay_is_msg_type_supported(uint8_t msg_type)
+{
+return (msg_type >= DHCP_MSG_DISCOVER && msg_type <= OVN_DHCP_MSG_RELEASE);
+}
+
+static const char *dhcp_msg_str_get(uint8_t msg_type)
+{
+if (!dhcp_relay_is_msg_type_supported(msg_type)) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "Unknown DHCP msg type: %u", msg_type);
+return "UNKNOWN";
+}
+return dhcp_msg_str[msg_type];
+}
+
+static const struct dhcp_header *
+dhcp_get_hdr_from_pkt(struct dp_packet *pkt_in, const char **in_dhcp_pptr,
+  const char *end)
+{
+/* Validate the DHCP request packet.
+ * Format of the DHCP packet is
+ * ---
+ *| UDP HEADER | DHCP HEADER | 4 Byte DHCP Cookie | DHCP OPTIONS(var len) |
+ * ---
+ */
+
+*in_dhcp_pptr = dp_packet_get_udp_payload(pkt_in);
+if (*in_dhcp_pptr == NULL) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Invalid or incomplete DHCP packet received");
+return NULL;
+}
+
+const struct dhcp_header *dhcp_hdr
+= (const struct dhcp_header *) *in_dhcp_pptr;
+(*in_dhcp_pptr) += sizeof *dhcp_hdr;
+if (*in_dhcp_pptr > end) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Invalid or incomplete DHCP packet received, "
+ "bad data length");
+return NULL;
+}
+
+if (dhcp_hdr->htype != 0x1) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Packet is received with "
+ "unsupported hardware type");
+return NULL;
+}
+
+if (dhcp_hdr->hlen != 0x6) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Packet is received with "
+ "unsupported hardware length");
+return NULL;
+}
+
+/* DHCP options follow the DHCP header. The first 4 bytes of the DHCP
+ * options is the DHCP magic cookie followed by the actual DHCP options.
+ */
+ovs_be32 magic_cookie = htonl(DHCP_MAGIC_COOKIE);
+if ((*in_dhcp_pptr) + sizeof magic_cookie > end ||
+get_unaligned_be32((const void *) (*in_dhcp_pptr)) != magic_cookie) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Magic cookie not present in the DHCP packet");
+return NULL;
+}
+
+(*in_dhcp_pptr) += sizeof magic_cookie;
+
+return dhcp_hdr;
+}
+
+static void
+dhcp_parse_options(const char **in_dhcp_pptr, const char *end,
+   const uint8_t **dhcp_msg_type_pptr,
+   ovs_be32 *request_ip_ptr,
+   bool *ipxe_req_ptr, ovs_be32 *server_id_ptr,
+   ovs_be32 *netmask_ptr, ovs_be32 *router_ip_ptr)
+{
+while ((*in_dhcp_pptr) < end) {
+const struct dhcp_opt_header *in_dhcp_opt =
+(const struct dhcp_opt_header *) *in_dhcp_pptr;
+if (in_dhcp_opt->code == DHCP_OPT_END) {
+break;
+}
+if (in_dhcp_opt->code == DHCP_OPT_PAD) {
+(*in_dhcp_pptr) += 1;
+continue;
+}
+(*in_dhcp_pptr) += sizeof *in_dhcp_opt;
+if ((*in_dhcp_pptr) > end) {
+break;
+}
+(*in_dhcp_pptr) += in_dhcp_opt->len;
+if ((*in_dhcp_pptr) > end) {
+break;
+}
+
+switch (in_dhcp_opt->code) {
+case DHCP_OPT_MSG_TYPE:
+if (dhcp_msg_type_pptr && in_dhcp_opt->len == 1)
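The option loop in dhcp_parse_options() above follows the standard DHCP TLV walk: PAD advances one byte, END terminates, and each length is bounds-checked before the option body is touched. A compact standalone version of the same pattern (illustrative, not the patch's code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define DHCP_OPT_PAD      0
#define DHCP_OPT_MSG_TYPE 53
#define DHCP_OPT_END      255

/* Scan a DHCP options buffer for the message-type option (53).
 * Returns the message type, or 0 if absent or the buffer is malformed. */
static uint8_t dhcp_find_msg_type(const uint8_t *p, const uint8_t *end)
{
    while (p < end) {
        uint8_t code = *p;
        if (code == DHCP_OPT_END)
            break;
        if (code == DHCP_OPT_PAD) {   /* single-byte padding, no length */
            p++;
            continue;
        }
        if (p + 2 > end)              /* need code + length bytes */
            break;
        uint8_t len = p[1];
        if (p + 2 + len > end)        /* truncated option body */
            break;
        if (code == DHCP_OPT_MSG_TYPE && len == 1)
            return p[2];
        p += 2 + len;
    }
    return 0;
}
```

Every advance of the cursor is validated against `end` before the bytes are read, which is the property the patch's reviewers care about for packets coming off the wire.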

[ovs-dev] [PATCH OVN v6 1/3] actions: DHCP Relay Agent support for overlay IPv4 subnets.

2024-04-24 Thread Naveen Yerramneni
NEW OVN ACTIONS
---
  1. dhcp_relay_req_chk(, )
   - This action executes on the source node on which the DHCP request
     originated.
   - This action relays the DHCP request coming from the client to the
     server. Relay-ip is used to update GIADDR in the DHCP header.
  2. dhcp_relay_resp_chk(, )
   - This action executes on the first node (the RC node) that processes
     the DHCP response from the server.
   - This action updates the destination MAC and destination IP so that
     the response can be forwarded to the appropriate node from which the
     request originated.
   - Relay-ip and server-ip are used to validate GIADDR and SERVER ID in
     the DHCP payload.

Signed-off-by: Naveen Yerramneni 
---
 include/ovn/actions.h |  27 ++
 lib/actions.c | 116 ++
 ovn-sb.xml|  62 ++
 tests/ovn.at  |  34 +
 utilities/ovn-trace.c |  67 
 5 files changed, 306 insertions(+)

diff --git a/include/ovn/actions.h b/include/ovn/actions.h
index f697dff39..ab2f3856c 100644
--- a/include/ovn/actions.h
+++ b/include/ovn/actions.h
@@ -96,6 +96,8 @@ struct collector_set_ids;
 OVNACT(LOOKUP_ND_IP,  ovnact_lookup_mac_bind_ip) \
 OVNACT(PUT_DHCPV4_OPTS,   ovnact_put_opts)\
 OVNACT(PUT_DHCPV6_OPTS,   ovnact_put_opts)\
+OVNACT(DHCPV4_RELAY_REQ_CHK,  ovnact_dhcp_relay)  \
+OVNACT(DHCPV4_RELAY_RESP_CHK, ovnact_dhcp_relay)  \
 OVNACT(SET_QUEUE, ovnact_set_queue)   \
 OVNACT(DNS_LOOKUP,ovnact_result)  \
 OVNACT(LOG,   ovnact_log) \
@@ -389,6 +391,15 @@ struct ovnact_put_opts {
 size_t n_options;
 };
 
+/* OVNACT_DHCP_RELAY. */
+struct ovnact_dhcp_relay {
+struct ovnact ovnact;
+int family;
+struct expr_field dst;  /* 1-bit destination field. */
+ovs_be32 relay_ipv4;
+ovs_be32 server_ipv4;
+};
+
 /* Valid arguments to SET_QUEUE action.
  *
  * QDISC_MIN_QUEUE_ID is the default queue, so user-defined queues should
@@ -765,6 +776,22 @@ enum action_opcode {
 
 /* multicast group split buffer action. */
 ACTION_OPCODE_MG_SPLIT_BUF,
+
+/* "dhcp_relay_req_chk(relay_ip, server_ip)".
+ *
+ * Arguments follow the action_header, in this format:
+ *   - The 32-bit DHCP relay IP.
+ *   - The 32-bit DHCP server IP.
+ */
+ACTION_OPCODE_DHCP_RELAY_REQ_CHK,
+
+/* "dhcp_relay_resp_chk(relay_ip, server_ip)".
+ *
+ * Arguments follow the action_header, in this format:
+ *   - The 32-bit DHCP relay IP.
+ *   - The 32-bit DHCP server IP.
+ */
+ACTION_OPCODE_DHCP_RELAY_RESP_CHK,
 };
 
 /* Header. */
diff --git a/lib/actions.c b/lib/actions.c
index 361d55009..6cd60366a 100644
--- a/lib/actions.c
+++ b/lib/actions.c
@@ -1869,6 +1869,8 @@ is_paused_nested_action(enum action_opcode opcode)
 case ACTION_OPCODE_BFD_MSG:
 case ACTION_OPCODE_ACTIVATION_STRATEGY_RARP:
 case ACTION_OPCODE_MG_SPLIT_BUF:
+case ACTION_OPCODE_DHCP_RELAY_REQ_CHK:
+case ACTION_OPCODE_DHCP_RELAY_RESP_CHK:
 default:
 return false;
 }
@@ -2610,6 +2612,114 @@ ovnact_controller_event_free(struct 
ovnact_controller_event *event)
 free_gen_options(event->options, event->n_options);
 }
 
+static void
+format_dhcpv4_relay_chk(const char *name,
+const struct ovnact_dhcp_relay *dhcp_relay,
+struct ds *s)
+{
+expr_field_format(&dhcp_relay->dst, s);
+ds_put_format(s, " = %s("IP_FMT", "IP_FMT");",
+  name,
+  IP_ARGS(dhcp_relay->relay_ipv4),
+  IP_ARGS(dhcp_relay->server_ipv4));
+}
+
+static void
+parse_dhcp_relay_chk(struct action_context *ctx,
+ const struct expr_field *dst,
+ struct ovnact_dhcp_relay *dhcp_relay)
+{
+/* Skip dhcp_relay_req_chk/dhcp_relay_resp_chk( */
+lexer_force_match(ctx->lexer, LEX_T_LPAREN);
+
+/* Validate that the destination is a 1-bit, modifiable field. */
+char *error = expr_type_check(dst, 1, true, ctx->scope);
+if (error) {
+lexer_error(ctx->lexer, "%s", error);
+free(error);
+return;
+}
+dhcp_relay->dst = *dst;
+
+/* Parse relay ip and server ip. */
+if (ctx->lexer->token.format == LEX_F_IPV4) {
+dhcp_relay->family = AF_INET;
+dhcp_relay->relay_ipv4 = ctx->lexer->token.value.ipv4;
+lexer_get(ctx->lexer);
+lexer_match(ctx->lexer, LEX_T_COMMA);
+if (ctx->lexer->token.format == LEX_F_IPV4) {
+dhcp_relay->family = AF_INET;
+dhcp_relay->server_ipv4 = ctx->lexer->token.value.ipv4;
+lexer_get(ctx->lexer);
+} else {
+lexer_syntax_error(ctx->lexer, "

[ovs-dev] [PATCH OVN v6 0/3] DHCP Relay Agent support for overlay subnets.

2024-04-24 Thread Naveen Yerramneni
ovn-nbctl lsp-add ls0 vif0
 ovn-nbctl lsp-set-addresses vif0  #Only MAC address has to be 
specified when logical ports are created.
 ovn-nbctl lsp-add ls0 lrp1-attachment
 ovn-nbctl lsp-set-type lrp1-attachment router
 ovn-nbctl lsp-set-addresses lrp1-attachment
 ovn-nbctl lsp-set-options lrp1-attachment router-port=lrp1
 ovn-nbctl lr-add lr0
 ovn-nbctl lrp-add lr0 lrp1   #GATEWAY IP is set in 
GIADDR field when relaying the DHCP requests to server.
 ovn-nbctl lrp-add lr0 lrp-ext  
 ovn-nbctl ls-add ls-ext
 ovn-nbctl lsp-add ls-ext lrp-ext-attachment
 ovn-nbctl lsp-set-type lrp-ext-attachment router
 ovn-nbctl lsp-set-addresses lrp-ext-attachment
 ovn-nbctl lsp-set-options lrp-ext-attachment router-port=lrp-ext
 ovn-nbctl lsp-add ls-ext ln_port
 ovn-nbctl lsp-set-addresses ln_port unknown
 ovn-nbctl lsp-set-type ln_port localnet
 ovn-nbctl lsp-set-options ln_port network_name=physnet1
 # Enable DHCP Relay feature
 ovn-nbctl create DHCP_Relay name=dhcp_relay_test servers=
 ovn-nbctl set Logical_Router_port lrp1 dhcp_relay=
 ovn-nbctl set Logical_Switch ls0 other_config:dhcp_relay_port=lrp1-attachment

Limitations:

  - All OVN features that need an IP address to be configured on a logical
port (like proxy ARP, etc.) are not supported for overlay subnets on which
DHCP relay is enabled.

References:
--
  - rfc1541, rfc1542, rfc2131

V1:
  - First patch.

V2:
  - Addressed review comments from Numan.

V3:
  - Split the patch into series.
  - Addressed review comments from Numan.
  - Updated the match condition for DHCP Relay flows.

V4:
  - Fix sparse errors.
  - Reorder patch series.

V5:
  - Fix test failures.

V6:
  - Addressed review comments from Numan.
  - Increment NB schema version.

Naveen Yerramneni (3):
  actions: DHCP Relay Agent support for overlay IPv4 subnets.
  controller: DHCP Relay Agent support for overlay IPv4 subnets.
  northd, tests: DHCP Relay Agent support for overlay IPv4 subnets.

 controller/pinctrl.c| 597 +++-
 include/ovn/actions.h   |  27 ++
 lib/actions.c   | 116 
 lib/ovn-l7.h|   2 +
 northd/northd.c | 271 +-
 northd/northd.h |  41 +--
 northd/ovn-northd.8.xml | 211 --
 ovn-nb.ovsschema|  21 +-
 ovn-nb.xml  |  39 +++
 ovn-sb.xml  |  62 +
 tests/atlocal.in|   3 +
 tests/ovn-northd.at |  38 +++
 tests/ovn.at| 258 -
 tests/system-ovn.at | 148 ++
 utilities/ovn-trace.c   |  67 +
 15 files changed, 1784 insertions(+), 117 deletions(-)

-- 
2.36.6

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[meta-intel] [PATCH v2] openvino.md: Add document to build image with OpenVINO toolkit

2024-04-24 Thread Naveen Saini
Signed-off-by: Naveen Saini 
---
 README.md |  1 +
 documentation/openvino.md | 95 +++
 2 files changed, 96 insertions(+)
 create mode 100644 documentation/openvino.md

diff --git a/README.md b/README.md
index 3ec3992b..91577f1d 100644
--- a/README.md
+++ b/README.md
@@ -24,6 +24,7 @@ Dynamic additional dependencies:
 
 - [Building and booting meta-intel BSP layers](documentation/building_and_booting.md)
 - [Intel oneAPI DPC++/C++ Compiler](documentation/dpcpp-compiler.md)
+- [Build Image with OpenVINO™ toolkit](documentation/openvino.md)
 - [Tested Hardware](documentation/tested_hardware.md)
 - [Guidelines for submitting patches](documentation/submitting_patches.md)
 - [Reporting bugs](documentation/reporting_bugs.md)
diff --git a/documentation/openvino.md b/documentation/openvino.md
new file mode 100644
index ..50dc680d
--- /dev/null
+++ b/documentation/openvino.md
@@ -0,0 +1,95 @@
+Build a Yocto Image with OpenVINO™ toolkit
+==
+
+Follow the [Yocto Project official documentation](https://docs.yoctoproject.org/brief-yoctoprojectqs/index.html#compatible-linux-distribution) to set up and configure your host machine to be compatible with BitBake.
+
+## Step 1: Set Up Environment
+
+1. Clone the repositories.
+
+```
+  git clone https://git.yoctoproject.org/git/poky
+  git clone https://github.com/openembedded/meta-openembedded
+  git clone https://git.yoctoproject.org/git/meta-intel
+```
+
+
+2. Set up the OpenEmbedded build environment.
+
+```
+  source poky/oe-init-build-env
+
+```
+
+
+
+3. Add BitBake layers.
+
+
+```
+  bitbake-layers add-layer ../meta-openembedded/meta-oe
+  bitbake-layers add-layer ../meta-openembedded/meta-python
+  bitbake-layers add-layer ../meta-intel
+
+```
+
+
+4. Set up BitBake configurations.
+   Include extra configuration in the `conf/local.conf` file in your build directory as required.
+
+
+```
+  MACHINE = "intel-skylake-64"
+
+  # Enable building OpenVINO Python API.
+  # This requires meta-python layer to be included in bblayers.conf.
+  PACKAGECONFIG:append:pn-openvino-inference-engine = " python3"
+
+  # This adds OpenVINO related libraries in the target image.
+  CORE_IMAGE_EXTRA_INSTALL:append = " openvino-inference-engine"
+
+  # This adds OpenVINO samples in the target image.
+  CORE_IMAGE_EXTRA_INSTALL:append = " openvino-inference-engine-samples"
+
+  # Include OpenVINO Python API package in the target image.
+  CORE_IMAGE_EXTRA_INSTALL:append = " openvino-inference-engine-python3"
+
+  # Include model conversion API in the target image.
+  CORE_IMAGE_EXTRA_INSTALL:append = " openvino-model-optimizer"
+
+```
+
+## Step 2: Build a Yocto Image with OpenVINO Packages
+
+Run BitBake to build your image with OpenVINO packages. For example, to build the minimal image, run the following command:
+
+
+```
+   bitbake core-image-minimal
+
+```
+
+## Step 3: Verify the Yocto Image
+
+Verify that OpenVINO packages were built successfully. Run the following command:
+
+```
+   oe-pkgdata-util list-pkgs | grep openvino
+
+```
+
+
+If the image build is successful, it will return the list of packages as below:
+
+```
+   openvino-inference-engine
+   openvino-inference-engine-dbg
+   openvino-inference-engine-dev
+   openvino-inference-engine-python3
+   openvino-inference-engine-samples
+   openvino-inference-engine-src
+   openvino-model-optimizer
+   openvino-model-optimizer-dbg
+   openvino-model-optimizer-dev
+
+```
-- 
2.34.1


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#8292): 
https://lists.yoctoproject.org/g/meta-intel/message/8292
Mute This Topic: https://lists.yoctoproject.org/mt/105707014/21656
Group Owner: meta-intel+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/meta-intel/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[meta-intel] [PATCH] openvino.md: Add document to build & install OpenVINO

2024-04-23 Thread Naveen Saini
Signed-off-by: Naveen Saini 
---
 documentation/openvino.md | 95 +++
 1 file changed, 95 insertions(+)
 create mode 100644 documentation/openvino.md

diff --git a/documentation/openvino.md b/documentation/openvino.md
new file mode 100644
index ..3b6db364
--- /dev/null
+++ b/documentation/openvino.md
@@ -0,0 +1,95 @@
+Create a Yocto Image with OpenVINO™ toolkit
+===
+
+Follow the [Yocto Project official documentation](https://docs.yoctoproject.org/brief-yoctoprojectqs/index.html#compatible-linux-distribution) to set up and configure your host machine to be compatible with BitBake.
+
+## Step 1: Set Up Environment
+
+1. Clone the repositories.
+
+```
+  git clone https://git.yoctoproject.org/git/poky
+  git clone https://github.com/openembedded/meta-openembedded
+  git clone https://git.yoctoproject.org/git/meta-intel
+```
+
+
+2. Set up the OpenEmbedded build environment.
+
+```
+  source poky/oe-init-build-env
+
+```
+
+
+
+3. Add BitBake layers.
+
+
+```
+  bitbake-layers add-layer ../meta-openembedded/meta-oe
+  bitbake-layers add-layer ../meta-openembedded/meta-python
+  bitbake-layers add-layer ../meta-intel
+
+```
+
+
+4. Set up BitBake configurations.
+   Include extra configuration in the `conf/local.conf` file in your build directory as required.
+
+
+```
+  MACHINE = "intel-skylake-64"
+
+  # Enable building OpenVINO Python API.
+  # This requires meta-python layer to be included in bblayers.conf.
+  PACKAGECONFIG:append:pn-openvino-inference-engine = " python3"
+
+  # This adds OpenVINO related libraries in the target image.
+  CORE_IMAGE_EXTRA_INSTALL:append = " openvino-inference-engine"
+
+  # This adds OpenVINO samples in the target image.
+  CORE_IMAGE_EXTRA_INSTALL:append = " openvino-inference-engine-samples"
+
+  # Include OpenVINO Python API package in the target image.
+  CORE_IMAGE_EXTRA_INSTALL:append = " openvino-inference-engine-python3"
+
+  # Include model conversion API in the target image.
+  CORE_IMAGE_EXTRA_INSTALL:append = " openvino-model-optimizer"
+
+```
+
+## Step 2: Build a Yocto Image with OpenVINO Packages
+
+Run BitBake to build your image with OpenVINO packages. For example, to build the minimal image, run the following command:
+
+
+```
+   bitbake core-image-minimal
+
+```
+
+## Step 3: Verify the Yocto Image
+
+Verify that OpenVINO packages were built successfully. Run the following command:
+
+```
+   oe-pkgdata-util list-pkgs | grep openvino
+
+```
+
+
+If the image build is successful, it will return the list of packages as below:
+
+```
+   openvino-inference-engine
+   openvino-inference-engine-dbg
+   openvino-inference-engine-dev
+   openvino-inference-engine-python3
+   openvino-inference-engine-samples
+   openvino-inference-engine-src
+   openvino-model-optimizer
+   openvino-model-optimizer-dbg
+   openvino-model-optimizer-dev
+
+```
-- 
2.34.1


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#8291): 
https://lists.yoctoproject.org/g/meta-intel/message/8291
Mute This Topic: https://lists.yoctoproject.org/mt/105705119/21656
Group Owner: meta-intel+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/meta-intel/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [PATCH v3 0/5] Add generic data patching functions

2024-04-23 Thread Naveen N Rao
On Mon, Mar 25, 2024 at 04:52:57PM +1100, Benjamin Gray wrote:
> Currently patch_instruction() bases the write length on the value being
> written. If the value looks like a prefixed instruction it writes 8 bytes,
> otherwise it writes 4 bytes. This makes it potentially buggy to use for
> writing arbitrary data, as if you want to write 4 bytes but it decides to
> write 8 bytes it may clobber the following memory or be unaligned and
> trigger an oops if it tries to cross a page boundary.
> 
> To solve this, this series pulls out the size parameter to the 'top' of
> the memory patching logic, and propagates it through the various functions.
> 
> The two sizes supported are int and long; this allows for patching
> instructions and pointers on both ppc32 and ppc64. On ppc32 these are the
> same size, so care is taken to only use the size parameter on static
> functions, so the compiler can optimise it out entirely. Unfortunately
> GCC trips over its own feet here and won't optimise in a way that is
> optimal for strict RWX (mpc85xx_smp_defconfig) and no RWX
> (pmac32_defconfig). More details in the v2 cover letter.
> 
> Changes from v2:
>   * Various changes noted on each patch
>   * Data patching now enforced to be aligned
>   * Restore page aligned flushing optimisation
> 
> Changes from v1:
>   * Addressed the v1 review actions
>   * Removed noinline (for now)
> 
> v2: 
> https://patchwork.ozlabs.org/project/linuxppc-dev/cover/20231016050147.115686-1-bg...@linux.ibm.com/
> v1: 
> https://patchwork.ozlabs.org/project/linuxppc-dev/cover/20230207015643.590684-1-bg...@linux.ibm.com/
> 
> Benjamin Gray (5):
>   powerpc/code-patching: Add generic memory patching
>   powerpc/code-patching: Add data patch alignment check
>   powerpc/64: Convert patch_instruction() to patch_u32()
>   powerpc/32: Convert patch_instruction() to patch_uint()
>   powerpc/code-patching: Add boot selftest for data patching
> 
>  arch/powerpc/include/asm/code-patching.h | 37 +
>  arch/powerpc/kernel/module_64.c  |  5 +-
>  arch/powerpc/kernel/static_call.c|  2 +-
>  arch/powerpc/lib/code-patching.c | 70 +++-
>  arch/powerpc/lib/test-code-patching.c| 36 
>  arch/powerpc/platforms/powermac/smp.c|  2 +-
>  6 files changed, 132 insertions(+), 20 deletions(-)

Apart from the minor comments, for this series:
Acked-by: Naveen N Rao 

Thanks for working on this.


- Naveen



Re: [PATCH v3 3/5] powerpc/64: Convert patch_instruction() to patch_u32()

2024-04-23 Thread Naveen N Rao
On Mon, Mar 25, 2024 at 04:53:00PM +1100, Benjamin Gray wrote:
> This use of patch_instruction() is working on 32 bit data, and can fail
> if the data looks like a prefixed instruction and the extra write
> crosses a page boundary. Use patch_u32() to fix the write size.
> 
> Fixes: 8734b41b3efe ("powerpc/module_64: Fix livepatching for RO modules")
> Link: https://lore.kernel.org/all/20230203004649.1f59dbd4@yea/
> Signed-off-by: Benjamin Gray 
> 
> ---
> 
> v2: * Added the fixes tag, it seems appropriate even if the subject does
>   mention a more robust solution being required.
> 
> patch_u64() should be more efficient, but judging from the bug report
> it doesn't seem like the data is doubleword aligned.

Asking again, is that still the case? It looks like at least the first 
fix below can be converted to patch_u64().

- Naveen

> ---
>  arch/powerpc/kernel/module_64.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
> index 7112adc597a8..e9bab599d0c2 100644
> --- a/arch/powerpc/kernel/module_64.c
> +++ b/arch/powerpc/kernel/module_64.c
> @@ -651,12 +651,11 @@ static inline int create_stub(const Elf64_Shdr *sechdrs,
>   // func_desc_t is 8 bytes if ABIv2, else 16 bytes
>   desc = func_desc(addr);
>   for (i = 0; i < sizeof(func_desc_t) / sizeof(u32); i++) {
> - if (patch_instruction(((u32 *)&entry->funcdata) + i,
> -   ppc_inst(((u32 *)&desc)[i])))
> + if (patch_u32(((u32 *)&entry->funcdata) + i, ((u32 *)&desc)[i]))
>   return 0;
>   }
>  
> - if (patch_instruction(&entry->magic, ppc_inst(STUB_MAGIC)))
> + if (patch_u32(&entry->magic, STUB_MAGIC))
>   return 0;
>  
>   return 1;
> -- 
> 2.44.0
> 


Re: [PATCH v3 5/5] powerpc/code-patching: Add boot selftest for data patching

2024-04-23 Thread Naveen N Rao
On Mon, Mar 25, 2024 at 04:53:02PM +1100, Benjamin Gray wrote:
> Extend the code patching selftests with some basic coverage of the new
> data patching variants too.
> 
> Signed-off-by: Benjamin Gray 
> 
> ---
> 
> v3: * New in v3
> ---
>  arch/powerpc/lib/test-code-patching.c | 36 +++
>  1 file changed, 36 insertions(+)
> 
> diff --git a/arch/powerpc/lib/test-code-patching.c b/arch/powerpc/lib/test-code-patching.c
> index c44823292f73..e96c48fcd4db 100644
> --- a/arch/powerpc/lib/test-code-patching.c
> +++ b/arch/powerpc/lib/test-code-patching.c
> @@ -347,6 +347,41 @@ static void __init test_prefixed_patching(void)
>   check(!memcmp(iptr, expected, sizeof(expected)));
>  }
>  
> +static void __init test_data_patching(void)
> +{
> + void *buf;
> + u32 *addr32;
> +
> + buf = vzalloc(PAGE_SIZE);
> + check(buf);
> + if (!buf)
> + return;
> +
> + addr32 = buf + 128;
> +
> + addr32[1] = 0xA0A1A2A3;
> + addr32[2] = 0xB0B1B2B3;
> +
> + patch_uint(&addr32[1], 0xC0C1C2C3);
> +
> + check(addr32[0] == 0);
> + check(addr32[1] == 0xC0C1C2C3);
> + check(addr32[2] == 0xB0B1B2B3);
> + check(addr32[3] == 0);
> +
> + patch_ulong(&addr32[1], 0xD0D1D2D3);
> +
> + check(addr32[0] == 0);
> + *(unsigned long *)(&addr32[1]) = 0xD0D1D2D3;

Should that have been a check() instead?

- Naveen

> +
> + if (!IS_ENABLED(CONFIG_PPC64))
> + check(addr32[2] == 0xB0B1B2B3);
> +
> + check(addr32[3] == 0);
> +
> + vfree(buf);
> +}
> +
>  static int __init test_code_patching(void)
>  {
>   pr_info("Running code patching self-tests ...\n");
> @@ -356,6 +391,7 @@ static int __init test_code_patching(void)
>   test_create_function_call();
>   test_translate_branch();
>   test_prefixed_patching();
> + test_data_patching();
>  
>   return 0;
>  }
> -- 
> 2.44.0
> 


Re: [PATCH v5 RESEND] arch/powerpc/kvm: Add support for reading VPA counters for pseries guests

2024-04-22 Thread Naveen N Rao
u64 l1_to_l2_ns, l2_to_l1_ns, l2_runtime_ns;
> +
> + l1_to_l2_ns = tb_to_ns(be64_to_cpu(lp->l1_to_l2_cs_tb));
> + l2_to_l1_ns = tb_to_ns(be64_to_cpu(lp->l2_to_l1_cs_tb));
> + l2_runtime_ns = tb_to_ns(be64_to_cpu(lp->l2_runtime_tb));
> + trace_kvmppc_vcpu_exit_cs_time(vcpu, l1_to_l2_ns, l2_to_l1_ns,
> + l2_runtime_ns);
> + lp->l1_to_l2_cs_tb = 0;
> + lp->l2_to_l1_cs_tb = 0;
> + lp->l2_runtime_tb = 0;
> + kvmhv_set_l2_accumul(0);
> +
> + // Maintain an aggregate of context switch times
> + vcpu->arch.l1_to_l2_cs_agg += l1_to_l2_ns;
> + vcpu->arch.l2_to_l1_cs_agg += l2_to_l1_ns;
> + vcpu->arch.l2_runtime_agg += l2_runtime_ns;
> +}
> +
>  static int kvmhv_vcpu_entry_nestedv2(struct kvm_vcpu *vcpu, u64 time_limit,
>unsigned long lpcr, u64 *tb)
>  {
> @@ -4130,6 +4161,11 @@ static int kvmhv_vcpu_entry_nestedv2(struct kvm_vcpu *vcpu, u64 time_limit,
>   kvmppc_gse_put_u64(io->vcpu_run_input, KVMPPC_GSID_LPCR, lpcr);
>  
> + accumulate_time(vcpu, &vcpu->arch.in_guest);
> +
> + /* Enable the guest host context switch time tracking */
> + if (unlikely(trace_kvmppc_vcpu_exit_cs_time_enabled()))
> + kvmhv_set_l2_accumul(1);
> +
>   rc = plpar_guest_run_vcpu(0, vcpu->kvm->arch.lpid, vcpu->vcpu_id,
> , );
>  
> @@ -4156,6 +4192,10 @@ static int kvmhv_vcpu_entry_nestedv2(struct kvm_vcpu *vcpu, u64 time_limit,
>  
>   timer_rearm_host_dec(*tb);
>  
> + /* Record context switch and guest_run_time data */
> + if (kvmhv_get_l2_accumul())
> + do_trace_nested_cs_time(vcpu);
> +
>   return trap;
>  }

I'm assuming the counters in VPA are cumulative, since you are zero'ing 
them out on exit. If so, I think a better way to implement this is to 
use TRACE_EVENT_FN() and provide tracepoint registration and 
unregistration functions. You can then enable the counters once during 
registration and avoid repeated writes to the VPA area. With that, you 
also won't need to do anything before vcpu entry. If you maintain 
previous values, you can calculate the delta and emit the trace on vcpu 
exit. The values in VPA area can then serve as the cumulative values.
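
A rough sketch of the registration-based shape being suggested: a
TRACE_EVENT_FN() tracepoint takes reg/unreg callbacks that run when the
tracepoint is enabled and disabled, so the VPA accumulation can be switched
on exactly once. This is a kernel-only fragment that won't compile standalone,
and the callback/tracepoint names are illustrative, not from the patch:

```c
/* Illustrative only: enable the VPA counters while the tracepoint is
 * registered, instead of toggling them on every vcpu entry/exit. */
static int kvmppc_vcpu_stats_reg(void)
{
	kvmhv_set_l2_accumul(1);	/* start accumulating in the VPA */
	return 0;
}

static void kvmppc_vcpu_stats_unreg(void)
{
	kvmhv_set_l2_accumul(0);
}

TRACE_EVENT_FN(kvmppc_vcpu_stats,
	TP_PROTO(struct kvm_vcpu *vcpu, u64 l1_to_l2_cs, u64 l2_to_l1_cs, u64 l2_runtime),
	TP_ARGS(vcpu, l1_to_l2_cs, l2_to_l1_cs, l2_runtime),
	TP_STRUCT__entry(
		__field(int, vcpu_id)
		__field(u64, l1_to_l2_cs)
		__field(u64, l2_to_l1_cs)
		__field(u64, l2_runtime)
	),
	TP_fast_assign(
		__entry->vcpu_id = vcpu->vcpu_id;
		__entry->l1_to_l2_cs = l1_to_l2_cs;
		__entry->l2_to_l1_cs = l2_to_l1_cs;
		__entry->l2_runtime = l2_runtime;
	),
	TP_printk("VCPU %d: l1_to_l2_cs=%llu ns l2_to_l1_cs=%llu ns l2_runtime=%llu ns",
		__entry->vcpu_id, __entry->l1_to_l2_cs,
		__entry->l2_to_l1_cs, __entry->l2_runtime),
	kvmppc_vcpu_stats_reg, kvmppc_vcpu_stats_unreg
);
```

On vcpu exit the handler would then emit deltas against saved previous
values, leaving the VPA fields themselves as the cumulative counters.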

>  
> diff --git a/arch/powerpc/kvm/trace_hv.h b/arch/powerpc/kvm/trace_hv.h
> index 8d57c8428531..ab19977c91b4 100644
> --- a/arch/powerpc/kvm/trace_hv.h
> +++ b/arch/powerpc/kvm/trace_hv.h
> @@ -491,6 +491,31 @@ TRACE_EVENT(kvmppc_run_vcpu_enter,
>   TP_printk("VCPU %d: tgid=%d", __entry->vcpu_id, __entry->tgid)
>  );
>  
> +TRACE_EVENT(kvmppc_vcpu_exit_cs_time,

Not sure what "exit" signifies in the tracepoint name. Can this be 
simplified to kvmppc_vcpu_cs_time? Perhaps kvmppc_vcpu_stats, which will 
allow more vcpu stats to be exposed in future as necessary?

> + TP_PROTO(struct kvm_vcpu *vcpu, u64 l1_to_l2_cs, u64 l2_to_l1_cs,
> + u64 l2_runtime),

Can be on a single line, we no longer restrict lines to 80 columns. 100 
or so is fine.

> +
> + TP_ARGS(vcpu, l1_to_l2_cs, l2_to_l1_cs, l2_runtime),
> +
> + TP_STRUCT__entry(
> + __field(int,vcpu_id)
> + __field(__u64,  l1_to_l2_cs_ns)
> + __field(__u64,  l2_to_l1_cs_ns)
> + __field(__u64,  l2_runtime_ns)

Not sure there is a reason to use __u64 - just u64 should work.

> + ),
> +
> + TP_fast_assign(
> + __entry->vcpu_id  = vcpu->vcpu_id;
> + __entry->l1_to_l2_cs_ns = l1_to_l2_cs;
> + __entry->l2_to_l1_cs_ns = l2_to_l1_cs;
> + __entry->l2_runtime_ns = l2_runtime;
> + ),
> +
> + TP_printk("VCPU %d: l1_to_l2_cs_time=%llu-ns l2_to_l1_cs_time=%llu-ns 
> l2_runtime=%llu-ns",
 ^^^
You can drop the hyphen before "ns". Just put a space there.

> + __entry->vcpu_id,  __entry->l1_to_l2_cs_ns,
> + __entry->l2_to_l1_cs_ns, __entry->l2_runtime_ns)

There is l1_to_l2_cs, l1_to_l2_cs_ns and l1_to_l2_cs_time - can you use 
a single name for that?

> +);
> +

As a minor nit, it will be good to put the new tracepoint after the 
below vcpu exit tracepoint just so the entry/exit tracepoints are 
together in the file.

>  TRACE_EVENT(kvmppc_run_vcpu_exit,
>   TP_PROTO(struct kvm_vcpu *vcpu),
>  
> -- 
> 2.43.2
> 


- Naveen


Re: Temporary queue in Artemis active MQ

2024-04-22 Thread Naveen kumar
Hi Team,

Any update on below please ?

Regards,
Naveen 

> On 16 Apr 2024, at 11:59 AM, Naveen kumar  wrote:
> 
> Hi Team,
> 
> We have the below questions on temporary queues in Artemis MQ in EKS. Could 
> you please help us with answers to the questions below?
> 
> 1. When are temporary queues used ?
> 2. How are temporary queues created ?
> 3. Is there any API call to create temporary queue using JMX?
> 4. How can granular access management for temporary queues be specified?
> 
> 
> Regards,
> Naveen


Re: Re: [ANNOUNCE] New Committer: Simhadri Govindappa

2024-04-18 Thread Naveen Gangam
Congrats Simhadri. Looking forward to many more contributions in the future.

On Thu, Apr 18, 2024 at 12:25 PM Sai Hemanth Gantasala
 wrote:

> Congratulations Simhadri  well deserved
>
> On Thu, Apr 18, 2024 at 8:41 AM Pau Tallada  wrote:
>
>> Congratulations
>>
>> Missatge de Alessandro Solimando  del
>> dia dj., 18 d’abr. 2024 a les 17:40:
>>
>>> Great news, Simhadri, very well deserved!
>>>
>>> On Thu, 18 Apr 2024 at 15:07, Simhadri G  wrote:
>>>
 Thanks everyone!
 I really appreciate it, it means a lot to me :)
 The Apache Hive project and its community have truly inspired me . I'm
 grateful for the chance to contribute to such a remarkable project.

 Thanks!
 Simhadri Govindappa

 On Thu, Apr 18, 2024 at 6:18 PM Sankar Hariappan
  wrote:

> Congrats Simhadri!
>
>
>
> -Sankar
>
>
>
> *From:* Butao Zhang 
> *Sent:* Thursday, April 18, 2024 5:39 PM
> *To:* u...@hive.apache.org; dev 
> *Subject:* [EXTERNAL] Re: [ANNOUNCE] New Committer: Simhadri
> Govindappa
>
>
>
> You don't often get email from butaozha...@163.com. Learn why this is
> important 
>
> Congratulations Simhadri !!!
>
>
>
> Thanks.
>
>
> --
>
> *发件人**:* user-return-28075-butaozhang1=163@hive.apache.org <
> user-return-28075-butaozhang1=163@hive.apache.org> 代表 Ayush
> Saxena 
> *发送时间**:* 星期四, 四月 18, 2024 7:50 下午
> *收件人**:* dev ; u...@hive.apache.org <
> u...@hive.apache.org>
> *主题**:* [ANNOUNCE] New Committer: Simhadri Govindappa
>
>
>
> Hi All,
>
> Apache Hive's Project Management Committee (PMC) has invited Simhadri
> Govindappa to become a committer, and we are pleased to announce that he
> has accepted.
>
>
>
> Please join me in congratulating him, Congratulations Simhadri,
> Welcome aboard!!!
>
>
>
> -Ayush Saxena
>
> (On behalf of Apache Hive PMC)
>

>>
>> --
>> --
>> Pau Tallada Crespí
>> Departament de Serveis
>> Port d'Informació Científica (PIC)
>> Tel: +34 93 170 2729
>> --
>>
>>



Temporary queue in Artemis active MQ

2024-04-16 Thread Naveen kumar
Hi Team,

We have the below questions on temporary queues in Artemis MQ in EKS. Could you 
please help us with answers to the questions below?

1. When are temporary queues used ?
2. How are temporary queues created ?
3. Is there any API call to create temporary queue using JMX?
4. How can granular access management for temporary queues be specified?


Regards,
Naveen 

Re: [PATCH v3 2/2] powerpc/bpf: enable kfunc call

2024-04-15 Thread Naveen N Rao
On Tue, Apr 02, 2024 at 04:28:06PM +0530, Hari Bathini wrote:
> Currently, bpf jit code on powerpc assumes all the bpf functions and
> helpers to be kernel text. This is false for kfunc case, as function
> addresses can be module addresses as well. So, ensure module addresses
> are supported to enable kfunc support.
> 
> Emit instructions based on whether the function address is kernel text
> address or module address to retain optimized instruction sequence for
> kernel text address case.
> 
> Also, as bpf programs are always module addresses and a bpf helper can
> be within kernel address as well, using relative addressing often fails
> with "out of range of pcrel address" error. Use unoptimized instruction
> sequence for both kernel and module addresses to work around this, when
> PCREL addressing is used.

I guess we need a fixes tag for this?
Fixes: 7e3a68be42e1 ("powerpc/64: vmlinux support building with PCREL 
addresing")

It will be good to separate out this fix into a separate patch.

Also, I know I said we could use the generic PPC_LI64() for pcrel, but 
we may be able to use a more optimized sequence when calling bpf kernel 
helpers.  See stub_insns[] in module_64.c for an example where we load 
paca->kernelbase, then use a prefixed load instruction to populate the 
lower 34-bit value. For calls out to module area, we can use the generic 
PPC_LI64() macro only if it is outside range of a prefixed load 
instruction.

> 
> With module addresses supported, override bpf_jit_supports_kfunc_call()
> to enable kfunc support. Since module address offsets can be more than
> 32-bit long on PPC64, override bpf_jit_supports_far_kfunc_call() to
> enable 64-bit pointers.
> 
> Signed-off-by: Hari Bathini 
> ---
> 
> * Changes in v3:
>   - Retained optimized instruction sequence when function address is
> a core kernel address as suggested by Naveen.
>   - Used unoptimized instruction sequence for PCREL addressing to
> avoid out of range errors for core kernel function addresses.
>   - Folded patch that adds support for kfunc calls with patch that
> enables/advertises this support as suggested by Naveen.
> 
> 
>  arch/powerpc/net/bpf_jit_comp.c   | 10 +++
>  arch/powerpc/net/bpf_jit_comp64.c | 48 ---
>  2 files changed, 42 insertions(+), 16 deletions(-)
> 
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 0f9a21783329..dc7ffafd7441 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -359,3 +359,13 @@ void bpf_jit_free(struct bpf_prog *fp)
>  
>   bpf_prog_unlock_free(fp);
>  }
> +
> +bool bpf_jit_supports_kfunc_call(void)
> +{
> + return true;
> +}
> +
> +bool bpf_jit_supports_far_kfunc_call(void)
> +{
> + return IS_ENABLED(CONFIG_PPC64) ? true : false;
> +}
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 7f62ac4b4e65..ec3adf715c55 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -207,24 +207,14 @@ static int bpf_jit_emit_func_call_hlp(u32 *image, struct codegen_context *ctx, u
>   unsigned long func_addr = func ? ppc_function_entry((void *)func) : 0;
>   long reladdr;
>  
> - if (WARN_ON_ONCE(!core_kernel_text(func_addr)))
> + /*
> +  * With the introduction of kfunc feature, BPF helpers can be part of 
> kernel as
> +  * well as module text address.
> +  */
> + if (WARN_ON_ONCE(!kernel_text_address(func_addr)))
>   return -EINVAL;
>  
> - if (IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) {
> - reladdr = func_addr - CTX_NIA(ctx);
> -
> - if (reladdr >= (long)SZ_8G || reladdr < -(long)SZ_8G) {
> - pr_err("eBPF: address of %ps out of range of pcrel address.\n",
> - (void *)func);
> - return -ERANGE;
> - }
> - /* pla r12,addr */
> - EMIT(PPC_PREFIX_MLS | __PPC_PRFX_R(1) | IMM_H18(reladdr));
> - EMIT(PPC_INST_PADDI | ___PPC_RT(_R12) | IMM_L(reladdr));
> - EMIT(PPC_RAW_MTCTR(_R12));
> - EMIT(PPC_RAW_BCTR());
> -
> - } else {
> + if (core_kernel_text(func_addr) && !IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) {
>   reladdr = func_addr - kernel_toc_addr();
>   if (reladdr > 0x7FFF || reladdr < -(0x8000L)) {
>   pr_err("eBPF: address of %ps out of range of kernel_toc.\n", (void *)func);
> @@ -235,6 +225,32 @@ static int bpf_jit_emit_func_call_hlp(u32 *image, struct 
> codeg

[meta-intel] [PATCH] lms: use python3native and depend on python3-packaging-native

2024-04-10 Thread Naveen Saini
The recipe was incorrectly using python from the host, which caused the
following failure:
| import packaging.version
| ModuleNotFoundError: No module named 'packaging.version'

Ref:
https://git.yoctoproject.org/poky/commit/?id=bb4abe0e6468f8be3fdd6012a109ddd1db7b20a8

Signed-off-by: Naveen Saini 
---
 .../openembedded-layer/recipes-bsp/amt/lms_2406.0.0.0.bb| 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/dynamic-layers/openembedded-layer/recipes-bsp/amt/lms_2406.0.0.0.bb b/dynamic-layers/openembedded-layer/recipes-bsp/amt/lms_2406.0.0.0.bb
index 63b69ce8..bdf32576 100644
--- a/dynamic-layers/openembedded-layer/recipes-bsp/amt/lms_2406.0.0.0.bb
+++ b/dynamic-layers/openembedded-layer/recipes-bsp/amt/lms_2406.0.0.0.bb
@@ -10,11 +10,9 @@ COMPATIBLE_HOST = '(i.86|x86_64).*-linux'
 
 COMPATIBLE_HOST:libc-musl = "null"
 
-inherit cmake systemd features_check
+inherit cmake systemd features_check python3native
 
-DEPENDS = "metee ace xerces-c libnl libxml2 glib-2.0 glib-2.0-native 
pkgconfig-native"
-
-EXTRA_OECMAKE += "-DPYTHON_EXECUTABLE=${HOSTTOOLS_DIR}/python3"
+DEPENDS = "metee ace xerces-c libnl libxml2 glib-2.0 glib-2.0-native 
pkgconfig-native python3-packaging-native"
 
 # Enable either connman or networkmanager or none but not both.
 PACKAGECONFIG ??= "connman"
-- 
2.34.1


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#8276): 
https://lists.yoctoproject.org/g/meta-intel/message/8276
Mute This Topic: https://lists.yoctoproject.org/mt/105454986/21656
Group Owner: meta-intel+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/meta-intel/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[Crash-utility] How to get module symbols working?

2024-04-05 Thread Naveen Chaudhary
I am analyzing a kdump with the latest crash utility (8.0.4++).

I think I loaded the module symbols correctly:
crash> mod
 MODULE   NAME  TEXT_BASE   SIZE  OBJECT FILE
80007a7e2040  npdereference  80007a7e  12288  (not loaded)  [CONFIG_KALLSYMS]
crash>
crash> mod -s npdereference /home/naveen/.repos/src/arm64/linux/drivers/naveen/npdereference.ko
 MODULE   NAME  TEXT_BASE   SIZE  OBJECT FILE
80007a7e2040  npdereference  80007a7e  12288  /home/naveen/.repos/src/arm64/linux/drivers/naveen/npdereference.ko

But my backtrace still doesn't show the correct symbol name:
#12 [800082c6ba60] _MODULE_INIT_TEXT_START_npdereference at 
80007a7e602c [npdereference]

The module is named "npdereference.ko" and the function where the crash occurs 
is shown below, so I expect "null_deref_module_init" to be present instead 
of "_MODULE_INIT_TEXT_START_npdereference":

static int __init null_deref_module_init(void) {
// Pointer to an integer, initialized to NULL
int *null_pointer = NULL;
printk(KERN_INFO "Null dereference module loaded\n");

// Dereferencing the NULL pointer to trigger a crash
printk(KERN_INFO "Triggering null pointer dereference...\n");
*null_pointer = 1; // This line will cause a null pointer dereference

return 0; // This will never be reached
}


The "sym" command also doesn't point me to the source file :
crash> sym 80007a7e602c
80007a7e602c (m) _MODULE_INIT_TEXT_START_npdereference+44 [npdereference]
crash>

Is there a way to make this work correctly? The kernel module here is called 
"npdereference.ko" and is in-tree (part of the kernel source repo).

Regards,
Naveen
--
Crash-utility mailing list -- devel@lists.crash-utility.osci.io
To unsubscribe send an email to devel-le...@lists.crash-utility.osci.io
https://${domain_name}/admin/lists/devel.lists.crash-utility.osci.io/
Contribution Guidelines: https://github.com/crash-utility/crash/wiki


[ovs-dev] [PATCH ovn v2] controller: Change dns resolution to async.

2024-04-04 Thread Naveen Yerramneni
Currently, DNS resolution is a blocking call in ovn-controller.
If the DNS server is unreachable for any reason, the ovn-controller
thread blocks for a long time and other events are not processed.

Ex: If we try to run ovn-appctl commands during this window, ovn-controller
will not respond for a long time.

Signed-off-by: Naveen Yerramneni 
Acked-by: Mark Michelson 
---
v2: Fix subject line
---
 controller/ovn-controller.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c
index c9ff5967a..b84f6dfd4 100644
--- a/controller/ovn-controller.c
+++ b/controller/ovn-controller.c
@@ -85,6 +85,7 @@
 #include "mirror.h"
 #include "mac_cache.h"
 #include "statctrl.h"
+#include "lib/dns-resolve.h"
 
 VLOG_DEFINE_THIS_MODULE(main);
 
@@ -5090,6 +5091,7 @@ main(int argc, char *argv[])
 mirror_init();
 vif_plug_provider_initialize();
 statctrl_init();
+dns_resolve_init(true);
 
 /* Connect to OVS OVSDB instance. */
 struct ovsdb_idl_loop ovs_idl_loop = OVSDB_IDL_LOOP_INITIALIZER(
@@ -6176,6 +6178,7 @@ loop_done:
 unixctl_server_destroy(unixctl);
 service_stop();
 ovsrcu_exit();
+dns_resolve_destroy();
 
 exit(retval);
 }
-- 
2.36.6

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[IMPROVEMENT] Using ServiceLoader to load ExtendedParseStrategy

2024-04-03 Thread Naveen Kumar
Hi All,

[DISCLAIMER] Please ignore if this is a duplicate request.

I was looking into the supported grammars for flink-sql. We have two
dialects: DEFAULT and HIVE. We also have ExtendedParser
<https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/parse/ExtendedParser.java>,
which helps support some special commands. Currently ExtendedParser can
only support a predefined set of strategies
<https://github.com/apache/flink/blob/master/flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/parse/ExtendedParser.java#L37>.
I was wondering if we can generalize ExtendedParser using ServiceLoader,
which could eventually let new grammars be added via runtime jars.

public Optional<Operation> parse(String statement) {
    for (ExtendedParseStrategy strategy : loadExtendedStrategies()) {
        if (strategy.match(statement)) {
            return Optional.of(strategy.convert(statement));
        }
    }
    return Optional.empty();
}

private static List<ExtendedParseStrategy> loadExtendedStrategies() {
    // load ExtendedParseStrategy implementations with ServiceLoader
    List<ExtendedParseStrategy> parseStrategies = new ArrayList<>();
    ServiceLoader<ExtendedParseStrategy> extendedParseStrategies =
            ServiceLoader.load(ExtendedParseStrategy.class);
    for (ExtendedParseStrategy extendedParseStrategy : extendedParseStrategies) {
        parseStrategies.add(extendedParseStrategy);
    }
    return parseStrategies;
}
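For reference, the provider-discovery half of this proposal can be sketched as a self-contained example. The class and method names below are illustrative stand-ins, not Flink's actual API; real providers would be listed in a META-INF/services file inside the runtime jar.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.ServiceLoader;

// Illustrative stand-in for Flink's ExtendedParseStrategy interface.
interface ParseStrategy {
    boolean match(String statement);
    String convert(String statement);
}

public class StrategyLoaderSketch {
    // ServiceLoader discovers implementations listed in
    // META-INF/services/<interface-name> files on the classpath, so new
    // grammars could be added by dropping in a jar at runtime.
    static List<ParseStrategy> loadStrategies() {
        List<ParseStrategy> strategies = new ArrayList<>();
        for (ParseStrategy s : ServiceLoader.load(ParseStrategy.class)) {
            strategies.add(s);
        }
        return strategies;
    }

    static Optional<String> parse(String statement) {
        for (ParseStrategy strategy : loadStrategies()) {
            if (strategy.match(statement)) {
                return Optional.of(strategy.convert(statement));
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // With no provider jars on the classpath, parse() falls through.
        System.out.println(parse("SHOW CUSTOM").isPresent());  // prints "false"
    }
}
```

With no registered providers the loader yields an empty iterator, which is why the fallthrough to the default parser matters in the proposed design.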


Please share your thoughts.

Thanks,
Naveen Kumar


[ovs-dev] [PATCH ovn] controller: change dns resolution to async.

2024-04-03 Thread Naveen Yerramneni
Currently, DNS resolution is a blocking call in ovn-controller.
If the DNS server is unreachable for any reason, the ovn-controller
thread blocks for a long time and other events are not processed.

Ex: If we try to run ovn-appctl commands during this window, ovn-controller
will not respond for a long time.

Signed-off-by: Naveen Yerramneni 
---
 controller/ovn-controller.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/controller/ovn-controller.c b/controller/ovn-controller.c
index c9ff5967a..b84f6dfd4 100644
--- a/controller/ovn-controller.c
+++ b/controller/ovn-controller.c
@@ -85,6 +85,7 @@
 #include "mirror.h"
 #include "mac_cache.h"
 #include "statctrl.h"
+#include "lib/dns-resolve.h"
 
 VLOG_DEFINE_THIS_MODULE(main);
 
@@ -5090,6 +5091,7 @@ main(int argc, char *argv[])
 mirror_init();
 vif_plug_provider_initialize();
 statctrl_init();
+dns_resolve_init(true);
 
 /* Connect to OVS OVSDB instance. */
 struct ovsdb_idl_loop ovs_idl_loop = OVSDB_IDL_LOOP_INITIALIZER(
@@ -6176,6 +6178,7 @@ loop_done:
 unixctl_server_destroy(unixctl);
 service_stop();
 ovsrcu_exit();
+dns_resolve_destroy();
 
 exit(retval);
 }
-- 
2.36.6



[Crash-utility] How to get module symbols working ?

2024-04-02 Thread Naveen Chaudhary
I am analyzing a kdump with the latest crash utility (8.0.4++).

I think I loaded the module symbols correctly :
crash> mod
 MODULE   NAME  TEXT_BASE   SIZE  OBJECT FILE
80007a7e2040  npdereference  80007a7e  12288  (not loaded)  
[CONFIG_KALLSYMS]
crash>
crash> mod -s npdereference 
/home/naveen/.repos/src/arm64/linux/drivers/naveen/npdereference.ko
 MODULE   NAME  TEXT_BASE   SIZE  OBJECT FILE
80007a7e2040  npdereference  80007a7e  12288  
/home/naveen/.repos/src/arm64/linux/drivers/naveen/npdereference.ko

But my backtrace still doesn't show the correct symbol name:
#12 [800082c6ba60] _MODULE_INIT_TEXT_START_npdereference at 
80007a7e602c [npdereference]

The "sym" command also doesn't point me to the source file :
crash> sym 80007a7e602c
80007a7e602c (m) _MODULE_INIT_TEXT_START_npdereference+44 [npdereference]
crash>

Is there a way to make this work correctly, or at least make the "sym" command 
point to the right source file? The kernel module here is called "npdereference.ko" 
and is in-tree (part of the kernel source repo).

Regards,
Naveen



[Crash-utility] Re: [Crash-Utility][PATCH] symbols.c: skip non-exist module memory type

2024-04-02 Thread Naveen Chaudhary
Thanks Tao,

On a funny side note, though I don't understand this area of the code much, I 
ironically made the exact same fix on my side to avoid the problem for the time 
being, thinking a different fix might be coming. Glad it's now taken care of. 
Thanks!

Regards,
Naveen



From: Tao Liu 
Sent: Tuesday, April 2, 2024 12:15 PM
To: devel@lists.crash-utility.osci.io 
Cc: Tao Liu ; Naveen Chaudhary 

Subject: [Crash-Utility][PATCH] symbols.c: skip non-exist module memory type

Not all mod_mem_type will be included for kernel modules. E.g. in the
following module case:

(gdb) p lm->symtable[0]
$1 = (struct syment *) 0x4dcbad0
(gdb) p lm->symtable[1]
$2 = (struct syment *) 0x4dcbb70
(gdb) p lm->symtable[2]
$3 = (struct syment *) 0x4dcbc10
(gdb) p lm->symtable[3]
$4 = (struct syment *) 0x0
(gdb) p lm->symtable[4]
$5 = (struct syment *) 0x4dcbcb0
(gdb) p lm->symtable[5]
$6 = (struct syment *) 0x4dcbd00
(gdb) p lm->symtable[6]
$7 = (struct syment *) 0x0
(gdb) p lm->symtable[7]
$8 = (struct syment *) 0x4dcbb48

The mod_mem types MOD_RO_AFTER_INIT (3) and MOD_INIT_RODATA (6) do not exist and
should be skipped, otherwise a segfault will happen.

Fixes: 7750e61fdb2a ("Support module memory layout change on Linux 6.4")

Signed-off-by: Tao Liu 
Reported-by: Naveen Chaudhary 
---
 symbols.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/symbols.c b/symbols.c
index cbc9ed1..27e55c6 100644
--- a/symbols.c
+++ b/symbols.c
@@ -5580,7 +5580,7 @@ value_search_module_6_4(ulong value, ulong *offset)
 sp = lm->symtable[t];
 sp_end = lm->symend[t];

-   if (value < sp->value || value > sp_end->value)
+   if (!sp || value < sp->value || value > sp_end->value)
 continue;

 splast = NULL;
--
2.40.1



Re: Hive jdbc connector

2024-04-02 Thread Naveen Gangam
Not sure if you got a response, but it should be safe to run with JRE 8.

On Thu, Feb 1, 2024 at 2:45 AM stephen vijay  wrote:

> Hi sir,
>
> Which Java version does the Hive JDBC connector support?
>
> Thanks,
> Vijay S.
>


Re: [ANNOUNCE] Apache Hive 4.0.0 Released

2024-04-02 Thread Naveen Gangam
Thank you for the tremendous amount of work put in by many many folks to
make this release happen, including projects hive is dependent upon like
tez.

Thank you to all the PMC members, committers and contributors for all the
work over the past 5+ years in shaping this release.

THANK YOU!!!

On Sun, Mar 31, 2024 at 8:54 AM Battula, Brahma Reddy 
wrote:

> Thank you for your hard work and dedication in releasing Apache Hive
> version 4.0.0.
>
>
>
> Congratulations to the entire team on this achievement. Keep up the great
> work!
>
>
>
> Is this considered GA?
>
>
>
> And it looks like we need to update the following location as well:
>
> https://hive.apache.org/general/downloads/
>
>
>
>
>
> *From: *Denys Kuzmenko 
> *Date: *Saturday, March 30, 2024 at 00:07
> *To: *user@hive.apache.org , d...@hive.apache.org <
> d...@hive.apache.org>
> *Subject: *[ANNOUNCE] Apache Hive 4.0.0 Released
>
> The Apache Hive team is proud to announce the release of Apache Hive
>
> version 4.0.0.
>
>
>
> The Apache Hive (TM) data warehouse software facilitates querying and
>
> managing large datasets residing in distributed storage. Built on top
>
> of Apache Hadoop (TM), it provides, among others:
>
>
>
> * Tools to enable easy data extract/transform/load (ETL)
>
>
>
> * A mechanism to impose structure on a variety of data formats
>
>
>
> * Access to files stored either directly in Apache HDFS (TM) or in other
>
>   data storage systems such as Apache HBase (TM)
>
>
>
> * Query execution via Apache Hadoop MapReduce, Apache Tez and Apache Spark 
> frameworks. (MapReduce is deprecated, and Spark has been removed so the text 
> needs to be modified depending on the release version)
>
>
>
> For Hive release details and downloads, please visit:
>
> https://hive.apache.org/downloads.html
>
>
>
> Hive 4.0.0 Release Notes are available here:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12343343=Text=12310843
>
>
>
> We would like to thank the many contributors who made this release
>
> possible.
>
>
>
> Regards,
>
>
>
> The Apache Hive Team
>
>



RE: [PATCH 19/26] netfs: New writeback implementation

2024-03-29 Thread Naveen Mamindlapalli
> -Original Message-
> From: David Howells 
> Sent: Thursday, March 28, 2024 10:04 PM
> To: Christian Brauner ; Jeff Layton 
> ;
> Gao Xiang ; Dominique Martinet
> 
> Cc: David Howells ; Matthew Wilcox
> ; Steve French ; Marc Dionne
> ; Paulo Alcantara ; Shyam
> Prasad N ; Tom Talpey ; Eric Van
> Hensbergen ; Ilya Dryomov ;
> ne...@lists.linux.dev; linux-cach...@redhat.com; 
> linux-...@lists.infradead.org;
> linux-c...@vger.kernel.org; linux-...@vger.kernel.org; ceph-
> de...@vger.kernel.org; v...@lists.linux.dev; linux-erofs@lists.ozlabs.org; 
> linux-
> fsde...@vger.kernel.org; linux...@kvack.org; net...@vger.kernel.org; linux-
> ker...@vger.kernel.org; Latchesar Ionkov ; Christian
> Schoenebeck 
> Subject: [PATCH 19/26] netfs: New writeback implementation
> 
> The current netfslib writeback implementation creates writeback requests of
> contiguous folio data and then separately tiles subrequests over the space
> twice, once for the server and once for the cache.  This creates a few
> issues:
> 
>  (1) Every time there's a discontiguity or a change between writing to only
>  one destination or writing to both, it must create a new request.
>  This makes it harder to do vectored writes.
> 
>  (2) The folios don't have the writeback mark removed until the end of the
>  request - and a request could be hundreds of megabytes.
> 
>  (3) In future, I want to support a larger cache granularity, which will
>  require aggregation of some folios that contain unmodified data (which
>  only need to go to the cache) and some which contain modifications
>  (which need to be uploaded and stored to the cache) - but, currently,
>  these are treated as discontiguous.
> 
> There's also a move to get everyone to use writeback_iter() to extract
> writable folios from the pagecache.  That said, currently writeback_iter()
> has some issues that make it less than ideal:
> 
>  (1) there's no way to cancel the iteration, even if you find a "temporary"
>  error that means the current folio and all subsequent folios are going
>  to fail;
> 
>  (2) there's no way to filter the folios being written back - something
>  that will impact Ceph with its ordered snap system;
> 
>  (3) and if you get a folio you can't immediately deal with (say you need
>  to flush the preceding writes), you are left with a folio hanging in
>  the locked state for the duration, when really we should unlock it and
>  relock it later.
> 
> In this new implementation, I use writeback_iter() to pump folios,
> progressively creating two parallel, but separate streams and cleaning up
> the finished folios as the subrequests complete.  Either or both streams
> can contain gaps, and the subrequests in each stream can be of variable
> size, don't need to align with each other and don't need to align with the
> folios.
> 
> Indeed, subrequests can cross folio boundaries, may cover several folios or
> a folio may be spanned by multiple subrequests, e.g.:
> 
>  +---+---+-+-+---+--+
> Folios:  |   |   | | |   |  |
>  +---+---+-+-+---+--+
> 
>+--+--+ +++
> Upload:|  |  |.|||
>+--+--+ +++
> 
>  +--+--+--+--+--+
> Cache:   |  |  |  |  |  |
>  +--+--+--+--+--+
> 
> The progressive subrequest construction permits the algorithm to be
> preparing both the next upload to the server and the next write to the
> cache whilst the previous ones are already in progress.  Throttling can be
> applied to control the rate of production of subrequests - and, in any
> case, we probably want to write them to the server in ascending order,
> particularly if the file will be extended.
> 
> Content crypto can also be prepared at the same time as the subrequests and
> run asynchronously, with the prepped requests being stalled until the
> crypto catches up with them.  This might also be useful for transport
> crypto, but that happens at a lower layer, so probably would be harder to
> pull off.
> 
> The algorithm is split into three parts:
> 
>  (1) The issuer.  This walks through the data, packaging it up, encrypting
>  it and creating subrequests.  The part of this that generates
>  subrequests only deals with file positions and spans and so is usable
>  for DIO/unbuffered writes as well as buffered writes.
> 
>  (2) The collector. This asynchronously collects completed subrequests,
>  unlocks folios, frees crypto buffers and performs any retries.  This
>  runs in a work queue so that the issuer can return to the caller for
>  writeback (so that the VM can have its kswapd thread back) or async
>  writes.
> 
>  (3) The retryer.  This pauses the issuer, waits for all outstanding
>  subrequests to complete and then goes through the failed subrequests
>  to reissue them.  This may 

RE: [PATCH net-next v5 1/3] net: ethernet: ti: Add accessors for struct k3_cppi_desc_pool members

2024-03-28 Thread Naveen Mamindlapalli

> -Original Message-
> From: Julien Panis 
> Sent: Thursday, March 28, 2024 2:57 PM
> To: David S. Miller ; Eric Dumazet
> ; Jakub Kicinski ; Paolo Abeni
> ; Russell King ; Alexei Starovoitov
> ; Daniel Borkmann ; Jesper Dangaard
> Brouer ; John Fastabend ;
> Sumit Semwal ; Christian König
> ; Simon Horman ; Andrew
> Lunn ; Ratheesh Kannoth 
> Cc: net...@vger.kernel.org; linux-ker...@vger.kernel.org; 
> b...@vger.kernel.org;
> linux-me...@vger.kernel.org; dri-devel@lists.freedesktop.org; linaro-mm-
> s...@lists.linaro.org; Julien Panis 
> Subject: [PATCH net-next v5 1/3] net: ethernet: ti: Add accessors
> for struct k3_cppi_desc_pool members
> 
> This patch adds accessors for desc_size and cpumem members. They may be
> used, for instance, to compute a descriptor index.
> 
> Signed-off-by: Julien Panis 
> ---
>  drivers/net/ethernet/ti/k3-cppi-desc-pool.c | 12 
> drivers/net/ethernet/ti/k3-cppi-desc-pool.h |  2 ++
>  2 files changed, 14 insertions(+)
> 
> diff --git a/drivers/net/ethernet/ti/k3-cppi-desc-pool.c 
> b/drivers/net/ethernet/ti/k3-
> cppi-desc-pool.c
> index 05cc7aab1ec8..fe8203c05731 100644
> --- a/drivers/net/ethernet/ti/k3-cppi-desc-pool.c
> +++ b/drivers/net/ethernet/ti/k3-cppi-desc-pool.c
> @@ -132,5 +132,17 @@ size_t k3_cppi_desc_pool_avail(struct
> k3_cppi_desc_pool *pool)  }  EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_avail);
> 
> +size_t k3_cppi_desc_pool_desc_size(struct k3_cppi_desc_pool *pool) {
> + return pool->desc_size;

Don't you need to add a NULL check on the pool pointer, since this function is exported?

> +}
> +EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_desc_size);
> +
> +void *k3_cppi_desc_pool_cpuaddr(struct k3_cppi_desc_pool *pool) {
> + return pool->cpumem;

Same here.

> +}
> +EXPORT_SYMBOL_GPL(k3_cppi_desc_pool_cpuaddr);
> +
>  MODULE_LICENSE("GPL");
>  MODULE_DESCRIPTION("TI K3 CPPI5 descriptors pool API"); diff --git
> a/drivers/net/ethernet/ti/k3-cppi-desc-pool.h 
> b/drivers/net/ethernet/ti/k3-cppi-desc-
> pool.h
> index a7e3fa5e7b62..149d5579a5e2 100644
> --- a/drivers/net/ethernet/ti/k3-cppi-desc-pool.h
> +++ b/drivers/net/ethernet/ti/k3-cppi-desc-pool.h
> @@ -26,5 +26,7 @@ k3_cppi_desc_pool_dma2virt(struct k3_cppi_desc_pool
> *pool, dma_addr_t dma);  void *k3_cppi_desc_pool_alloc(struct
> k3_cppi_desc_pool *pool);  void k3_cppi_desc_pool_free(struct
> k3_cppi_desc_pool *pool, void *addr);  size_t k3_cppi_desc_pool_avail(struct
> k3_cppi_desc_pool *pool);
> +size_t k3_cppi_desc_pool_desc_size(struct k3_cppi_desc_pool *pool);
> +void *k3_cppi_desc_pool_cpuaddr(struct k3_cppi_desc_pool *pool);
> 
>  #endif /* K3_CPPI_DESC_POOL_H_ */
> 
> --
> 2.37.3
> 



[meta-intel] [master][nanbield][kirkstone][PATCH] intel-microcode: upgrade 20231114 -> 20240312

2024-03-27 Thread Naveen Saini
Release notes:
https://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files/releases/tag/microcode-20240312

Fixes CVEs:
CVE-2023-39368 
[https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00972.html]
CVE-2023-38575 
[https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00982.html]
CVE-2023-28746 
[https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00898.html]
CVE-2023-22655 
[https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00960.html]
CVE-2023-43490 
[https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-01045.html]

Signed-off-by: Naveen Saini 
---
 ...{intel-microcode_20231114.bb => intel-microcode_20240312.bb} | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
 rename recipes-core/microcode/{intel-microcode_20231114.bb => 
intel-microcode_20240312.bb} (97%)

diff --git a/recipes-core/microcode/intel-microcode_20231114.bb 
b/recipes-core/microcode/intel-microcode_20240312.bb
similarity index 97%
rename from recipes-core/microcode/intel-microcode_20231114.bb
rename to recipes-core/microcode/intel-microcode_20240312.bb
index 9eea6d63..00b18231 100644
--- a/recipes-core/microcode/intel-microcode_20231114.bb
+++ b/recipes-core/microcode/intel-microcode_20240312.bb
@@ -16,7 +16,7 @@ LIC_FILES_CHKSUM = 
"file://license;md5=d8405101ec6e90c1d84b082b0c40c721"
 SRC_URI = 
"git://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files.git;protocol=https;branch=main
 \
"
 
-SRCREV = "ece0d294a29a1375397941a4e6f2f7217910bc89"
+SRCREV = "41af34500598418150aa298bb04e7edacc547897"
 
 DEPENDS = "iucode-tool-native"
 S = "${WORKDIR}/git"
-- 
2.34.1





[Bug target/112980] 64-bit powerpc ELFv2 does not allow nops to be generated before function global entry point

2024-03-25 Thread naveen at kernel dot org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112980

--- Comment #7 from Naveen N Rao  ---
I have been looking at an alternative approach to see if we can move the entire
function patching sequence out of line. However, the approach I am considering
is very specific to the linux kernel, and I don't see it applying for userspace
in a generic way. As such, I think there is value in addressing the current
limitation with -fpatchable-function-entry one way or another.

[ovs-dev] [PATCH OVN v5 4/4] tests: DHCP Relay Agent support for overlay IPv4 subnets.

2024-03-20 Thread Naveen Yerramneni
Added tests for DHCP Relay feature.

Signed-off-by: Naveen Yerramneni 
---
 tests/atlocal.in|   3 +
 tests/ovn-northd.at |  38 +++
 tests/ovn.at| 256 
 tests/system-ovn.at | 148 +
 4 files changed, 445 insertions(+)

diff --git a/tests/atlocal.in b/tests/atlocal.in
index 63d891b89..32d1c374e 100644
--- a/tests/atlocal.in
+++ b/tests/atlocal.in
@@ -187,6 +187,9 @@ fi
 # Set HAVE_DHCPD
 find_command dhcpd
 
+# Set HAVE_DHCLIENT
+find_command dhclient
+
 # Set HAVE_BFDD_BEACON
 find_command bfdd-beacon
 
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index 7893b0540..c042ad381 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -12150,6 +12150,44 @@ check_row_count nb:QoS 0
 AT_CLEANUP
 ])
 
+OVN_FOR_EACH_NORTHD_NO_HV([
+AT_SETUP([check DHCP RELAY])
+ovn_start NORTHD_TYPE
+
+check ovn-nbctl ls-add ls0
+check ovn-nbctl lsp-add ls0 ls0-port1
+check ovn-nbctl lsp-set-addresses ls0-port1 02:00:00:00:00:10
+check ovn-nbctl lr-add lr0
+check ovn-nbctl lrp-add lr0 lrp1 02:00:00:00:00:01 192.168.1.1/24
+check ovn-nbctl lsp-add ls0 lrp1-attachment
+check ovn-nbctl lsp-set-type lrp1-attachment router
+check ovn-nbctl lsp-set-addresses lrp1-attachment 00:00:00:00:ff:02
+check ovn-nbctl lsp-set-options lrp1-attachment router-port=lrp1
+check ovn-nbctl lrp-add lr0 lrp-ext 02:00:00:00:00:02 192.168.2.1/24
+
+dhcp_relay=$(ovn-nbctl create DHCP_Relay servers=172.16.1.1)
+check ovn-nbctl set Logical_Router_port lrp1 dhcp_relay=$dhcp_relay
+check ovn-nbctl set Logical_Switch ls0 
other_config:dhcp_relay_port=lrp1-attachment
+
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl lflow-list > lflows
+AT_CAPTURE_FILE([lflows])
+
+AT_CHECK([grep -e "DHCP_RELAY_" lflows | sed 's/table=../table=??/'], [0], [dnl
+  table=??(lr_in_ip_input ), priority=110  , match=(inport == "lrp1" && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && ip.frag == 0 && udp.src == 
68 && udp.dst == 67), action=(reg9[[7]] = dhcp_relay_req_chk(192.168.1.1, 
172.16.1.1);next; /* DHCP_RELAY_REQ */)
+  table=??(lr_in_ip_input ), priority=110  , match=(ip4.src == 172.16.1.1 
&& ip4.dst == 192.168.1.1 && ip.frag == 0 && udp.src == 67 && udp.dst == 67), 
action=(next;/* DHCP_RELAY_RESP */)
+  table=??(lr_in_dhcp_relay_req), priority=100  , match=(inport == "lrp1" && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[[7]]), 
action=(ip4.src=192.168.1.1;ip4.dst=172.16.1.1;udp.src=67;next; /* 
DHCP_RELAY_REQ */)
+  table=??(lr_in_dhcp_relay_req), priority=1, match=(inport == "lrp1" && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[[7]] == 0), action=(drop; /* DHCP_RELAY_REQ */)
+  table=??(lr_in_dhcp_relay_resp_chk), priority=100  , match=(ip4.src == 
172.16.1.1 && ip4.dst == 192.168.1.1 && udp.src == 67 && udp.dst == 67), 
action=(reg2 = ip4.dst;reg9[[8]] = dhcp_relay_resp_chk(192.168.1.1, 
172.16.1.1);next;/* DHCP_RELAY_RESP */)
+  table=??(lr_in_dhcp_relay_resp), priority=100  , match=(ip4.src == 
172.16.1.1 && reg2 == 192.168.1.1 && udp.src == 67 && udp.dst == 67 && 
reg9[[8]]), action=(ip4.src=192.168.1.1;udp.dst=68;outport="lrp1";output; /* 
DHCP_RELAY_RESP */)
+  table=??(lr_in_dhcp_relay_resp), priority=1, match=(ip4.src == 
172.16.1.1 && reg2 == 192.168.1.1 && udp.src == 67 && udp.dst == 67 && 
reg9[[8]] == 0), action=(drop; /* DHCP_RELAY_RESP */)
+  table=??(ls_in_l2_lkup  ), priority=100  , match=(inport == "ls0-port1" 
&& eth.src == 02:00:00:00:00:10 && ip4.src == 0.0.0.0 && ip4.dst == 
255.255.255.255 && udp.src == 68 && udp.dst == 67), 
action=(eth.dst=02:00:00:00:00:01;outport="lrp1-attachment";next;/* 
DHCP_RELAY_REQ */)
+])
+
+AT_CLEANUP
+])
+
 AT_SETUP([NB_Global and SB_Global incremental processing])
 
 ovn_start
diff --git a/tests/ovn.at b/tests/ovn.at
index 32e3d8b13..c2570167c 100644
--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -1672,6 +1672,40 @@ reg1[[0]] = put_dhcp_opts(offerip=1.2.3.4, 
domain_name=1.2.3.4);
 reg1[[0]] = put_dhcp_opts(offerip=1.2.3.4, domain_search_list=1.2.3.4);
 DHCPv4 option domain_search_list requires string value.
 
+#dhcp_relay_req_chk
+reg9[[7]] = dhcp_relay_req_chk(192.168.1.1, 172.16.1.1);
+encodes as 
controller(userdata=00.00.00.1c.00.00.00.00.80.01.08.08.00.00.00.07.c0.a8.01.01.ac.10.01.01,pause)
+
+reg9[[7]] = dhcp_relay_req_chk(192.168.1.1,172.16.1.1);
+formats as reg9[[7]] = dhcp_relay_req_chk(192.168.1.1, 172.16.1.1);
+encodes as 
controller(userdata=00.00.00.1c.00.00.00.00.80.01.08.08.00.00.00.07.c0.a8.01.01.ac.10.01.01,pause)
+
+reg9[[7..

[ovs-dev] [PATCH OVN v5 3/4] northd: DHCP Relay Agent support for overlay IPv4 subnets.

2024-03-20 Thread Naveen Yerramneni
NB SCHEMA CHANGES
-
  1. New DHCP_Relay table
  "DHCP_Relay": {
"columns": {
"name": {"type": "string"},
"servers": {"type": {"key": "string",
   "min": 0,
   "max": 1}},
"external_ids": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}}},
"options": {"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}},
"isRoot": true},
  2. New column to Logical_Router_Port table
  "dhcp_relay": {"type": {"key": {"type": "uuid",
"refTable": "DHCP_Relay",
"refType": "strong"},
"min": 0,
"max": 1}},

NEW PIPELINE STAGES
---
The following stages are added for the DHCP relay feature.
Some of the flows are fitted into existing pipeline stages.
  1. lr_in_dhcp_relay_req
   - This stage processes the DHCP request packets coming from DHCP clients.
   - DHCP request packets for which the dhcp_relay_req_chk action
     (which gets applied in the ip input stage) is successful are forwarded
     to the DHCP server.
   - DHCP request packets for which the dhcp_relay_req_chk action is
     unsuccessful get dropped.
  2. lr_in_dhcp_relay_resp_chk
   - This stage applies the dhcp_relay_resp_chk action to DHCP response
     packets coming from the DHCP server.
  3. lr_in_dhcp_relay_resp
   - DHCP response packets for which dhcp_relay_resp_chk is successful are
     forwarded to the DHCP clients.
   - DHCP response packets for which dhcp_relay_resp_chk is unsuccessful
     get dropped.

REGISTRY USAGE
---
  - reg9[7] : To store the result of dhcp_relay_req_chk action.
  - reg9[8] : To store the result of dhcp_relay_resp_chk action.
  - reg2 : To store the original dest ip for DHCP response packets.
   This is required to properly match the packets in
   lr_in_dhcp_relay_resp stage since dhcp_relay_resp_chk action
   changes the dest ip.

FLOWS
-

The following flows are added when DHCP relay is configured on one overlay
subnet; one additional flow is added in the ls_in_l2_lkup table for each VM
that is part of the subnet.

  1. table=27(ls_in_l2_lkup  ), priority=100  , match=(inport ==  
&& eth.src ==  && ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && 
udp.src == 68 && udp.dst == 67),
 action=(eth.dst=;outport=;next;/* DHCP_RELAY_REQ */)
  2. table=3 (lr_in_ip_input ), priority=110  , match=(inport ==  && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && ip.frag == 0 && udp.src == 
68 && udp.dst == 67),
 action=(reg9[7] = dhcp_relay_req_chk(, );next; /* 
DHCP_RELAY_REQ */)
  3. table=3 (lr_in_ip_input ), priority=110  , match=(ip4.src == 
 && ip4.dst ==  && udp.src == 67 && udp.dst == 67), 
action=(next;/* DHCP_RELAY_RESP */)
  4. table=4 (lr_in_dhcp_relay_req), priority=100  , match=(inport == "lrp1" && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[7]),
 action=(ip4.src=;ip4.dst=;udp.src=67;next; /* 
DHCP_RELAY_REQ */)
  5. table=4 (lr_in_dhcp_relay_req), priority=1, match=(inport ==  && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[7] == 0),
 action=(drop; /* DHCP_RELAY_REQ */)
  6. table=18(lr_in_dhcp_relay_resp_chk), priority=100  , match=(ip4.src == 
 && ip4.dst ==  && ip.frag == 0 && udp.src == 67 && udp.dst 
== 67),
 action=(reg2 = ip4.dst;reg9[8] = dhcp_relay_resp_chk(, 
);next;/* DHCP_RELAY_RESP */)
  7. table=19(lr_in_dhcp_relay_resp), priority=100  , match=(ip4.src == 
 && reg2 ==  && udp.src == 67 && udp.dst == 67 && reg9[8]),
 action=(ip4.src=;udp.dst=68;outport=;output; /* DHCP_RELAY_RESP 
*/)
  8. table=19(lr_in_dhcp_relay_resp), priority=1, match=(ip4.src == 
 && reg2 ==  && udp.src == 67 && udp.dst == 67 && reg9[8] 
== 0), action=(drop; /* DHCP_RELAY_RESP */)
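For illustration, the register gating that flows 4/5 (and analogously 7/8)
above encode can be sketched in C. This is a hypothetical sketch, not OVN
code; the enum and function names are assumptions made for the example:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch (not OVN code): the per-packet decision that the
 * lr_in_dhcp_relay_req flows 4 and 5 above express.  reg9[7] holds the
 * result of the dhcp_relay_req_chk action; a set bit means the packet
 * was rewritten successfully and the priority-100 flow forwards it,
 * while a clear bit makes the priority-1 flow drop it. */
enum relay_verdict { RELAY_FORWARD, RELAY_DROP };

static enum relay_verdict
dhcp_relay_req_verdict(uint32_t reg9)
{
    bool req_chk_ok = (reg9 >> 7) & 1;   /* reg9[7] */
    return req_chk_ok ? RELAY_FORWARD : RELAY_DROP;
}
```

The same pattern applies to the response path, with reg9[8] set by
dhcp_relay_resp_chk gating the forward/drop decision.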

Commands to enable the feature
--
  ovn-nbctl create DHCP_Relay name= servers=
  ovn-nbctl set Logical_Router_port  dhcp_relay=
  ovn-n

[ovs-dev] [PATCH OVN v5 2/4] controller: DHCP Relay Agent support for overlay IPv4 subnets.

2024-03-20 Thread Naveen Yerramneni
Added changes in pinctrl to process DHCP Relay opcodes:
  - ACTION_OPCODE_DHCP_RELAY_REQ_CHK: For request packets
  - ACTION_OPCODE_DHCP_RELAY_RESP_CHK: For response packet

Signed-off-by: Naveen Yerramneni 
---
 controller/pinctrl.c | 596 ++-
 lib/ovn-l7.h |   2 +
 2 files changed, 529 insertions(+), 69 deletions(-)

diff --git a/controller/pinctrl.c b/controller/pinctrl.c
index 2d3595cd2..11a5cac62 100644
--- a/controller/pinctrl.c
+++ b/controller/pinctrl.c
@@ -2017,6 +2017,514 @@ is_dhcp_flags_broadcast(ovs_be16 flags)
 return flags & htons(DHCP_BROADCAST_FLAG);
 }
 
+static const char *dhcp_msg_str[] = {
+[0] = "INVALID",
+[DHCP_MSG_DISCOVER] = "DISCOVER",
+[DHCP_MSG_OFFER] = "OFFER",
+[DHCP_MSG_REQUEST] = "REQUEST",
+[OVN_DHCP_MSG_DECLINE] = "DECLINE",
+[DHCP_MSG_ACK] = "ACK",
+[DHCP_MSG_NAK] = "NAK",
+[OVN_DHCP_MSG_RELEASE] = "RELEASE",
+[OVN_DHCP_MSG_INFORM] = "INFORM"
+};
+
+static bool
+dhcp_relay_is_msg_type_supported(uint8_t msg_type)
+{
+return (msg_type >= DHCP_MSG_DISCOVER && msg_type <= OVN_DHCP_MSG_RELEASE);
+}
+
+static const char *dhcp_msg_str_get(uint8_t msg_type)
+{
+if (!dhcp_relay_is_msg_type_supported(msg_type)) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "Unknown DHCP msg type: %u", msg_type);
+return "UNKNOWN";
+}
+return dhcp_msg_str[msg_type];
+}
+
+static const struct dhcp_header *
+dhcp_get_hdr_from_pkt(struct dp_packet *pkt_in, const char **in_dhcp_pptr,
+  const char *end)
+{
+/* Validate the DHCP request packet.
+ * Format of the DHCP packet is
+ * ---
+ *| UDP HEADER | DHCP HEADER | 4 Byte DHCP Cookie | DHCP OPTIONS(var len) |
+ * ---
+ */
+
+*in_dhcp_pptr = dp_packet_get_udp_payload(pkt_in);
+if (*in_dhcp_pptr == NULL) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Invalid or incomplete DHCP packet received");
+return NULL;
+}
+
+const struct dhcp_header *dhcp_hdr
+= (const struct dhcp_header *) *in_dhcp_pptr;
+(*in_dhcp_pptr) += sizeof *dhcp_hdr;
+if (*in_dhcp_pptr > end) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Invalid or incomplete DHCP packet received, "
+ "bad data length");
+return NULL;
+}
+
+if (dhcp_hdr->htype != 0x1) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Packet is received with "
+"unsupported hardware type");
+return NULL;
+}
+
+if (dhcp_hdr->hlen != 0x6) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Packet is received with "
+"unsupported hardware length");
+return NULL;
+}
+
+/* DHCP options follow the DHCP header. The first 4 bytes of the DHCP
+ * options is the DHCP magic cookie followed by the actual DHCP options.
+ */
+ovs_be32 magic_cookie = htonl(DHCP_MAGIC_COOKIE);
+if ((*in_dhcp_pptr) + sizeof magic_cookie > end ||
+get_unaligned_be32((const void *) (*in_dhcp_pptr)) != magic_cookie) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Magic cookie not present in the DHCP packet");
+return NULL;
+}
+
+(*in_dhcp_pptr) += sizeof magic_cookie;
+
+return dhcp_hdr;
+}
+
+static void
+dhcp_parse_options(const char **in_dhcp_pptr, const char *end,
+  const uint8_t **dhcp_msg_type_pptr, ovs_be32 *request_ip_ptr,
+  bool *ipxe_req_ptr, ovs_be32 *server_id_ptr,
+  ovs_be32 *netmask_ptr, ovs_be32 *router_ip_ptr)
+{
+while ((*in_dhcp_pptr) < end) {
+const struct dhcp_opt_header *in_dhcp_opt =
+(const struct dhcp_opt_header *) *in_dhcp_pptr;
+if (in_dhcp_opt->code == DHCP_OPT_END) {
+break;
+}
+if (in_dhcp_opt->code == DHCP_OPT_PAD) {
+(*in_dhcp_pptr) += 1;
+continue;
+}
+(*in_dhcp_pptr) += sizeof *in_dhcp_opt;
+if ((*in_dhcp_pptr) > end) {
+break;
+}
+(*in_dhcp_pptr) += in_dhcp_opt->len;
+if ((*in_dhcp_pptr) > end) {
+break;
+}
+
+switch (in_dhcp_opt->code) {
+case DHCP_OPT_MSG_TYPE:
+if (dhcp_msg_type_pptr && in_dhcp_opt->len == 1) {
+*dhcp_msg_type_pptr = DHCP_OPT_PAYLOAD(in_dhc

[ovs-dev] [PATCH OVN v5 1/4] actions: DHCP Relay Agent support for overlay IPv4 subnets.

2024-03-20 Thread Naveen Yerramneni
NEW OVN ACTIONS
---
  1. dhcp_relay_req_chk(, )
   - This action executes on the source node on which the DHCP request
 originated.
   - This action relays the DHCP request coming from the client to the server.
 Relay-ip is used to update GIADDR in the DHCP header.
  2. dhcp_relay_resp_chk(, )
   - This action executes on the first node (RC node) which processes
 the DHCP response from the server.
   - This action updates the destination MAC and destination IP so that
 the response can be forwarded to the appropriate node from which the
 request originated.
   - Relay-ip and server-ip are used to validate GIADDR and SERVER ID in
 the DHCP payload.
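The GIADDR update performed on the request path can be illustrated with a
minimal C sketch. The struct layout follows RFC 2131 field order; the struct
and function names here are assumptions for illustration, not OVN's actual
definitions:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Minimal sketch of the relay-side rewrite: before forwarding a client
 * request to the DHCP server, the relay writes its own IP into the
 * giaddr field so the server knows which subnet to allocate from
 * (RFC 2131).  Field names follow the RFC; this is not OVN's struct. */
struct dhcp_hdr_sketch {
    uint8_t  op, htype, hlen, hops;
    uint32_t xid;
    uint16_t secs, flags;
    uint32_t ciaddr, yiaddr, siaddr, giaddr;  /* network byte order */
};

static void
relay_set_giaddr(struct dhcp_hdr_sketch *h, uint32_t relay_ip_be)
{
    if (h->giaddr == 0) {     /* only the first relay sets GIADDR */
        h->giaddr = relay_ip_be;
    }
}
```

On the response path the server echoes GIADDR back, which is what
dhcp_relay_resp_chk validates against the configured relay IP.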

Signed-off-by: Naveen Yerramneni 
---
 include/ovn/actions.h |  27 
 lib/actions.c | 149 ++
 utilities/ovn-trace.c |  67 +++
 3 files changed, 243 insertions(+)

diff --git a/include/ovn/actions.h b/include/ovn/actions.h
index dcacbb1ff..8d0c6b9fa 100644
--- a/include/ovn/actions.h
+++ b/include/ovn/actions.h
@@ -96,6 +96,8 @@ struct collector_set_ids;
 OVNACT(LOOKUP_ND_IP,  ovnact_lookup_mac_bind_ip) \
 OVNACT(PUT_DHCPV4_OPTS,   ovnact_put_opts)\
 OVNACT(PUT_DHCPV6_OPTS,   ovnact_put_opts)\
+OVNACT(DHCPV4_RELAY_REQ_CHK,  ovnact_dhcp_relay)  \
+OVNACT(DHCPV4_RELAY_RESP_CHK, ovnact_dhcp_relay)  \
 OVNACT(SET_QUEUE, ovnact_set_queue)   \
 OVNACT(DNS_LOOKUP,ovnact_result)  \
 OVNACT(LOG,   ovnact_log) \
@@ -396,6 +398,15 @@ struct ovnact_put_opts {
 size_t n_options;
 };
 
+/* OVNACT_DHCP_RELAY. */
+struct ovnact_dhcp_relay {
+struct ovnact ovnact;
+int family;
+struct expr_field dst;  /* 1-bit destination field. */
+ovs_be32 relay_ipv4;
+ovs_be32 server_ipv4;
+};
+
 /* Valid arguments to SET_QUEUE action.
  *
  * QDISC_MIN_QUEUE_ID is the default queue, so user-defined queues should
@@ -772,6 +783,22 @@ enum action_opcode {
 
 /* multicast group split buffer action. */
 ACTION_OPCODE_MG_SPLIT_BUF,
+
+/* "dhcp_relay_req_chk(relay_ip, server_ip)".
+ *
+ * Arguments follow the action_header, in this format:
+ *   - The 32-bit DHCP relay IP.
+ *   - The 32-bit DHCP server IP.
+ */
+ACTION_OPCODE_DHCP_RELAY_REQ_CHK,
+
+/* "dhcp_relay_resp_chk(relay_ip, server_ip)".
+ *
+ * Arguments follow the action_header, in this format:
+ *   - The 32-bit DHCP relay IP.
+ *   - The 32-bit DHCP server IP.
+ */
+ACTION_OPCODE_DHCP_RELAY_RESP_CHK,
 };
 
 /* Header. */
diff --git a/lib/actions.c b/lib/actions.c
index 71615fc53..d4f4ec2d0 100644
--- a/lib/actions.c
+++ b/lib/actions.c
@@ -2706,6 +2706,149 @@ ovnact_controller_event_free(struct 
ovnact_controller_event *event)
 free_gen_options(event->options, event->n_options);
 }
 
+static void
+format_DHCPV4_RELAY_REQ_CHK(const struct ovnact_dhcp_relay *dhcp_relay,
+struct ds *s)
+{
+expr_field_format(&dhcp_relay->dst, s);
+ds_put_format(s, " = dhcp_relay_req_chk("IP_FMT", "IP_FMT");",
+  IP_ARGS(dhcp_relay->relay_ipv4),
+  IP_ARGS(dhcp_relay->server_ipv4));
+}
+
+static void
+parse_dhcp_relay_req_chk(struct action_context *ctx,
+   const struct expr_field *dst,
+   struct ovnact_dhcp_relay *dhcp_relay)
+{
+/* Skip dhcp_relay_req_chk( */
+lexer_force_match(ctx->lexer, LEX_T_LPAREN);
+
+/* Validate that the destination is a 1-bit, modifiable field. */
+char *error = expr_type_check(dst, 1, true, ctx->scope);
+if (error) {
+lexer_error(ctx->lexer, "%s", error);
+free(error);
+return;
+}
+dhcp_relay->dst = *dst;
+
+/* Parse relay ip and server ip. */
+if (ctx->lexer->token.format == LEX_F_IPV4) {
+dhcp_relay->family = AF_INET;
+dhcp_relay->relay_ipv4 = ctx->lexer->token.value.ipv4;
+lexer_get(ctx->lexer);
+lexer_match(ctx->lexer, LEX_T_COMMA);
+if (ctx->lexer->token.format == LEX_F_IPV4) {
+dhcp_relay->family = AF_INET;
+dhcp_relay->server_ipv4 = ctx->lexer->token.value.ipv4;
+lexer_get(ctx->lexer);
+} else {
+lexer_syntax_error(ctx->lexer, "expecting IPv4 dhcp server ip");
+return;
+}
+} else {
+  lexer_syntax_error(ctx->lexer, "expecting IPv4 dhcp relay "
+  "and server ips");
+  return;
+}
+lexer_force_match(ctx->lexer, LEX_T_RPAREN);
+}
+
+static void
+encode_DHCPV4_RELAY_REQ_CHK(const struct ovnact_dhcp_relay *dhcp_relay,
+const struct ovnact_encode_pa

[ovs-dev] [PATCH OVN v5 0/4] DHCP Relay Agent support for overlay subnets.

2024-03-20 Thread Naveen Yerramneni
p-add ls0 vif0
 ovn-nbctl lsp-set-addresses vif0  #Only MAC address has to be 
specified when logical ports are created.
 ovn-nbctl lsp-add ls0 lrp1-attachment
 ovn-nbctl lsp-set-type lrp1-attachment router
 ovn-nbctl lsp-set-addresses lrp1-attachment
 ovn-nbctl lsp-set-options lrp1-attachment router-port=lrp1
 ovn-nbctl lr-add lr0
 ovn-nbctl lrp-add lr0 lrp1   #GATEWAY IP is set in 
GIADDR field when relaying the DHCP requests to server.
 ovn-nbctl lrp-add lr0 lrp-ext  
 ovn-nbctl ls-add ls-ext
 ovn-nbctl lsp-add ls-ext lrp-ext-attachment
 ovn-nbctl lsp-set-type lrp-ext-attachment router
 ovn-nbctl lsp-set-addresses lrp-ext-attachment
 ovn-nbctl lsp-set-options lrp-ext-attachment router-port=lrp-ext
 ovn-nbctl lsp-add ls-ext ln_port
 ovn-nbctl lsp-set-addresses ln_port unknown
 ovn-nbctl lsp-set-type ln_port localnet
 ovn-nbctl lsp-set-options ln_port network_name=physnet1
 # Enable DHCP Relay feature
 ovn-nbctl create DHCP_Relay name=dhcp_relay_test servers=
 ovn-nbctl set Logical_Router_port lrp1 dhcp_relay=
 ovn-nbctl set Logical_Switch ls0 
other_config:dhcp_relay_port=lrp1-attachment

Limitations:

  - All OVN features that need an IP address to be configured on the logical
port (like proxy ARP, etc.) will not be supported for overlay subnets on
which DHCP relay is enabled.

References:
--
  - rfc1541, rfc1542, rfc2131

V1:
  - First patch.

V2:
  - Addressed review comments from Numan.

V3:
  - Split the patch into series.
  - Addressed review comments from Numan.
  - Updated the match condition for DHCP Relay flows.

V4:
  - Fix sparse errors.
  - Reorder patch series.

V5:
  - Fix test failures.

Naveen Yerramneni (4):
  actions: DHCP Relay Agent support for overlay IPv4 subnets.
  controller: DHCP Relay Agent support for overlay IPv4 subnets.
  northd: DHCP Relay Agent support for overlay IPv4 subnets.
  tests: DHCP Relay Agent support for overlay IPv4 subnets.

 controller/pinctrl.c  | 596 +-
 include/ovn/actions.h |  27 ++
 lib/actions.c | 149 +++
 lib/ovn-l7.h  |   2 +
 northd/northd.c   | 271 ++-
 northd/northd.h   |  41 +--
 ovn-nb.ovsschema  |  19 +-
 ovn-nb.xml|  39 +++
 tests/atlocal.in  |   3 +
 tests/ovn-northd.at   |  38 +++
 tests/ovn.at  | 258 +-
 tests/system-ovn.at   | 148 +++
 utilities/ovn-trace.c |  67 +
 13 files changed, 1566 insertions(+), 92 deletions(-)

-- 
2.36.6

___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] [PATCH OVN v4 4/4] tests: DHCP Relay Agent support for overlay IPv4 subnets.

2024-03-19 Thread Naveen Yerramneni
Added tests for DHCP Relay feature.

Signed-off-by: Naveen Yerramneni 
---
 tests/atlocal.in|   3 +
 tests/ovn-northd.at |  38 +++
 tests/ovn.at| 258 +++-
 tests/system-ovn.at | 148 +
 4 files changed, 446 insertions(+), 1 deletion(-)

diff --git a/tests/atlocal.in b/tests/atlocal.in
index 63d891b89..32d1c374e 100644
--- a/tests/atlocal.in
+++ b/tests/atlocal.in
@@ -187,6 +187,9 @@ fi
 # Set HAVE_DHCPD
 find_command dhcpd
 
+# Set HAVE_DHCLIENT
+find_command dhclient
+
 # Set HAVE_BFDD_BEACON
 find_command bfdd-beacon
 
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index 7893b0540..c042ad381 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -12150,6 +12150,44 @@ check_row_count nb:QoS 0
 AT_CLEANUP
 ])
 
+OVN_FOR_EACH_NORTHD_NO_HV([
+AT_SETUP([check DHCP RELAY])
+ovn_start NORTHD_TYPE
+
+check ovn-nbctl ls-add ls0
+check ovn-nbctl lsp-add ls0 ls0-port1
+check ovn-nbctl lsp-set-addresses ls0-port1 02:00:00:00:00:10
+check ovn-nbctl lr-add lr0
+check ovn-nbctl lrp-add lr0 lrp1 02:00:00:00:00:01 192.168.1.1/24
+check ovn-nbctl lsp-add ls0 lrp1-attachment
+check ovn-nbctl lsp-set-type lrp1-attachment router
+check ovn-nbctl lsp-set-addresses lrp1-attachment 00:00:00:00:ff:02
+check ovn-nbctl lsp-set-options lrp1-attachment router-port=lrp1
+check ovn-nbctl lrp-add lr0 lrp-ext 02:00:00:00:00:02 192.168.2.1/24
+
+dhcp_relay=$(ovn-nbctl create DHCP_Relay servers=172.16.1.1)
+check ovn-nbctl set Logical_Router_port lrp1 dhcp_relay=$dhcp_relay
+check ovn-nbctl set Logical_Switch ls0 
other_config:dhcp_relay_port=lrp1-attachment
+
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl lflow-list > lflows
+AT_CAPTURE_FILE([lflows])
+
+AT_CHECK([grep -e "DHCP_RELAY_" lflows | sed 's/table=../table=??/'], [0], [dnl
+  table=??(lr_in_ip_input ), priority=110  , match=(inport == "lrp1" && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && ip.frag == 0 && udp.src == 
68 && udp.dst == 67), action=(reg9[[7]] = dhcp_relay_req_chk(192.168.1.1, 
172.16.1.1);next; /* DHCP_RELAY_REQ */)
+  table=??(lr_in_ip_input ), priority=110  , match=(ip4.src == 172.16.1.1 
&& ip4.dst == 192.168.1.1 && ip.frag == 0 && udp.src == 67 && udp.dst == 67), 
action=(next;/* DHCP_RELAY_RESP */)
+  table=??(lr_in_dhcp_relay_req), priority=100  , match=(inport == "lrp1" && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[[7]]), 
action=(ip4.src=192.168.1.1;ip4.dst=172.16.1.1;udp.src=67;next; /* 
DHCP_RELAY_REQ */)
+  table=??(lr_in_dhcp_relay_req), priority=1, match=(inport == "lrp1" && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[[7]] == 0), action=(drop; /* DHCP_RELAY_REQ */)
+  table=??(lr_in_dhcp_relay_resp_chk), priority=100  , match=(ip4.src == 
172.16.1.1 && ip4.dst == 192.168.1.1 && udp.src == 67 && udp.dst == 67), 
action=(reg2 = ip4.dst;reg9[[8]] = dhcp_relay_resp_chk(192.168.1.1, 
172.16.1.1);next;/* DHCP_RELAY_RESP */)
+  table=??(lr_in_dhcp_relay_resp), priority=100  , match=(ip4.src == 
172.16.1.1 && reg2 == 192.168.1.1 && udp.src == 67 && udp.dst == 67 && 
reg9[[8]]), action=(ip4.src=192.168.1.1;udp.dst=68;outport="lrp1";output; /* 
DHCP_RELAY_RESP */)
+  table=??(lr_in_dhcp_relay_resp), priority=1, match=(ip4.src == 
172.16.1.1 && reg2 == 192.168.1.1 && udp.src == 67 && udp.dst == 67 && 
reg9[[8]] == 0), action=(drop; /* DHCP_RELAY_RESP */)
+  table=??(ls_in_l2_lkup  ), priority=100  , match=(inport == "ls0-port1" 
&& eth.src == 02:00:00:00:00:10 && ip4.src == 0.0.0.0 && ip4.dst == 
255.255.255.255 && udp.src == 68 && udp.dst == 67), 
action=(eth.dst=02:00:00:00:00:01;outport="lrp1-attachment";next;/* 
DHCP_RELAY_REQ */)
+])
+
+AT_CLEANUP
+])
+
 AT_SETUP([NB_Global and SB_Global incremental processing])
 
 ovn_start
diff --git a/tests/ovn.at b/tests/ovn.at
index 4d0c7ad53..c2570167c 100644
--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -1672,6 +1672,40 @@ reg1[[0]] = put_dhcp_opts(offerip=1.2.3.4, 
domain_name=1.2.3.4);
 reg1[[0]] = put_dhcp_opts(offerip=1.2.3.4, domain_search_list=1.2.3.4);
 DHCPv4 option domain_search_list requires string value.
 
+#dhcp_relay_req_chk
+reg9[[7]] = dhcp_relay_req_chk(192.168.1.1, 172.16.1.1);
+encodes as 
controller(userdata=00.00.00.1c.00.00.00.00.80.01.08.08.00.00.00.07.c0.a8.01.01.ac.10.01.01,pause)
+
+reg9[[7]] = dhcp_relay_req_chk(192.168.1.1,172.16.1.1);
+formats as reg9[[7]] = dhcp_relay_req_chk(192.168.1.1, 172.16.1.1);
+encodes as 
controller(userdata=00.00.00.1c.00.00.00.00.80.01.08.08.00.00.00.07.c0.a8.01.01.ac.10.01.01,pause)
+
+

[ovs-dev] [PATCH OVN v4 3/4] northd: DHCP Relay Agent support for overlay IPv4 subnets.

2024-03-19 Thread Naveen Yerramneni
NB SCHEMA CHANGES
-
  1. New DHCP_Relay table
  "DHCP_Relay": {
"columns": {
"name": {"type": "string"},
"servers": {"type": {"key": "string",
   "min": 0,
   "max": 1}},
"external_ids": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}}},
"options": {"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}},
"isRoot": true},
  2. New column to Logical_Router_Port table
  "dhcp_relay": {"type": {"key": {"type": "uuid",
"refTable": "DHCP_Relay",
"refType": "strong"},
"min": 0,
"max": 1}},

NEW PIPELINE STAGES
---
The following stage is added for the DHCP relay feature.
Some of the flows are fitted into the existing pipeline stages.
  1. lr_in_dhcp_relay_req
   - This stage processes the DHCP request packets coming from DHCP clients.
   - DHCP request packets for which the dhcp_relay_req_chk action
 (which gets applied in the ip input stage) is successful are forwarded
 to the DHCP server.
   - DHCP request packets for which the dhcp_relay_req_chk action is
 unsuccessful are dropped.
  2. lr_in_dhcp_relay_resp_chk
   - This stage applies the dhcp_relay_resp_chk action to DHCP response
 packets coming from the DHCP server.
  3. lr_in_dhcp_relay_resp
   - DHCP response packets for which dhcp_relay_resp_chk is successful
 are forwarded to the DHCP clients.
   - DHCP response packets for which dhcp_relay_resp_chk is unsuccessful
 are dropped.

REGISTRY USAGE
---
  - reg9[7] : To store the result of dhcp_relay_req_chk action.
  - reg9[8] : To store the result of dhcp_relay_resp_chk action.
  - reg2 : To store the original dest ip for DHCP response packets.
   This is required to properly match the packets in
   lr_in_dhcp_relay_resp stage since dhcp_relay_resp_chk action
   changes the dest ip.

FLOWS
-

Following are the flows added when DHCP Relay is configured on one overlay
subnet; one additional flow is added in the ls_in_l2_lkup table for each VM
that is part of the subnet.

  1. table=27(ls_in_l2_lkup  ), priority=100  , match=(inport ==  
&& eth.src ==  && ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && 
udp.src == 68 && udp.dst == 67),
 action=(eth.dst=;outport=;next;/* DHCP_RELAY_REQ */)
  2. table=3 (lr_in_ip_input ), priority=110  , match=(inport ==  && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && ip.frag == 0 && udp.src == 
68 && udp.dst == 67),
 action=(reg9[7] = dhcp_relay_req_chk(, );next; /* 
DHCP_RELAY_REQ */)
  3. table=3 (lr_in_ip_input ), priority=110  , match=(ip4.src == 
 && ip4.dst ==  && udp.src == 67 && udp.dst == 67), 
action=(next;/* DHCP_RELAY_RESP */)
  4. table=4 (lr_in_dhcp_relay_req), priority=100  , match=(inport == "lrp1" && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[7]),
 action=(ip4.src=;ip4.dst=;udp.src=67;next; /* 
DHCP_RELAY_REQ */)
  5. table=4 (lr_in_dhcp_relay_req), priority=1, match=(inport ==  && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[7] == 0),
 action=(drop; /* DHCP_RELAY_REQ */)
  6. table=18(lr_in_dhcp_relay_resp_chk), priority=100  , match=(ip4.src == 
 && ip4.dst ==  && ip.frag == 0 && udp.src == 67 && udp.dst 
== 67),
 action=(reg2 = ip4.dst;reg9[8] = dhcp_relay_resp_chk(, 
);next;/* DHCP_RELAY_RESP */)
  7. table=19(lr_in_dhcp_relay_resp), priority=100  , match=(ip4.src == 
 && reg2 ==  && udp.src == 67 && udp.dst == 67 && reg9[8]),
 action=(ip4.src=;udp.dst=68;outport=;output; /* DHCP_RELAY_RESP 
*/)
  8. table=19(lr_in_dhcp_relay_resp), priority=1, match=(ip4.src == 
 && reg2 ==  && udp.src == 67 && udp.dst == 67 && reg9[8] 
== 0), action=(drop; /* DHCP_RELAY_RESP */)

Commands to enable the feature
--
  ovn-nbctl create DHCP_Relay name= servers=
  ovn-nbctl set Logical_Router_port  dhcp_relay=
  ovn-nbctl 

[ovs-dev] [PATCH OVN v4 2/4] controller: DHCP Relay Agent support for overlay IPv4 subnets.

2024-03-19 Thread Naveen Yerramneni
Added changes in pinctrl to process DHCP Relay opcodes:
  - ACTION_OPCODE_DHCP_RELAY_REQ_CHK: For request packets
  - ACTION_OPCODE_DHCP_RELAY_RESP_CHK: For response packet

Signed-off-by: Naveen Yerramneni 
---
 controller/pinctrl.c | 596 ++-
 lib/ovn-l7.h |   2 +
 2 files changed, 529 insertions(+), 69 deletions(-)

diff --git a/controller/pinctrl.c b/controller/pinctrl.c
index 2d3595cd2..11a5cac62 100644
--- a/controller/pinctrl.c
+++ b/controller/pinctrl.c
@@ -2017,6 +2017,514 @@ is_dhcp_flags_broadcast(ovs_be16 flags)
 return flags & htons(DHCP_BROADCAST_FLAG);
 }
 
+static const char *dhcp_msg_str[] = {
+[0] = "INVALID",
+[DHCP_MSG_DISCOVER] = "DISCOVER",
+[DHCP_MSG_OFFER] = "OFFER",
+[DHCP_MSG_REQUEST] = "REQUEST",
+[OVN_DHCP_MSG_DECLINE] = "DECLINE",
+[DHCP_MSG_ACK] = "ACK",
+[DHCP_MSG_NAK] = "NAK",
+[OVN_DHCP_MSG_RELEASE] = "RELEASE",
+[OVN_DHCP_MSG_INFORM] = "INFORM"
+};
+
+static bool
+dhcp_relay_is_msg_type_supported(uint8_t msg_type)
+{
+return (msg_type >= DHCP_MSG_DISCOVER && msg_type <= OVN_DHCP_MSG_RELEASE);
+}
+
+static const char *dhcp_msg_str_get(uint8_t msg_type)
+{
+if (!dhcp_relay_is_msg_type_supported(msg_type)) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "Unknown DHCP msg type: %u", msg_type);
+return "UNKNOWN";
+}
+return dhcp_msg_str[msg_type];
+}
+
+static const struct dhcp_header *
+dhcp_get_hdr_from_pkt(struct dp_packet *pkt_in, const char **in_dhcp_pptr,
+  const char *end)
+{
+/* Validate the DHCP request packet.
+ * Format of the DHCP packet is
+ * ---
+ *| UDP HEADER | DHCP HEADER | 4 Byte DHCP Cookie | DHCP OPTIONS(var len) |
+ * ---
+ */
+
+*in_dhcp_pptr = dp_packet_get_udp_payload(pkt_in);
+if (*in_dhcp_pptr == NULL) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Invalid or incomplete DHCP packet received");
+return NULL;
+}
+
+const struct dhcp_header *dhcp_hdr
+= (const struct dhcp_header *) *in_dhcp_pptr;
+(*in_dhcp_pptr) += sizeof *dhcp_hdr;
+if (*in_dhcp_pptr > end) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Invalid or incomplete DHCP packet received, "
+ "bad data length");
+return NULL;
+}
+
+if (dhcp_hdr->htype != 0x1) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Packet is received with "
+"unsupported hardware type");
+return NULL;
+}
+
+if (dhcp_hdr->hlen != 0x6) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Packet is received with "
+"unsupported hardware length");
+return NULL;
+}
+
+/* DHCP options follow the DHCP header. The first 4 bytes of the DHCP
+ * options is the DHCP magic cookie followed by the actual DHCP options.
+ */
+ovs_be32 magic_cookie = htonl(DHCP_MAGIC_COOKIE);
+if ((*in_dhcp_pptr) + sizeof magic_cookie > end ||
+get_unaligned_be32((const void *) (*in_dhcp_pptr)) != magic_cookie) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Magic cookie not present in the DHCP packet");
+return NULL;
+}
+
+(*in_dhcp_pptr) += sizeof magic_cookie;
+
+return dhcp_hdr;
+}
+
+static void
+dhcp_parse_options(const char **in_dhcp_pptr, const char *end,
+  const uint8_t **dhcp_msg_type_pptr, ovs_be32 *request_ip_ptr,
+  bool *ipxe_req_ptr, ovs_be32 *server_id_ptr,
+  ovs_be32 *netmask_ptr, ovs_be32 *router_ip_ptr)
+{
+while ((*in_dhcp_pptr) < end) {
+const struct dhcp_opt_header *in_dhcp_opt =
+(const struct dhcp_opt_header *) *in_dhcp_pptr;
+if (in_dhcp_opt->code == DHCP_OPT_END) {
+break;
+}
+if (in_dhcp_opt->code == DHCP_OPT_PAD) {
+(*in_dhcp_pptr) += 1;
+continue;
+}
+(*in_dhcp_pptr) += sizeof *in_dhcp_opt;
+if ((*in_dhcp_pptr) > end) {
+break;
+}
+(*in_dhcp_pptr) += in_dhcp_opt->len;
+if ((*in_dhcp_pptr) > end) {
+break;
+}
+
+switch (in_dhcp_opt->code) {
+case DHCP_OPT_MSG_TYPE:
+if (dhcp_msg_type_pptr && in_dhcp_opt->len == 1) {
+*dhcp_msg_type_pptr = DHCP_OPT_PAYLOAD(in_dhc

[ovs-dev] [PATCH OVN v4 1/4] actions: DHCP Relay Agent support for overlay IPv4 subnets.

2024-03-19 Thread Naveen Yerramneni
NEW OVN ACTIONS
---
  1. dhcp_relay_req_chk(, )
   - This action executes on the source node on which the DHCP request
 originated.
   - This action relays the DHCP request coming from the client to the server.
 Relay-ip is used to update GIADDR in the DHCP header.
  2. dhcp_relay_resp_chk(, )
   - This action executes on the first node (RC node) which processes
 the DHCP response from the server.
   - This action updates the destination MAC and destination IP so that
 the response can be forwarded to the appropriate node from which the
 request originated.
   - Relay-ip and server-ip are used to validate GIADDR and SERVER ID in
 the DHCP payload.

Signed-off-by: Naveen Yerramneni 
---
 include/ovn/actions.h |  27 
 lib/actions.c | 149 ++
 utilities/ovn-trace.c |  67 +++
 3 files changed, 243 insertions(+)

diff --git a/include/ovn/actions.h b/include/ovn/actions.h
index dcacbb1ff..8d0c6b9fa 100644
--- a/include/ovn/actions.h
+++ b/include/ovn/actions.h
@@ -96,6 +96,8 @@ struct collector_set_ids;
 OVNACT(LOOKUP_ND_IP,  ovnact_lookup_mac_bind_ip) \
 OVNACT(PUT_DHCPV4_OPTS,   ovnact_put_opts)\
 OVNACT(PUT_DHCPV6_OPTS,   ovnact_put_opts)\
+OVNACT(DHCPV4_RELAY_REQ_CHK,  ovnact_dhcp_relay)  \
+OVNACT(DHCPV4_RELAY_RESP_CHK, ovnact_dhcp_relay)  \
 OVNACT(SET_QUEUE, ovnact_set_queue)   \
 OVNACT(DNS_LOOKUP,ovnact_result)  \
 OVNACT(LOG,   ovnact_log) \
@@ -396,6 +398,15 @@ struct ovnact_put_opts {
 size_t n_options;
 };
 
+/* OVNACT_DHCP_RELAY. */
+struct ovnact_dhcp_relay {
+struct ovnact ovnact;
+int family;
+struct expr_field dst;  /* 1-bit destination field. */
+ovs_be32 relay_ipv4;
+ovs_be32 server_ipv4;
+};
+
 /* Valid arguments to SET_QUEUE action.
  *
  * QDISC_MIN_QUEUE_ID is the default queue, so user-defined queues should
@@ -772,6 +783,22 @@ enum action_opcode {
 
 /* multicast group split buffer action. */
 ACTION_OPCODE_MG_SPLIT_BUF,
+
+/* "dhcp_relay_req_chk(relay_ip, server_ip)".
+ *
+ * Arguments follow the action_header, in this format:
+ *   - The 32-bit DHCP relay IP.
+ *   - The 32-bit DHCP server IP.
+ */
+ACTION_OPCODE_DHCP_RELAY_REQ_CHK,
+
+/* "dhcp_relay_resp_chk(relay_ip, server_ip)".
+ *
+ * Arguments follow the action_header, in this format:
+ *   - The 32-bit DHCP relay IP.
+ *   - The 32-bit DHCP server IP.
+ */
+ACTION_OPCODE_DHCP_RELAY_RESP_CHK,
 };
 
 /* Header. */
diff --git a/lib/actions.c b/lib/actions.c
index 71615fc53..d4f4ec2d0 100644
--- a/lib/actions.c
+++ b/lib/actions.c
@@ -2706,6 +2706,149 @@ ovnact_controller_event_free(struct 
ovnact_controller_event *event)
 free_gen_options(event->options, event->n_options);
 }
 
+static void
+format_DHCPV4_RELAY_REQ_CHK(const struct ovnact_dhcp_relay *dhcp_relay,
+struct ds *s)
+{
+expr_field_format(&dhcp_relay->dst, s);
+ds_put_format(s, " = dhcp_relay_req_chk("IP_FMT", "IP_FMT");",
+  IP_ARGS(dhcp_relay->relay_ipv4),
+  IP_ARGS(dhcp_relay->server_ipv4));
+}
+
+static void
+parse_dhcp_relay_req_chk(struct action_context *ctx,
+   const struct expr_field *dst,
+   struct ovnact_dhcp_relay *dhcp_relay)
+{
+/* Skip dhcp_relay_req_chk( */
+lexer_force_match(ctx->lexer, LEX_T_LPAREN);
+
+/* Validate that the destination is a 1-bit, modifiable field. */
+char *error = expr_type_check(dst, 1, true, ctx->scope);
+if (error) {
+lexer_error(ctx->lexer, "%s", error);
+free(error);
+return;
+}
+dhcp_relay->dst = *dst;
+
+/* Parse relay ip and server ip. */
+if (ctx->lexer->token.format == LEX_F_IPV4) {
+dhcp_relay->family = AF_INET;
+dhcp_relay->relay_ipv4 = ctx->lexer->token.value.ipv4;
+lexer_get(ctx->lexer);
+lexer_match(ctx->lexer, LEX_T_COMMA);
+if (ctx->lexer->token.format == LEX_F_IPV4) {
+dhcp_relay->family = AF_INET;
+dhcp_relay->server_ipv4 = ctx->lexer->token.value.ipv4;
+lexer_get(ctx->lexer);
+} else {
+lexer_syntax_error(ctx->lexer, "expecting IPv4 dhcp server ip");
+return;
+}
+} else {
+  lexer_syntax_error(ctx->lexer, "expecting IPv4 dhcp relay "
+  "and server ips");
+  return;
+}
+lexer_force_match(ctx->lexer, LEX_T_RPAREN);
+}
+
+static void
+encode_DHCPV4_RELAY_REQ_CHK(const struct ovnact_dhcp_relay *dhcp_relay,
+const struct ovnact_encode_pa

[ovs-dev] [PATCH OVN v4 0/4] DHCP Relay Agent support for overlay subnets.

2024-03-19 Thread Naveen Yerramneni
p-add ls0 vif0
 ovn-nbctl lsp-set-addresses vif0  #Only MAC address has to be 
specified when logical ports are created.
 ovn-nbctl lsp-add ls0 lrp1-attachment
 ovn-nbctl lsp-set-type lrp1-attachment router
 ovn-nbctl lsp-set-addresses lrp1-attachment
 ovn-nbctl lsp-set-options lrp1-attachment router-port=lrp1
 ovn-nbctl lr-add lr0
 ovn-nbctl lrp-add lr0 lrp1   #GATEWAY IP is set in 
GIADDR field when relaying the DHCP requests to server.
 ovn-nbctl lrp-add lr0 lrp-ext  
 ovn-nbctl ls-add ls-ext
 ovn-nbctl lsp-add ls-ext lrp-ext-attachment
 ovn-nbctl lsp-set-type lrp-ext-attachment router
 ovn-nbctl lsp-set-addresses lrp-ext-attachment
 ovn-nbctl lsp-set-options lrp-ext-attachment router-port=lrp-ext
 ovn-nbctl lsp-add ls-ext ln_port
 ovn-nbctl lsp-set-addresses ln_port unknown
 ovn-nbctl lsp-set-type ln_port localnet
 ovn-nbctl lsp-set-options ln_port network_name=physnet1
 # Enable DHCP Relay feature
 ovn-nbctl create DHCP_Relay name=dhcp_relay_test servers=
 ovn-nbctl set Logical_Router_port lrp1 dhcp_relay=
 ovn-nbctl set Logical_Switch ls0 
other_config:dhcp_relay_port=lrp1-attachment

Limitations:

  - All OVN features that need an IP address to be configured on the logical
port (like proxy ARP, etc.) will not be supported for overlay subnets on
which DHCP relay is enabled.

References:
--
  - rfc1541, rfc1542, rfc2131

V1:
  - First patch.

V2:
  - Addressed review comments from Numan.

V3:
  - Split the patch into series.
  - Addressed review comments from Numan.
  - Updated the match condition for DHCP Relay flows.

V4:
  - Fix sparse errors
  - Reorder patch series

Naveen Yerramneni (4):
  actions: DHCP Relay Agent support for overlay IPv4 subnets.
  controller: DHCP Relay Agent support for overlay IPv4 subnets.
  northd: DHCP Relay Agent support for overlay IPv4 subnets.
  tests: DHCP Relay Agent support for overlay IPv4 subnets.

 controller/pinctrl.c  | 596 +-
 include/ovn/actions.h |  27 ++
 lib/actions.c | 149 +++
 lib/ovn-l7.h  |   2 +
 northd/northd.c   | 265 ++-
 northd/northd.h   |  41 +--
 ovn-nb.ovsschema  |  19 +-
 ovn-nb.xml|  39 +++
 tests/atlocal.in  |   3 +
 tests/ovn-northd.at   |  38 +++
 tests/ovn.at  | 258 +-
 tests/system-ovn.at   | 148 +++
 utilities/ovn-trace.c |  67 +
 13 files changed, 1560 insertions(+), 92 deletions(-)

-- 
2.36.6



[ovs-dev] [PATCH OVN v3 0/4] DHCP Relay Agent support for overlay subnets.

2024-03-19 Thread Naveen Yerramneni
p-add ls0 vif0
 ovn-nbctl lsp-set-addresses vif0  #Only MAC address has to be 
specified when logical ports are created.
 ovn-nbctl lsp-add ls0 lrp1-attachment
 ovn-nbctl lsp-set-type lrp1-attachment router
 ovn-nbctl lsp-set-addresses lrp1-attachment
 ovn-nbctl lsp-set-options lrp1-attachment router-port=lrp1
 ovn-nbctl lr-add lr0
 ovn-nbctl lrp-add lr0 lrp1   #GATEWAY IP is set in 
GIADDR field when relaying the DHCP requests to server.
 ovn-nbctl lrp-add lr0 lrp-ext  
 ovn-nbctl ls-add ls-ext
 ovn-nbctl lsp-add ls-ext lrp-ext-attachment
 ovn-nbctl lsp-set-type lrp-ext-attachment router
 ovn-nbctl lsp-set-addresses lrp-ext-attachment
 ovn-nbctl lsp-set-options lrp-ext-attachment router-port=lrp-ext
 ovn-nbctl lsp-add ls-ext ln_port
 ovn-nbctl lsp-set-addresses ln_port unknown
 ovn-nbctl lsp-set-type ln_port localnet
 ovn-nbctl lsp-set-options ln_port network_name=physnet1
 # Enable DHCP Relay feature
 ovn-nbctl create DHCP_Relay name=dhcp_relay_test servers=
 ovn-nbctl set Logical_Router_port lrp1 dhcp_relay=
 ovn-nbctl set Logical_Switch ls0 
other_config:dhcp_relay_port=lrp1-attachment

Limitations:

  - All OVN features that need an IP address to be configured on the logical
port (like proxy ARP, etc.) will not be supported for overlay subnets on
which DHCP relay is enabled.

References:
--
  - rfc1541, rfc1542, rfc2131

V1:
  - First patch.

V2:
  - Addressed review comments from Numan.

V3:
  - Split the patch into series.
  - Addressed review comments from Numan.
  - Updated the match condition for DHCP Relay flows.

Naveen Yerramneni (4):
  controller: DHCP Relay Agent support for overlay IPv4 subnets.
  actions: DHCP Relay Agent support for overlay IPv4 subnets.
  northd: DHCP Relay Agent support for overlay IPv4 subnets.
  tests: DHCP Relay Agent support for overlay IPv4 subnets.

 controller/pinctrl.c  | 596 +-
 include/ovn/actions.h |  27 ++
 lib/actions.c | 149 +++
 lib/ovn-l7.h  |   2 +
 northd/northd.c   | 265 ++-
 northd/northd.h   |  41 +--
 ovn-nb.ovsschema  |  19 +-
 ovn-nb.xml|  39 +++
 tests/atlocal.in  |   3 +
 tests/ovn-northd.at   |  38 +++
 tests/ovn.at  | 293 +++--
 tests/system-ovn.at   | 148 +++
 utilities/ovn-trace.c |  67 +
 13 files changed, 1576 insertions(+), 111 deletions(-)

-- 
2.36.6



[ovs-dev] [PATCH OVN v3 3/4] northd: DHCP Relay Agent support for overlay IPv4 subnets.

2024-03-19 Thread Naveen Yerramneni
NB SCHEMA CHANGES
-
  1. New DHCP_Relay table
  "DHCP_Relay": {
"columns": {
"name": {"type": "string"},
"servers": {"type": {"key": "string",
   "min": 0,
   "max": 1}},
"external_ids": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}}},
"options": {"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}},
"isRoot": true},
  2. New column to Logical_Router_Port table
  "dhcp_relay": {"type": {"key": {"type": "uuid",
"refTable": "DHCP_Relay",
"refType": "strong"},
"min": 0,
"max": 1}},

NEW PIPELINE STAGES
---
Following stages are added for the DHCP relay feature.
Some of the flows are fitted into the existing pipeline stages.
  1. lr_in_dhcp_relay_req
   - This stage processes the DHCP request packets coming from DHCP clients.
   - DHCP request packets for which the dhcp_relay_req_chk action
     (which gets applied in the ip input stage) is successful are forwarded to
     the DHCP server.
   - DHCP request packets for which the dhcp_relay_req_chk action is
     unsuccessful get dropped.
  2. lr_in_dhcp_relay_resp_chk
   - This stage applies the dhcp_relay_resp_chk action for DHCP response
     packets coming from the DHCP server.
  3. lr_in_dhcp_relay_resp
   - DHCP response packets for which dhcp_relay_resp_chk is successful are
     forwarded to the DHCP clients.
   - DHCP response packets for which dhcp_relay_resp_chk is unsuccessful
     get dropped.

REGISTRY USAGE
---
  - reg9[7] : To store the result of dhcp_relay_req_chk action.
  - reg9[8] : To store the result of dhcp_relay_resp_chk action.
  - reg2 : To store the original dest ip for DHCP response packets.
   This is required to properly match the packets in
   lr_in_dhcp_relay_resp stage since dhcp_relay_resp_chk action
   changes the dest ip.

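For background on the request-path rewrite that the stages above implement: per RFC 1542, the first relay hop writes its own interface address into the BOOTP giaddr field before forwarding the request, and the server later uses giaddr to select the subnet and address its reply. The following is a minimal illustrative Python sketch of that rewrite, not OVN code; the function name and fixture values are invented, with the field layout taken from RFC 2131:

```python
import ipaddress
import struct

# BOOTP fixed-header prefix through giaddr (RFC 2131):
# op, htype, hlen, hops, xid, secs, flags, ciaddr, yiaddr, siaddr, giaddr
BOOTP_FMT = "!BBBBIHH4s4s4s4s"

def relay_request(pkt: bytes, relay_ip: str) -> bytes:
    """Return a copy of a client request with giaddr set to the relay's
    address, but only if giaddr is still zero (first relay hop), as
    RFC 1542 requires."""
    fields = list(struct.unpack_from(BOOTP_FMT, pkt))
    if fields[10] == b"\x00\x00\x00\x00":          # giaddr still unset
        fields[10] = ipaddress.IPv4Address(relay_ip).packed
    head = struct.pack(BOOTP_FMT, *fields)
    return head + pkt[struct.calcsize(BOOTP_FMT):]

# A zeroed DISCOVER header: op=1 (request), htype=1 (ethernet), hlen=6.
req = struct.pack(BOOTP_FMT, 1, 1, 6, 0, 0x1234, 0, 0,
                  b"\0" * 4, b"\0" * 4, b"\0" * 4, b"\0" * 4)
out = relay_request(req, "192.168.1.1")
```

In the series itself, the equivalent work is split between the dhcp_relay_req_chk action (which updates GIADDR in pinctrl) and the lr_in_dhcp_relay_req flow that rewrites ip4.src/ip4.dst toward the server.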
FLOWS
-

Following are the flows added when DHCP Relay is configured on one overlay
subnet. One additional flow is added in the ls_in_l2_lkup table for each VM
that is part of the subnet.

  1. table=27(ls_in_l2_lkup  ), priority=100  , match=(inport ==  
&& eth.src ==  && ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && 
udp.src == 68 && udp.dst == 67),
 action=(eth.dst=;outport=;next;/* DHCP_RELAY_REQ */)
  2. table=3 (lr_in_ip_input ), priority=110  , match=(inport ==  && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && ip.frag == 0 && udp.src == 
68 && udp.dst == 67),
 action=(reg9[7] = dhcp_relay_req_chk(, );next; /* 
DHCP_RELAY_REQ */)
  3. table=3 (lr_in_ip_input ), priority=110  , match=(ip4.src == 
 && ip4.dst ==  && udp.src == 67 && udp.dst == 67), 
action=(next;/* DHCP_RELAY_RESP */)
  4. table=4 (lr_in_dhcp_relay_req), priority=100  , match=(inport == "lrp1" && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[7]),
 action=(ip4.src=;ip4.dst=;udp.src=67;next; /* 
DHCP_RELAY_REQ */)
  5. table=4 (lr_in_dhcp_relay_req), priority=1, match=(inport ==  && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[7] == 0),
 action=(drop; /* DHCP_RELAY_REQ */)
  6. table=18(lr_in_dhcp_relay_resp_chk), priority=100  , match=(ip4.src == 
 && ip4.dst ==  && ip.frag == 0 && udp.src == 67 && udp.dst 
== 67),
 action=(reg2 = ip4.dst;reg9[8] = dhcp_relay_resp_chk(, 
);next;/* DHCP_RELAY_RESP */)
  7. table=19(lr_in_dhcp_relay_resp), priority=100  , match=(ip4.src == 
 && reg2 ==  && udp.src == 67 && udp.dst == 67 && reg9[8]),
 action=(ip4.src=;udp.dst=68;outport=;output; /* DHCP_RELAY_RESP 
*/)
  8. table=19(lr_in_dhcp_relay_resp), priority=1, match=(ip4.src == 
 && reg2 ==  && udp.src == 67 && udp.dst == 67 && reg9[8] 
== 0), action=(drop; /* DHCP_RELAY_RESP */)

Commands to enable the feature
--
  ovn-nbctl create DHCP_Relay name= servers=
  ovn-nbctl set Logical_Router_port  dhcp_relay=
  ovn-nbctl s

[ovs-dev] [PATCH OVN v3 4/4] tests: DHCP Relay Agent support for overlay IPv4 subnets.

2024-03-19 Thread Naveen Yerramneni
Added tests for DHCP Relay feature.

Signed-off-by: Naveen Yerramneni 
---
 tests/atlocal.in|   3 +
 tests/ovn-northd.at |  38 ++
 tests/ovn.at| 293 +---
 tests/system-ovn.at | 148 ++
 4 files changed, 462 insertions(+), 20 deletions(-)

diff --git a/tests/atlocal.in b/tests/atlocal.in
index 63d891b89..32d1c374e 100644
--- a/tests/atlocal.in
+++ b/tests/atlocal.in
@@ -187,6 +187,9 @@ fi
 # Set HAVE_DHCPD
 find_command dhcpd
 
+# Set HAVE_DHCLIENT
+find_command dhclient
+
 # Set HAVE_BFDD_BEACON
 find_command bfdd-beacon
 
diff --git a/tests/ovn-northd.at b/tests/ovn-northd.at
index c189d..042b26c41 100644
--- a/tests/ovn-northd.at
+++ b/tests/ovn-northd.at
@@ -12138,6 +12138,44 @@ check_row_count nb:QoS 0
 AT_CLEANUP
 ])
 
+OVN_FOR_EACH_NORTHD_NO_HV([
+AT_SETUP([check DHCP RELAY])
+ovn_start NORTHD_TYPE
+
+check ovn-nbctl ls-add ls0
+check ovn-nbctl lsp-add ls0 ls0-port1
+check ovn-nbctl lsp-set-addresses ls0-port1 02:00:00:00:00:10
+check ovn-nbctl lr-add lr0
+check ovn-nbctl lrp-add lr0 lrp1 02:00:00:00:00:01 192.168.1.1/24
+check ovn-nbctl lsp-add ls0 lrp1-attachment
+check ovn-nbctl lsp-set-type lrp1-attachment router
+check ovn-nbctl lsp-set-addresses lrp1-attachment 00:00:00:00:ff:02
+check ovn-nbctl lsp-set-options lrp1-attachment router-port=lrp1
+check ovn-nbctl lrp-add lr0 lrp-ext 02:00:00:00:00:02 192.168.2.1/24
+
+dhcp_relay=$(ovn-nbctl create DHCP_Relay servers=172.16.1.1)
+check ovn-nbctl set Logical_Router_port lrp1 dhcp_relay=$dhcp_relay
+check ovn-nbctl set Logical_Switch ls0 
other_config:dhcp_relay_port=lrp1-attachment
+
+check ovn-nbctl --wait=sb sync
+
+ovn-sbctl lflow-list > lflows
+AT_CAPTURE_FILE([lflows])
+
+AT_CHECK([grep -e "DHCP_RELAY_" lflows | sed 's/table=../table=??/'], [0], [dnl
+  table=??(lr_in_ip_input ), priority=110  , match=(inport == "lrp1" && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && ip.frag == 0 && udp.src == 
68 && udp.dst == 67), action=(reg9[[7]] = dhcp_relay_req_chk(192.168.1.1, 
172.16.1.1);next; /* DHCP_RELAY_REQ */)
+  table=??(lr_in_ip_input ), priority=110  , match=(ip4.src == 172.16.1.1 
&& ip4.dst == 192.168.1.1 && ip.frag == 0 && udp.src == 67 && udp.dst == 67), 
action=(next;/* DHCP_RELAY_RESP */)
+  table=??(lr_in_dhcp_relay_req), priority=100  , match=(inport == "lrp1" && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[[7]]), 
action=(ip4.src=192.168.1.1;ip4.dst=172.16.1.1;udp.src=67;next; /* 
DHCP_RELAY_REQ */)
+  table=??(lr_in_dhcp_relay_req), priority=1, match=(inport == "lrp1" && 
ip4.src == 0.0.0.0 && ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 
67 && reg9[[7]] == 0), action=(drop; /* DHCP_RELAY_REQ */)
+  table=??(lr_in_dhcp_relay_resp_chk), priority=100  , match=(ip4.src == 
172.16.1.1 && ip4.dst == 192.168.1.1 && udp.src == 67 && udp.dst == 67), 
action=(reg2 = ip4.dst;reg9[[8]] = dhcp_relay_resp_chk(192.168.1.1, 
172.16.1.1);next;/* DHCP_RELAY_RESP */)
+  table=??(lr_in_dhcp_relay_resp), priority=100  , match=(ip4.src == 
172.16.1.1 && reg2 == 192.168.1.1 && udp.src == 67 && udp.dst == 67 && 
reg9[[8]]), action=(ip4.src=192.168.1.1;udp.dst=68;outport="lrp1";output; /* 
DHCP_RELAY_RESP */)
+  table=??(lr_in_dhcp_relay_resp), priority=1, match=(ip4.src == 
172.16.1.1 && reg2 == 192.168.1.1 && udp.src == 67 && udp.dst == 67 && 
reg9[[8]] == 0), action=(drop; /* DHCP_RELAY_RESP */)
+  table=??(ls_in_l2_lkup  ), priority=100  , match=(inport == "ls0-port1" 
&& eth.src == 02:00:00:00:00:10 && ip4.src == 0.0.0.0 && ip4.dst == 
255.255.255.255 && udp.src == 68 && udp.dst == 67), 
action=(eth.dst=02:00:00:00:00:01;outport="lrp1-attachment";next;/* 
DHCP_RELAY_REQ */)
+])
+
+AT_CLEANUP
+])
+
 AT_SETUP([NB_Global and SB_Global incremental processing])
 
 ovn_start
diff --git a/tests/ovn.at b/tests/ovn.at
index 902dd3793..109e19550 100644
--- a/tests/ovn.at
+++ b/tests/ovn.at
@@ -1661,6 +1661,40 @@ reg1[0] = put_dhcp_opts(offerip=1.2.3.4, 
domain_name=1.2.3.4);
 reg1[0] = put_dhcp_opts(offerip=1.2.3.4, domain_search_list=1.2.3.4);
 DHCPv4 option domain_search_list requires string value.
 
+#dhcp_relay_req_chk
+reg9[7] = dhcp_relay_req_chk(192.168.1.1, 172.16.1.1);
+encodes as 
controller(userdata=00.00.00.1c.00.00.00.00.80.01.08.08.00.00.00.07.c0.a8.01.01.ac.10.01.01,pause)
+
+reg9[7] = dhcp_relay_req_chk(192.168.1.1,172.16.1.1);
+formats as reg9[7] = dhcp_relay_req_chk(192.168.1.1, 172.16.1.1);
+encodes as 
controller(userdata=00.00.00.1c.00.00.00.00.80.01.08.08.00.00.00.07.c0.a8.01.01.ac.10.01.01,pause)
+
+r

[ovs-dev] [PATCH OVN v3 1/4] controller: DHCP Relay Agent support for overlay IPv4 subnets.

2024-03-19 Thread Naveen Yerramneni
Added changes in pinctrl to process DHCP Relay opcodes:
  - ACTION_OPCODE_DHCP_RELAY_REQ_CHK: For request packets
  - ACTION_OPCODE_DHCP_RELAY_RESP_CHK: For response packet

Signed-off-by: Naveen Yerramneni 
---
 controller/pinctrl.c | 596 ++-
 lib/ovn-l7.h |   2 +
 2 files changed, 529 insertions(+), 69 deletions(-)

diff --git a/controller/pinctrl.c b/controller/pinctrl.c
index 98b29de9f..f70d08796 100644
--- a/controller/pinctrl.c
+++ b/controller/pinctrl.c
@@ -1909,6 +1909,514 @@ is_dhcp_flags_broadcast(ovs_be16 flags)
 return flags & htons(DHCP_BROADCAST_FLAG);
 }
 
+static const char *dhcp_msg_str[] = {
+[0] = "INVALID",
+[DHCP_MSG_DISCOVER] = "DISCOVER",
+[DHCP_MSG_OFFER] = "OFFER",
+[DHCP_MSG_REQUEST] = "REQUEST",
+[OVN_DHCP_MSG_DECLINE] = "DECLINE",
+[DHCP_MSG_ACK] = "ACK",
+[DHCP_MSG_NAK] = "NAK",
+[OVN_DHCP_MSG_RELEASE] = "RELEASE",
+[OVN_DHCP_MSG_INFORM] = "INFORM"
+};
+
+static bool
+dhcp_relay_is_msg_type_supported(uint8_t msg_type)
+{
+return (msg_type >= DHCP_MSG_DISCOVER && msg_type <= OVN_DHCP_MSG_RELEASE);
+}
+
+static const char *dhcp_msg_str_get(uint8_t msg_type)
+{
+if (!dhcp_relay_is_msg_type_supported(msg_type)) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "Unknown DHCP msg type: %u", msg_type);
+return "UNKNOWN";
+}
+return dhcp_msg_str[msg_type];
+}
+
+static const struct dhcp_header *
+dhcp_get_hdr_from_pkt(struct dp_packet *pkt_in, const char **in_dhcp_pptr,
+  const char *end)
+{
+/* Validate the DHCP request packet.
+ * Format of the DHCP packet is
+ * ---
+ *| UDP HEADER | DHCP HEADER | 4 Byte DHCP Cookie | DHCP OPTIONS(var len) |
+ * ---
+ */
+
+*in_dhcp_pptr = dp_packet_get_udp_payload(pkt_in);
+if (*in_dhcp_pptr == NULL) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Invalid or incomplete DHCP packet received");
+return NULL;
+}
+
+const struct dhcp_header *dhcp_hdr
+= (const struct dhcp_header *) *in_dhcp_pptr;
+(*in_dhcp_pptr) += sizeof *dhcp_hdr;
+if (*in_dhcp_pptr > end) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Invalid or incomplete DHCP packet received, "
+ "bad data length");
+return NULL;
+}
+
+if (dhcp_hdr->htype != 0x1) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Packet is received with "
+"unsupported hardware type");
+return NULL;
+}
+
+if (dhcp_hdr->hlen != 0x6) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Packet is received with "
+"unsupported hardware length");
+return NULL;
+}
+
+/* DHCP options follow the DHCP header. The first 4 bytes of the DHCP
+ * options is the DHCP magic cookie followed by the actual DHCP options.
+ */
+ovs_be32 magic_cookie = htonl(DHCP_MAGIC_COOKIE);
+if ((*in_dhcp_pptr) + sizeof magic_cookie > end ||
+get_unaligned_be32((const void *) (*in_dhcp_pptr)) != magic_cookie) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Magic cookie not present in the DHCP packet");
+return NULL;
+}
+
+(*in_dhcp_pptr) += sizeof magic_cookie;
+
+return dhcp_hdr;
+}
+
+static void
+dhcp_parse_options(const char **in_dhcp_pptr, const char *end,
+  const uint8_t **dhcp_msg_type_pptr, ovs_be32 *request_ip_ptr,
+  bool *ipxe_req_ptr, ovs_be32 *server_id_ptr,
+  ovs_be32 *netmask_ptr, ovs_be32 *router_ip_ptr)
+{
+while ((*in_dhcp_pptr) < end) {
+const struct dhcp_opt_header *in_dhcp_opt =
+(const struct dhcp_opt_header *) *in_dhcp_pptr;
+if (in_dhcp_opt->code == DHCP_OPT_END) {
+break;
+}
+if (in_dhcp_opt->code == DHCP_OPT_PAD) {
+(*in_dhcp_pptr) += 1;
+continue;
+}
+(*in_dhcp_pptr) += sizeof *in_dhcp_opt;
+if ((*in_dhcp_pptr) > end) {
+break;
+}
+(*in_dhcp_pptr) += in_dhcp_opt->len;
+if ((*in_dhcp_pptr) > end) {
+break;
+}
+
+switch (in_dhcp_opt->code) {
+case DHCP_OPT_MSG_TYPE:
+if (dhcp_msg_type_pptr && in_dhcp_opt->len == 1) {
+*dhcp_msg_type_pptr = DHCP_OPT_PAYLOAD(in_dhc

[ovs-dev] [PATCH OVN v3 2/4] actions: DHCP Relay Agent support for overlay IPv4 subnets.

2024-03-19 Thread Naveen Yerramneni
NEW OVN ACTIONS
---
  1. dhcp_relay_req_chk(, )
   - This action executes on the source node on which the DHCP request
     originated.
   - This action relays the DHCP request coming from the client to the server.
     Relay-ip is used to update GIADDR in the DHCP header.
  2. dhcp_relay_resp_chk(, )
   - This action executes on the first node (RC node) which processes
     the DHCP response from the server.
   - This action updates the destination MAC and destination IP so that
     the response can be forwarded to the appropriate node from which the
     request originated.
   - Relay-ip and server-ip are used to validate GIADDR and SERVER ID in the
     DHCP payload.
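To make the response-side validation concrete, the check described for dhcp_relay_resp_chk amounts to: accept a server reply only if its giaddr matches this relay and its server-identifier option (code 54) matches the configured server. The sketch below is hypothetical and not the OVN implementation; the function name, constants, and simplified option walk are assumptions for illustration:

```python
import ipaddress

DHCP_OPT_SERVER_ID = 54
DHCP_OPT_PAD, DHCP_OPT_END = 0, 255

def response_is_valid(pkt: bytes, relay_ip: str, server_ip: str) -> bool:
    """Accept a server response only if its giaddr matches this relay and
    its server-identifier option matches the configured server."""
    giaddr = pkt[24:28]              # giaddr offset in the BOOTP header
    if giaddr != ipaddress.IPv4Address(relay_ip).packed:
        return False
    opts = pkt[240:]                 # 236-byte header + 4-byte magic cookie
    i = 0
    while i + 1 < len(opts):         # walk the TLV-encoded options
        code = opts[i]
        if code == DHCP_OPT_END:
            break
        if code == DHCP_OPT_PAD:     # PAD is a single byte, no length
            i += 1
            continue
        length = opts[i + 1]
        if code == DHCP_OPT_SERVER_ID and length == 4:
            return opts[i + 2:i + 6] == ipaddress.IPv4Address(server_ip).packed
        i += 2 + length
    return False
```

In the actual patch this logic runs in pinctrl on the controller node, driven by the ACTION_OPCODE_DHCP_RELAY_RESP_CHK opcode, and on success the destination MAC/IP are rewritten toward the originating client.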

Signed-off-by: Naveen Yerramneni 
---
 include/ovn/actions.h |  27 
 lib/actions.c | 149 ++
 2 files changed, 176 insertions(+)

diff --git a/include/ovn/actions.h b/include/ovn/actions.h
index 49fb96fc6..a8e4393ed 100644
--- a/include/ovn/actions.h
+++ b/include/ovn/actions.h
@@ -95,6 +95,8 @@ struct collector_set_ids;
 OVNACT(LOOKUP_ND_IP,  ovnact_lookup_mac_bind_ip) \
 OVNACT(PUT_DHCPV4_OPTS,   ovnact_put_opts)\
 OVNACT(PUT_DHCPV6_OPTS,   ovnact_put_opts)\
+OVNACT(DHCPV4_RELAY_REQ_CHK,  ovnact_dhcp_relay)  \
+OVNACT(DHCPV4_RELAY_RESP_CHK, ovnact_dhcp_relay)  \
 OVNACT(SET_QUEUE, ovnact_set_queue)   \
 OVNACT(DNS_LOOKUP,ovnact_result)  \
 OVNACT(LOG,   ovnact_log) \
@@ -395,6 +397,15 @@ struct ovnact_put_opts {
 size_t n_options;
 };
 
+/* OVNACT_DHCP_RELAY. */
+struct ovnact_dhcp_relay {
+struct ovnact ovnact;
+int family;
+struct expr_field dst;  /* 1-bit destination field. */
+ovs_be32 relay_ipv4;
+ovs_be32 server_ipv4;
+};
+
 /* Valid arguments to SET_QUEUE action.
  *
  * QDISC_MIN_QUEUE_ID is the default queue, so user-defined queues should
@@ -758,6 +769,22 @@ enum action_opcode {
 
 /* multicast group split buffer action. */
 ACTION_OPCODE_MG_SPLIT_BUF,
+
+/* "dhcp_relay_req_chk(relay_ip, server_ip)".
+ *
+ * Arguments follow the action_header, in this format:
+ *   - The 32-bit DHCP relay IP.
+ *   - The 32-bit DHCP server IP.
+ */
+ACTION_OPCODE_DHCP_RELAY_REQ_CHK,
+
+/* "dhcp_relay_resp_chk(relay_ip, server_ip)".
+ *
+ * Arguments follow the action_header, in this format:
+ *   - The 32-bit DHCP relay IP.
+ *   - The 32-bit DHCP server IP.
+ */
+ACTION_OPCODE_DHCP_RELAY_RESP_CHK,
 };
 
 /* Header. */
diff --git a/lib/actions.c b/lib/actions.c
index a45874dfb..d55b5153f 100644
--- a/lib/actions.c
+++ b/lib/actions.c
@@ -2680,6 +2680,149 @@ ovnact_controller_event_free(struct 
ovnact_controller_event *event)
 free_gen_options(event->options, event->n_options);
 }
 
+static void
+format_DHCPV4_RELAY_REQ_CHK(const struct ovnact_dhcp_relay *dhcp_relay,
+struct ds *s)
+{
+expr_field_format(&dhcp_relay->dst, s);
+ds_put_format(s, " = dhcp_relay_req_chk("IP_FMT", "IP_FMT");",
+  IP_ARGS(dhcp_relay->relay_ipv4),
+  IP_ARGS(dhcp_relay->server_ipv4));
+}
+
+static void
+parse_dhcp_relay_req_chk(struct action_context *ctx,
+   const struct expr_field *dst,
+   struct ovnact_dhcp_relay *dhcp_relay)
+{
+/* Skip dhcp_relay_req_chk( */
+lexer_force_match(ctx->lexer, LEX_T_LPAREN);
+
+/* Validate that the destination is a 1-bit, modifiable field. */
+char *error = expr_type_check(dst, 1, true, ctx->scope);
+if (error) {
+lexer_error(ctx->lexer, "%s", error);
+free(error);
+return;
+}
+dhcp_relay->dst = *dst;
+
+/* Parse relay ip and server ip. */
+if (ctx->lexer->token.format == LEX_F_IPV4) {
+dhcp_relay->family = AF_INET;
+dhcp_relay->relay_ipv4 = ctx->lexer->token.value.ipv4;
+lexer_get(ctx->lexer);
+lexer_match(ctx->lexer, LEX_T_COMMA);
+if (ctx->lexer->token.format == LEX_F_IPV4) {
+dhcp_relay->family = AF_INET;
+dhcp_relay->server_ipv4 = ctx->lexer->token.value.ipv4;
+lexer_get(ctx->lexer);
+} else {
+lexer_syntax_error(ctx->lexer, "expecting IPv4 dhcp server ip");
+return;
+}
+} else {
+  lexer_syntax_error(ctx->lexer, "expecting IPv4 dhcp relay "
+  "and server ips");
+  return;
+}
+lexer_force_match(ctx->lexer, LEX_T_RPAREN);
+}
+
+static void
+encode_DHCPV4_RELAY_REQ_CHK(const struct ovnact_dhcp_relay *dhcp_relay,
+const struct ovnact_encode_params *ep,
+struct ofpbuf *ofpacts)
+{
+struct mf_subfie

[meta-intel] [meta-intel-qat][PATCH] layer.conf: update LAYERSERIES_COMPAT to use scarthgap

2024-03-19 Thread Naveen Saini
Remove support for old releases.

Signed-off-by: Naveen Saini 
---
 conf/layer.conf | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/conf/layer.conf b/conf/layer.conf
index a5068dd..ab1683f 100644
--- a/conf/layer.conf
+++ b/conf/layer.conf
@@ -14,7 +14,7 @@ LAYERDEPENDS_intel-qat = "core"
 # This should only be incremented on significant changes that will
 # cause compatibility issues with other layers
 LAYERVERSION_intel-qat = "1"
-LAYERSERIES_COMPAT_intel-qat = "dunfell kirkstone mickledore nanbield"
+LAYERSERIES_COMPAT_intel-qat = "kirkstone scarthgap"
 
 
 require ${LAYERDIR}/conf/include/maintainers.inc
-- 
2.34.1


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#8263): 
https://lists.yoctoproject.org/g/meta-intel/message/8263
Mute This Topic: https://lists.yoctoproject.org/mt/105020115/21656
Group Owner: meta-intel+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/meta-intel/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] [meta-zephyr][PATCH] layer.conf: update LAYERSERIES_COMPAT to use scarthgap

2024-03-17 Thread Naveen Saini
Drop compatibility to nanbield.

Signed-off-by: Naveen Saini 
---
 meta-zephyr-bsp/conf/layer.conf  | 2 +-
 meta-zephyr-core/conf/layer.conf | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/meta-zephyr-bsp/conf/layer.conf b/meta-zephyr-bsp/conf/layer.conf
index 1edcb0b..28577b2 100644
--- a/meta-zephyr-bsp/conf/layer.conf
+++ b/meta-zephyr-bsp/conf/layer.conf
@@ -15,4 +15,4 @@ LAYERVERSION_zephyrbsp = "1"
 
 LAYERDEPENDS_zephyrbsp = "zephyrcore core meta-python"
 
-LAYERSERIES_COMPAT_zephyrbsp = "kirkstone nanbield"
+LAYERSERIES_COMPAT_zephyrbsp = "kirkstone scarthgap"
diff --git a/meta-zephyr-core/conf/layer.conf b/meta-zephyr-core/conf/layer.conf
index 06e942e..e1bb263 100644
--- a/meta-zephyr-core/conf/layer.conf
+++ b/meta-zephyr-core/conf/layer.conf
@@ -15,7 +15,7 @@ LAYERVERSION_zephyrcore = "1"
 
 LAYERDEPENDS_zephyrcore = "core meta-python"
 
-LAYERSERIES_COMPAT_zephyrcore = "kirkstone nanbield"
+LAYERSERIES_COMPAT_zephyrcore = "kirkstone scarthgap"
 
 PYTHON3_NATIVE_SITEPACKAGES_DIR = 
"${libdir_native}/${PYTHON3_DIR}/site-packages"
 
-- 
2.37.3


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#62776): https://lists.yoctoproject.org/g/yocto/message/62776
Mute This Topic: https://lists.yoctoproject.org/mt/104996550/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [ovs-dev] [PATCH OVN] Add support to make fdb table local to the chassis.

2024-03-14 Thread Naveen Yerramneni


> On 14-Mar-2024, at 9:07 PM, Dumitru Ceara  wrote:
> 
> On 3/14/24 15:21, Naveen Yerramneni wrote:
>> 
>> 
>>> On 08-Mar-2024, at 2:37 PM, Ales Musil  wrote:
>>> 
>>> 
>>> 
>>> On Wed, Mar 6, 2024 at 8:24 PM Naveen Yerramneni 
>>>  wrote:
>>> 
>>> 
>>>> On 18-Dec-2023, at 8:53 PM, Dumitru Ceara  wrote:
>>>> 
>>>> On 12/18/23 16:17, Naveen Yerramneni wrote:
>>>>> 
>>>>> 
>>>>>> On 18-Dec-2023, at 7:26 PM, Dumitru Ceara  wrote:
>>>>>> 
>>>>>> On 11/30/23 16:32, Dumitru Ceara wrote:
>>>>>>> On 11/30/23 15:54, Naveen Yerramneni wrote:
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> On 30-Nov-2023, at 6:06 PM, Dumitru Ceara  wrote:
>>>>>>>>> 
>>>>>>>>> On 11/30/23 09:45, Naveen Yerramneni wrote:
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>>> On 29-Nov-2023, at 2:24 PM, Dumitru Ceara  wrote:
>>>>>>>>>>> 
>>>>>>>>>>> On 11/29/23 07:45, naveen.yerramneni wrote:
>>>>>>>>>>>> This functionality can be enabled at the logical switch level:
>>>>>>>>>>>> - "other_config:fdb_local" can be used to enable/disable this
>>>>>>>>>>>> functionality, it is disabled by default.
>>>>>>>>>>>> - "other_config:fdb_local_idle_timeout" specifies idle timeout
>>>>>>>>>>>> for locally learned fdb flows, default timeout is 300 secs.
>>>>>>>>>>>> 
>>>>>>>>>>>> If enabled, below lflow is added for each port that has unknown 
>>>>>>>>>>>> addr set.
>>>>>>>>>>>> - table=2 (ls_in_lookup_fdb), priority=100, match=(inport == 
>>>>>>>>>>>> ),
>>>>>>>>>>>> action=(commit_fdb_local(timeout=); next;
>>>>>>>>>>>> 
>>>>>>>>>>>> New OVN action: "commit_fdb_local". This sets following OVS action.
>>>>>>>>>>>> - 
>>>>>>>>>>>> learn(table=71,idle_timeout=,delete_learned,OXM_OF_METADATA[],
>>>>>>>>>>>>  
>>>>>>>>>>>> NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:NXM_NX_REG14[]->NXM_NX_REG15[])
>>>>>>>>>>>> 
>>>>>>>>>>>> This is useful when OVN is managing VLAN network that has multiple 
>>>>>>>>>>>> ports
>>>>>>>>>>>> set with unknown addr and localnet_learn_fdb is enabled. With this 
>>>>>>>>>>>> config,
>>>>>>>>>>>> if there is east-west traffic flowing between VMs part of same VLAN
>>>>>>>>>>>> deployed on different hypervisors then, MAC addrs of the source and
>>>>>>>>>>>> destination VMs keeps flapping between VM port and localnet port 
>>>>>>>>>>>> in 
>>>>>>>>>>>> Southbound FDB table. Enabling fdb_local config makes fdb table 
>>>>>>>>>>>> local to
>>>>>>>>>>>> the chassis and avoids MAC flapping.
>>>>>>>>>>>> 
>>>>>>>>>>>> Signed-off-by: Naveen Yerramneni 
>>>>>>>>>>>> ---
>>>>>>>>>>> 
>>>>>>>>>>> Hi Naveen,
>>>>>>>>>>> 
>>>>>>>>>>> Thanks a lot for the patch!
>>>>>>>>>>> 
>>>>>>>>>>> Just a note, we already have a fix for the east-west traffic that 
>>>>>>>>>>> causes
>>>>>>>>>>> FDB flapping when localnet is used:
>>>>>>>>>>> 
>>>>>>>>>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_ovn-2Dorg_ovn_commit_2acf91e9628e9481c48e4a6cec8ad5159fdd6d2e=DwICaQ=s883GpUCOChKOHiocYtGcg=2PQjSDR7A28z1kXE1ptSm6X36oL_nCq1XxeEt7FkLmA=kPuq992rikXYk63APGxlIpfqY3lPpreN9f4ha9pZ

Re: [ovs-dev] [PATCH OVN] Add support to make fdb table local to the chassis.

2024-03-14 Thread Naveen Yerramneni


> On 08-Mar-2024, at 2:37 PM, Ales Musil  wrote:
> 
> 
> 
> On Wed, Mar 6, 2024 at 8:24 PM Naveen Yerramneni 
>  wrote:
> 
> 
> > On 18-Dec-2023, at 8:53 PM, Dumitru Ceara  wrote:
> > 
> > On 12/18/23 16:17, Naveen Yerramneni wrote:
> >> 
> >> 
> >>> On 18-Dec-2023, at 7:26 PM, Dumitru Ceara  wrote:
> >>> 
> >>> On 11/30/23 16:32, Dumitru Ceara wrote:
> >>>> On 11/30/23 15:54, Naveen Yerramneni wrote:
> >>>>> 
> >>>>> 
> >>>>>> On 30-Nov-2023, at 6:06 PM, Dumitru Ceara  wrote:
> >>>>>> 
> >>>>>> On 11/30/23 09:45, Naveen Yerramneni wrote:
> >>>>>>> 
> >>>>>>> 
> >>>>>>>> On 29-Nov-2023, at 2:24 PM, Dumitru Ceara  wrote:
> >>>>>>>> 
> >>>>>>>> On 11/29/23 07:45, naveen.yerramneni wrote:
> >>>>>>>>> This functionality can be enabled at the logical switch level:
> >>>>>>>>> - "other_config:fdb_local" can be used to enable/disable this
> >>>>>>>>> functionality, it is disabled by default.
> >>>>>>>>> - "other_config:fdb_local_idle_timeout" specifies idle timeout
> >>>>>>>>> for locally learned fdb flows, default timeout is 300 secs.
> >>>>>>>>> 
> >>>>>>>>> If enabled, below lflow is added for each port that has unknown 
> >>>>>>>>> addr set.
> >>>>>>>>> - table=2 (ls_in_lookup_fdb), priority=100, match=(inport == 
> >>>>>>>>> ),
> >>>>>>>>> action=(commit_fdb_local(timeout=); next;
> >>>>>>>>> 
> >>>>>>>>> New OVN action: "commit_fdb_local". This sets following OVS action.
> >>>>>>>>> - 
> >>>>>>>>> learn(table=71,idle_timeout=,delete_learned,OXM_OF_METADATA[],
> >>>>>>>>>   
> >>>>>>>>> NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:NXM_NX_REG14[]->NXM_NX_REG15[])
> >>>>>>>>> 
> >>>>>>>>> This is useful when OVN is managing VLAN network that has multiple 
> >>>>>>>>> ports
> >>>>>>>>> set with unknown addr and localnet_learn_fdb is enabled. With this 
> >>>>>>>>> config,
> >>>>>>>>> if there is east-west traffic flowing between VMs part of same VLAN
> >>>>>>>>> deployed on different hypervisors then, MAC addrs of the source and
> >>>>>>>>> destination VMs keeps flapping between VM port and localnet port in 
> >>>>>>>>> Southbound FDB table. Enabling fdb_local config makes fdb table 
> >>>>>>>>> local to
> >>>>>>>>> the chassis and avoids MAC flapping.
> >>>>>>>>> 
> >>>>>>>>> Signed-off-by: Naveen Yerramneni 
> >>>>>>>>> ---
> >>>>>>>> 
> >>>>>>>> Hi Naveen,
> >>>>>>>> 
> >>>>>>>> Thanks a lot for the patch!
> >>>>>>>> 
> >>>>>>>> Just a note, we already have a fix for the east-west traffic that 
> >>>>>>>> causes
> >>>>>>>> FDB flapping when localnet is used:
> >>>>>>>> 
> >>>>>>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_ovn-2Dorg_ovn_commit_2acf91e9628e9481c48e4a6cec8ad5159fdd6d2e=DwICaQ=s883GpUCOChKOHiocYtGcg=2PQjSDR7A28z1kXE1ptSm6X36oL_nCq1XxeEt7FkLmA=kPuq992rikXYk63APGxlIpfqY3lPpreN9f4ha9pZKpodnVgE9KfjEUNozpPUFzUu=LP9_zs2Rj34vMx20ntbu-A3taXqKMJNVH2TLQyOXCh0=
> >>>>>>>>  
> >>>>>>>> 
> >>>>>>>> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_ovn-2Dorg_ovn_commit_f3a14907fe2b1ecdcfddfbed595cd097b6efbe14=DwICaQ=s883GpUCOChKOHiocYtGcg=2PQjSDR7A28z1kXE1ptSm6X36oL_nCq1XxeEt7FkLmA=kPuq992rikXYk63APGxlIpfqY3lPpreN9f4ha9pZKpodnVgE9KfjEUNozpPUFzUu=gsUGtjyf9gSOr1LkcCH0O6MB1_tjXi9fuTgwEFgbRx8=
> >>>>>>>>  
> >>>>>>>> 
> >>>>>>>> In general, however, I think it's a very goo

[meta-intel] [PATCH v2] layer.conf: update LAYERSERIES_COMPAT to use scarthgap

2024-03-07 Thread Naveen Saini
Drop compatibility to nanbield.

Signed-off-by: Naveen Saini 
---
 conf/layer.conf | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/conf/layer.conf b/conf/layer.conf
index c90687db..97dfb897 100644
--- a/conf/layer.conf
+++ b/conf/layer.conf
@@ -19,7 +19,7 @@ LAYERRECOMMENDS_intel = "dpdk"
 # This should only be incremented on significant changes that will
 # cause compatibility issues with other layers
 LAYERVERSION_intel = "5"
-LAYERSERIES_COMPAT_intel = "kirkstone nanbield"
+LAYERSERIES_COMPAT_intel = "kirkstone scarthgap"
 
 BBFILES_DYNAMIC += " \
 clang-layer:${LAYERDIR}/dynamic-layers/clang-layer/*/*/*.bb \
-- 
2.34.1


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#8252): 
https://lists.yoctoproject.org/g/meta-intel/message/8252
Mute This Topic: https://lists.yoctoproject.org/mt/104804633/21656
Group Owner: meta-intel+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/meta-intel/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[meta-intel] [PATCH] layer.conf: update LAYERSERIES_COMPAT to use scarthgap

2024-03-07 Thread Naveen Saini
Signed-off-by: Naveen Saini 
---
 conf/layer.conf | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/conf/layer.conf b/conf/layer.conf
index c90687db..a464b947 100644
--- a/conf/layer.conf
+++ b/conf/layer.conf
@@ -19,7 +19,7 @@ LAYERRECOMMENDS_intel = "dpdk"
 # This should only be incremented on significant changes that will
 # cause compatibility issues with other layers
 LAYERVERSION_intel = "5"
-LAYERSERIES_COMPAT_intel = "kirkstone nanbield"
+LAYERSERIES_COMPAT_intel = "kirkstone nanbield scarthgap"
 
 BBFILES_DYNAMIC += " \
 clang-layer:${LAYERDIR}/dynamic-layers/clang-layer/*/*/*.bb \
-- 
2.37.3


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#8251): 
https://lists.yoctoproject.org/g/meta-intel/message/8251
Mute This Topic: https://lists.yoctoproject.org/mt/104802674/21656
Group Owner: meta-intel+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/meta-intel/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[jira] [Updated] (IMPALA-12827) Precondition was hit in MutableValidReaderWriteIdList

2024-03-07 Thread Naveen Gangam (Jira)


 [ 
https://issues.apache.org/jira/browse/IMPALA-12827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated IMPALA-12827:
---
Component/s: Catalog

> Precondition was hit in MutableValidReaderWriteIdList
> -
>
> Key: IMPALA-12827
> URL: https://issues.apache.org/jira/browse/IMPALA-12827
> Project: IMPALA
>  Issue Type: Bug
>  Components: Catalog
>Reporter: Csaba Ringhofer
>Assignee: Quanlong Huang
>Priority: Critical
>  Labels: ACID, catalog
> Fix For: Impala 4.4.0
>
>
> The callstack below led to stopping metastore event processor during an abort 
> transaction event:
> {code}
> MetastoreEventsProcessor.java:899] Unexpected exception received while 
> processing event
> Java exception follows:
> java.lang.IllegalStateException
>   at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:486)
>   at 
> org.apache.impala.hive.common.MutableValidReaderWriteIdList.addAbortedWriteIds(MutableValidReaderWriteIdList.java:274)
>   at org.apache.impala.catalog.HdfsTable.addWriteIds(HdfsTable.java:3101)
>   at 
> org.apache.impala.catalog.CatalogServiceCatalog.addWriteIdsToTable(CatalogServiceCatalog.java:3885)
>   at 
> org.apache.impala.catalog.events.MetastoreEvents$AbortTxnEvent.addAbortedWriteIdsToTables(MetastoreEvents.java:2775)
>   at 
> org.apache.impala.catalog.events.MetastoreEvents$AbortTxnEvent.process(MetastoreEvents.java:2761)
>   at 
> org.apache.impala.catalog.events.MetastoreEvents$MetastoreEvent.processIfEnabled(MetastoreEvents.java:522)
>   at 
> org.apache.impala.catalog.events.MetastoreEventsProcessor.processEvents(MetastoreEventsProcessor.java:1052)
>   at 
> org.apache.impala.catalog.events.MetastoreEventsProcessor.processEvents(MetastoreEventsProcessor.java:881)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:750)
> {code}
> Precondition: 
> https://github.com/apache/impala/blob/2f14fd29c0b47fc2c170a7f0eb1cecaf6b9704f4/fe/src/main/java/org/apache/impala/hive/common/MutableValidReaderWriteIdList.java#L274
> I was not able to reproduce this so far.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)




Re: [ovs-discuss] OVN: Configuring Flow_Table (vswitchd.config.db) settings through OVN

2024-03-07 Thread Naveen Yerramneni via discuss


> On 07-Mar-2024, at 9:10 PM, Numan Siddique  wrote:
> 
> On Wed, Mar 6, 2024 at 9:41 PM Naveen Yerramneni via discuss
>  wrote:
>> 
>> 
>> 
>>> On 07-Mar-2024, at 6:02 AM, Numan Siddique  wrote:
>>> 
>>> On Wed, Mar 6, 2024 at 3:07 PM Naveen Yerramneni via discuss
>>>  wrote:
>>>> 
>>>> Hi All,
>>>> 
>>>> We are exploring the possibility of doing some Flow_Table settings (like 
>>>> classifier optimizations)  through OVN.
>>>> 
>>>> One possible option could be to expose this in ovn-nb config and propagate 
>>>> it to ovn-sb.
>>>> - Add new table with name “Flow_Config” which stores settings (similar to 
>>>> Flow_Table in vswitchd.conf.db)
>>>> - Add new columns “flow_table_in_settings” and “flow_table_out_settings” 
>>>> in NB_Global and SB_Global tables.
>>>>  The type of these columns is map of : where key 
>>>> is logical pipeline stage number and
>>>> value points to a row entry in Flow_Config table.
>>>> 
>>>> OVN controller uses this information and configures vswitchd.conf.db.
>>>> - Flow_Table rows in vswitchd.conf.db are populated using Flow_Config 
>>>> table in southbound.
>>>> - Bridge table's flow_tables column is populated using keys (logical table 
>>>> numbers) in flow_table_in_settings and
>>>> flow_table_out_settings columns of SB_Global table . During configuration, 
>>>> OVN controller adds offset
>>>> OFTABLE_LOG_INGRESS_PIPELINE for ingress tables and 
>>>> OFTABLE_LOG_EGRESS_PIPELINE for egress pipelines.
>>>> 
>>>> Probably a new command can be added to northd to dump the logical switch 
>>>> and logical router
>>>> ingress and egress pipeline stage table names and numbers for reference.
>>>> 
>>>> Please share your thoughts/inputs on this.
>>> 
>>> Generally,  to configure anything which is chassis related,  we have
>>> used the local openvswitch table.  Each ovn-controller would read
>>> that and configure accordingly.  One example is - 
>>> ovn-openflow-probe-interval.
>>> 
>>> Can't we do something similar here ?  I understand that this config
>>> needs to be done on each chassis,  but if it is a one time thing,
>>> then perhaps it should not be a big concern.  Does this approach work for 
>>> you ?
>>> 
>>> Thanks
>>> Numan
>> 
>> Hi Numan,
>> 
>> Thanks for the reply.
>> 
>> The reason why I thought of putting this config in northbound is:
>>  - Logical table numbers and physical table numbers can potentially change 
>> release to release.
>>  - If we have this config in northbound, it is possible to add some 
>> automation in CMS plug-in to reconfigure
>>the flow_table_settings on the new logical table numbers when northd gets 
>> upgraded. CMS plug-in can
>>have its own logic to find out the logical table numbers.
>>Ex: CMS plug-in  can get the logical table numbers either by parsing the 
>> northd new command
>>output that dumps logical pipeline table names and numbers (or) by other 
>> means.
>> 
>> 
>> If the recommendation is to get this done on the chassis side then, I can 
>> think of below alternative.
>>  - Update northd to dump "logical pipeline stage name: logical table number” 
>> in options:logical-table-mapping
>>of SB_Global table.
>>  - Update OVN controller to dump the "logical pipeline stage name: physical 
>> table number" mapping
>>to the external_ids:oftable-mapping of openvswitch table whenever SB 
>> entry get updated. Additionally, we can
>>possibly add a new command to ovn-controller to dump this 
>> oftable-mapping.
>> - Some automation can be done on the chassis side to use the table mapping 
>> information that ovn-controller dumps
>>   and configure the vswitchd.conf.db.
>> 
>> 
>> Please let me know your suggestions.
> 
> Ok.  I was not aware of the "Flow_Table" feature of OVS.  I think it
> makes sense to put the config in Northbound db
> and propagate it down to OVS via ovn-controller.
> 
> This is what I think can be done:
> 
> 1.  Add new Northbound tables - Switch_Pipeline_Config and
> Router_Pipeline_Config.
> 2.  ovn-northd will create rows in these tables for each stage.
> 3.  CMS will set the config for each row (if it wants to)
> 4. ovn-northd will replicate these to Southbound.
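For readers unfamiliar with the OVS side of this thread: the per-table settings under discussion live in the Flow_Table table of vswitchd.conf.db, which today can be set manually. The sketch below shows the manual equivalent of what ovn-controller would automate; the bridge name, OpenFlow table number, and tuning values are illustrative placeholders, not part of the proposal.

```shell
# Manual vswitchd.conf.db configuration the proposal would automate (sketch).
# br-int, table 71, and the limits below are placeholders.
ovs-vsctl -- --id=@ft create Flow_Table name=ovn_stage_tuning \
    flow_limit=10000 overflow_policy=evict \
  -- set Bridge br-int flow_tables:71=@ft
```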

Can’t addRole using JMX API call

2024-03-07 Thread Naveen kumar
Hi Team,

We are unable to invoke addRole on the addresses/queues.
Can you please help us with the right API call using the Jolokia endpoint? 

Regards ,
Naveen
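Without the broker at hand, the following is only a sketch of the usual shape of a Jolokia "exec" call against the Artemis ActiveMQServerControl MBean. The broker name ("0.0.0.0"), credentials, port, address match, and role lists are all assumptions to adapt; addSecuritySettings is used here because it is the management operation commonly used to set role permissions for an address match.

```shell
# Sketch only: exec an ActiveMQServerControl operation via Jolokia.
# Broker name "0.0.0.0", admin:admin, port 8161, and the role values are
# placeholders; the Origin header is typically required by jolokia-access.xml.
curl -s -u admin:admin \
  -H 'Origin: http://localhost:8161' \
  -H 'Content-Type: application/json' \
  -X POST http://localhost:8161/console/jolokia/ \
  -d '{
        "type": "exec",
        "mbean": "org.apache.activemq.artemis:broker=\"0.0.0.0\"",
        "operation": "addSecuritySettings(java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String,java.lang.String)",
        "arguments": ["my.address.#",
                      "admins", "admins", "admins", "admins",
                      "admins", "admins", "admins"]
      }'
```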




[ovs-discuss] OVN: Configuring Flow_Table (vswitchd.config.db) settings through OVN

2024-03-07 Thread Naveen Yerramneni via discuss
Hi All,
 
We are exploring the possibility of doing some Flow_Table settings (like 
classifier optimizations)  through OVN.

One possible option could be to expose this in ovn-nb config and propagate it 
to ovn-sb.
  - Add new table with name “Flow_Config” which stores settings (similar to 
Flow_Table in vswitchd.conf.db)
  - Add new columns “flow_table_in_settings” and “flow_table_out_settings” in 
NB_Global and SB_Global tables.
The type of these columns is map of : where key is 
logical pipeline stage number and 
   value points to a row entry in Flow_Config table.
 
OVN controller uses this information and configures vswitchd.conf.db.
  - Flow_Table rows in vswitchd.conf.db are populated using Flow_Config table 
in southbound.
  - Bridge table's flow_tables column is populated using keys (logical table 
numbers) in flow_table_in_settings and
   flow_table_out_settings columns of SB_Global table . During configuration, 
OVN controller adds offset
  OFTABLE_LOG_INGRESS_PIPELINE for ingress tables and 
OFTABLE_LOG_EGRESS_PIPELINE for egress pipelines.

Probably a new command can be added to northd to dump the logical switch and 
logical router
ingress and egress pipeline stage table names and numbers for reference.

Please share your thoughts/inputs on this.

Thanks,
Naveen
___
discuss mailing list
disc...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss


Re: [ovs-discuss] OVN: Configuring Flow_Table (vswitchd.config.db) settings through OVN

2024-03-06 Thread Naveen Yerramneni via discuss


> On 07-Mar-2024, at 6:02 AM, Numan Siddique  wrote:
> 
> On Wed, Mar 6, 2024 at 3:07 PM Naveen Yerramneni via discuss
>  wrote:
>> 
>> Hi All,
>> 
>> We are exploring the possibility of doing some Flow_Table settings (like 
>> classifier optimizations)  through OVN.
>> 
>> One possible option could be to expose this in ovn-nb config and propagate 
>> it to ovn-sb.
>> - Add new table with name “Flow_Config” which stores settings (similar to 
>> Flow_Table in vswitchd.conf.db)
>> - Add new columns “flow_table_in_settings” and “flow_table_out_settings” in 
>> NB_Global and SB_Global tables.
>>   The type of these columns is map of : where key is 
>> logical pipeline stage number and
>>  value points to a row entry in Flow_Config table.
>> 
>> OVN controller uses this information and configures vswitchd.conf.db.
>> - Flow_Table rows in vswitchd.conf.db are populated using Flow_Config table 
>> in southbound.
>> - Bridge table's flow_tables column is populated using keys (logical table 
>> numbers) in flow_table_in_settings and
>>  flow_table_out_settings columns of SB_Global table . During configuration, 
>> OVN controller adds offset
>> OFTABLE_LOG_INGRESS_PIPELINE for ingress tables and 
>> OFTABLE_LOG_EGRESS_PIPELINE for egress pipelines.
>> 
>> Probably a new command can be added to northd to dump the logical switch and 
>> logical router
>> ingress and egress pipeline stage table names and numbers for reference.
>> 
>> Please share your thoughts/inputs on this.
> 
> Generally,  to configure anything which is chassis related,  we have
> used the local openvswitch table.  Each ovn-controller would read
> that and configure accordingly.  One example is - ovn-openflow-probe-interval.
> 
> Can't we do something similar here ?  I understand that this config
> needs to be done on each chassis,  but if it is a one time thing,
> then perhaps it should not be a big concern.  Does this approach work for you 
> ?
> 
> Thanks
> Numan

Hi Numan,

Thanks for the reply.

The reason why I thought of putting this config in northbound is:
  - Logical table numbers and physical table numbers can potentially change 
release to release.
  - If we have this config in northbound, it is possible to add some automation 
in CMS plug-in to reconfigure
the flow_table_settings on the new logical table numbers when northd gets 
upgraded. CMS plug-in can
have its own logic to find out the logical table numbers.
Ex: CMS plug-in  can get the logical table numbers either by parsing the 
northd new command
output that dumps logical pipeline table names and numbers (or) by other 
means.


If the recommendation is to get this done on the chassis side then, I can think 
of below alternative.
  - Update northd to dump "logical pipeline stage name: logical table number” 
in options:logical-table-mapping
of SB_Global table.
  - Update OVN controller to dump the "logical pipeline stage name: physical 
table number" mapping
to the external_ids:oftable-mapping of openvswitch table whenever SB entry 
get updated. Additionally, we can
possibly add a new command to ovn-controller to dump this oftable-mapping.
 - Some automation can be done on the chassis side to use the table mapping 
information that ovn-controller dumps
   and configure the vswitchd.conf.db.
 

Please let me know your suggestions. 

Thanks,
Naveen


>> 
>> Thanks,
>> Naveen



[ovs-discuss] OVN: Configuring Flow_Table (vswitchd.config.db) settings through OVN

2024-03-06 Thread Naveen Yerramneni via discuss
Hi All,

We are exploring the possibility of doing some Flow_Table settings (like 
classifier optimizations)  through OVN.

One possible option could be to expose this in ovn-nb config and propagate it 
to ovn-sb.
 - Add new table with name “Flow_Config” which stores settings (similar to 
Flow_Table in vswitchd.conf.db)
 - Add new columns “flow_table_in_settings” and “flow_table_out_settings” in 
NB_Global and SB_Global tables.
   The type of these columns is map of : where key is 
logical pipeline stage number and 
  value points to a row entry in Flow_Config table.

OVN controller uses this information and configures vswitchd.conf.db.
 - Flow_Table rows in vswitchd.conf.db are populated using Flow_Config table in 
southbound.
 - Bridge table's flow_tables column is populated using keys (logical table 
numbers) in flow_table_in_settings and
  flow_table_out_settings columns of SB_Global table . During configuration, 
OVN controller adds offset
 OFTABLE_LOG_INGRESS_PIPELINE for ingress tables and 
OFTABLE_LOG_EGRESS_PIPELINE for egress pipelines.

Probably a new command can be added to northd to dump the logical switch and 
logical router
ingress and egress pipeline stage table names and numbers for reference.

Please share your thoughts/inputs on this.

Thanks,
Naveen


Re: [ovs-dev] [PATCH OVN] Add support to make fdb table local to the chassis.

2024-03-06 Thread Naveen Yerramneni



> On 18-Dec-2023, at 8:53 PM, Dumitru Ceara  wrote:
> 
> On 12/18/23 16:17, Naveen Yerramneni wrote:
>> 
>> 
>>> On 18-Dec-2023, at 7:26 PM, Dumitru Ceara  wrote:
>>> 
>>> On 11/30/23 16:32, Dumitru Ceara wrote:
>>>> On 11/30/23 15:54, Naveen Yerramneni wrote:
>>>>> 
>>>>> 
>>>>>> On 30-Nov-2023, at 6:06 PM, Dumitru Ceara  wrote:
>>>>>> 
>>>>>> On 11/30/23 09:45, Naveen Yerramneni wrote:
>>>>>>> 
>>>>>>> 
>>>>>>>> On 29-Nov-2023, at 2:24 PM, Dumitru Ceara  wrote:
>>>>>>>> 
>>>>>>>> On 11/29/23 07:45, naveen.yerramneni wrote:
>>>>>>>>> This functionality can be enabled at the logical switch level:
>>>>>>>>> - "other_config:fdb_local" can be used to enable/disable this
>>>>>>>>> functionality, it is disabled by default.
>>>>>>>>> - "other_config:fdb_local_idle_timeout" specifies idle timeout
>>>>>>>>> for locally learned fdb flows, default timeout is 300 secs.
>>>>>>>>> 
>>>>>>>>> If enabled, below lflow is added for each port that has unknown addr 
>>>>>>>>> set.
>>>>>>>>> - table=2 (ls_in_lookup_fdb), priority=100, match=(inport == 
>>>>>>>>> ),
>>>>>>>>> action=(commit_fdb_local(timeout=); next;
>>>>>>>>> 
>>>>>>>>> New OVN action: "commit_fdb_local". This sets following OVS action.
>>>>>>>>> - 
>>>>>>>>> learn(table=71,idle_timeout=,delete_learned,OXM_OF_METADATA[],
>>>>>>>>>   
>>>>>>>>> NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:NXM_NX_REG14[]->NXM_NX_REG15[])
>>>>>>>>> 
>>>>>>>>> This is useful when OVN is managing VLAN network that has multiple 
>>>>>>>>> ports
>>>>>>>>> set with unknown addr and localnet_learn_fdb is enabled. With this 
>>>>>>>>> config,
>>>>>>>>> if there is east-west traffic flowing between VMs part of same VLAN
>>>>>>>>> deployed on different hypervisors then, MAC addrs of the source and
>>>>>>>>> destination VMs keeps flapping between VM port and localnet port in 
>>>>>>>>> Southbound FDB table. Enabling fdb_local config makes fdb table local 
>>>>>>>>> to
>>>>>>>>> the chassis and avoids MAC flapping.
>>>>>>>>> 
>>>>>>>>> Signed-off-by: Naveen Yerramneni 
>>>>>>>>> ---
>>>>>>>> 
>>>>>>>> Hi Naveen,
>>>>>>>> 
>>>>>>>> Thanks a lot for the patch!
>>>>>>>> 
>>>>>>>> Just a note, we already have a fix for the east-west traffic that 
>>>>>>>> causes
>>>>>>>> FDB flapping when localnet is used:
>>>>>>>> 
>>>>>>>> https://github.com/ovn-org/ovn/commit/2acf91e9628e9481c48e4a6cec8ad5159fdd6d2e
>>>>>>>> 
>>>>>>>> https://github.com/ovn-org/ovn/commit/f3a14907fe2b1ecdcfddfbed595cd097b6efbe14
>>>>>>>> 
>>>>>>>> In general, however, I think it's a very good idea to move the FDB away
>>>>>>>> from the Southbound and make it local to each hypervisor.  That reduces
>>>>>>>> load on the Southbound among other things.
>>>>>>>> 
>>>>>>> 
>>>>>>> Hi Dumitru,
>>>>>>> 
>>>>>>> Thanks for informing about the patches.
>>>>>>> Yes, local FDB reduces load on southbound.
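For reference, the learn() action quoted from the commit message corresponds roughly to an OpenFlow flow like the sketch below. The bridge name, the reg14 (logical inport) value, the 300 s timeout, and the resubmit target are illustrative; the table numbers shown are the logical ones from the commit message, and the real flow is installed by ovn-controller at offset OpenFlow table numbers, not added by hand.

```shell
# Sketch of the locally-learned FDB flow described in the quoted patch.
# br-int, reg14=0x5, and resubmit(,3) are placeholders.
ovs-ofctl add-flow br-int \
  'table=2,priority=100,reg14=0x5,actions=learn(table=71,idle_timeout=300,delete_learned,OXM_OF_METADATA[],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:NXM_NX_REG14[]->NXM_NX_REG15[]),resubmit(,3)'
```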

[yocto] [meta-zephyr][PATCH 2/2] zephyr-sdk: Upgrade to version 0.16.5-1

2024-03-03 Thread Naveen Saini
https://github.com/zephyrproject-rtos/sdk-ng/releases/tag/v0.16.5-1

Signed-off-by: Naveen Saini 
---
 .../{zephyr-sdk_0.16.3.bb => zephyr-sdk_0.16.5-1.bb}  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
 rename meta-zephyr-core/recipes-devtools/zephyr-sdk/{zephyr-sdk_0.16.3.bb => 
zephyr-sdk_0.16.5-1.bb} (83%)

diff --git a/meta-zephyr-core/recipes-devtools/zephyr-sdk/zephyr-sdk_0.16.3.bb 
b/meta-zephyr-core/recipes-devtools/zephyr-sdk/zephyr-sdk_0.16.5-1.bb
similarity index 83%
rename from meta-zephyr-core/recipes-devtools/zephyr-sdk/zephyr-sdk_0.16.3.bb
rename to meta-zephyr-core/recipes-devtools/zephyr-sdk/zephyr-sdk_0.16.5-1.bb
index e34424e..0b608bc 100644
--- a/meta-zephyr-core/recipes-devtools/zephyr-sdk/zephyr-sdk_0.16.3.bb
+++ b/meta-zephyr-core/recipes-devtools/zephyr-sdk/zephyr-sdk_0.16.5-1.bb
@@ -14,8 +14,8 @@ SDK_ARCHIVE = "zephyr-sdk-${PV}_linux-${BUILD_ARCH}.tar.xz"
 SDK_NAME = "${BUILD_ARCH}"
 SRC_URI = 
"https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v${PV}/${SDK_ARCHIVE};subdir=${S};name=${SDK_NAME};
 
-SRC_URI[x86_64.sha256sum] = 
"9eb557d09d0e9d4e0b27f81605250a0618bb929e423987ef40167a3307c82262"
-SRC_URI[aarch64.sha256sum] = 
"3acfb4fb68fc5e98f44428249b54c947cdf78f1164176e98160ca75175ad26c1"
+SRC_URI[x86_64.sha256sum] = 
"01f942146d2fc6d6afd5afe6f4b5c315525d2c937c7e613d3312b0992b33bc68"
+SRC_URI[aarch64.sha256sum] = 
"1749b6891a6a6e70b013d8b31ff067c5a94891f651985a6da9a20367b2deb6c7"
 
 do_configure[noexec] = "1"
 do_compile[noexec] = "1"
-- 
2.34.1


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#62659): https://lists.yoctoproject.org/g/yocto/message/62659
Mute This Topic: https://lists.yoctoproject.org/mt/104718135/21656
Group Owner: yocto+ow...@lists.yoctoproject.org
Unsubscribe: https://lists.yoctoproject.org/g/yocto/unsub 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[yocto] [meta-zephyr][PATCH 1/2] zephyr-kernel: Add support for v3.6.0

2024-03-03 Thread Naveen Saini
https://github.com/zephyrproject-rtos/zephyr/releases/tag/v3.6.0

Signed-off-by: Naveen Saini 
---
 ...y-generation-issue-in-cross-compila.patch} |   0
 .../zephyr-kernel/zephyr-kernel-src-3.5.0.inc |   2 +-
 .../zephyr-kernel/zephyr-kernel-src-3.6.0.inc | 270 ++
 .../zephyr-kernel/zephyr-kernel-src.inc   |   2 +-
 4 files changed, 272 insertions(+), 2 deletions(-)
 rename 
meta-zephyr-core/recipes-kernel/zephyr-kernel/files/{0001-3.5-x86-fix-efi-binary-generation-issue-in-cross-compila.patch
 => 0001-x86-fix-efi-binary-generation-issue-in-cross-compila.patch} (100%)
 create mode 100644 
meta-zephyr-core/recipes-kernel/zephyr-kernel/zephyr-kernel-src-3.6.0.inc

diff --git 
a/meta-zephyr-core/recipes-kernel/zephyr-kernel/files/0001-3.5-x86-fix-efi-binary-generation-issue-in-cross-compila.patch
 
b/meta-zephyr-core/recipes-kernel/zephyr-kernel/files/0001-x86-fix-efi-binary-generation-issue-in-cross-compila.patch
similarity index 100%
rename from 
meta-zephyr-core/recipes-kernel/zephyr-kernel/files/0001-3.5-x86-fix-efi-binary-generation-issue-in-cross-compila.patch
rename to 
meta-zephyr-core/recipes-kernel/zephyr-kernel/files/0001-x86-fix-efi-binary-generation-issue-in-cross-compila.patch
diff --git 
a/meta-zephyr-core/recipes-kernel/zephyr-kernel/zephyr-kernel-src-3.5.0.inc 
b/meta-zephyr-core/recipes-kernel/zephyr-kernel/zephyr-kernel-src-3.5.0.inc
index 459fdd8..f91c37d 100644
--- a/meta-zephyr-core/recipes-kernel/zephyr-kernel/zephyr-kernel-src-3.5.0.inc
+++ b/meta-zephyr-core/recipes-kernel/zephyr-kernel/zephyr-kernel-src-3.5.0.inc
@@ -131,7 +131,7 @@ SRC_URI_ZEPHYR_UOSCORE_UEDHOC ?= 
"git://github.com/zephyrproject-rtos/uoscore-ue
 SRC_URI_ZEPHYR_ZCBOR ?= 
"git://github.com/zephyrproject-rtos/zcbor;protocol=https"
 
 SRC_URI_PATCHES ?= "\
-
file://0001-3.5-x86-fix-efi-binary-generation-issue-in-cross-compila.patch;patchdir=zephyr
 \
+
file://0001-x86-fix-efi-binary-generation-issue-in-cross-compila.patch;patchdir=zephyr
 \
 "
 
 SRC_URI = "\
diff --git 
a/meta-zephyr-core/recipes-kernel/zephyr-kernel/zephyr-kernel-src-3.6.0.inc 
b/meta-zephyr-core/recipes-kernel/zephyr-kernel/zephyr-kernel-src-3.6.0.inc
new file mode 100644
index 000..5b09aac
--- /dev/null
+++ b/meta-zephyr-core/recipes-kernel/zephyr-kernel/zephyr-kernel-src-3.6.0.inc
@@ -0,0 +1,270 @@
+# Auto-generated from zephyr-kernel-src.inc.jinja
+
+SRCREV_FORMAT = "default"
+
+SRCREV_default = "468eb56cf242eedba62006ee758700ee6148763f"
+SRCREV_acpica = "da5f2721e1c7f188fe04aa50af76f4b94f3c3ea3"
+SRCREV_bsim = "384a091445c57b44ac8cbd18ebd245b47c71db94"
+SRCREV_babblesim_base = "19d62424c0802c6c9fc15528febe666e40f372a1"
+SRCREV_babblesim_ext_2G4_libPhyComv1 = 
"9018113a362fa6c9e8f4b9cab9e5a8f12cc46b94"
+SRCREV_babblesim_ext_2G4_phy_v1 = "d47c6dd90035b41b14f6921785ccb7b8484868e2"
+SRCREV_babblesim_ext_2G4_channel_NtNcable = 
"20a38c997f507b0aa53817aab3d73a462fff7af1"
+SRCREV_babblesim_ext_2G4_channel_multiatt = 
"bde72a57384dde7a4310bcf3843469401be93074"
+SRCREV_babblesim_ext_2G4_modem_magic = 
"cb70771794f0bf6f262aa474848611c68ae8f1ed"
+SRCREV_babblesim_ext_2G4_modem_BLE_simple = 
"809ab073159c9ab6686c2fea5749b0702e0909f7"
+SRCREV_babblesim_ext_2G4_device_burst_interferer = 
"5b5339351d6e6a2368c686c734dc8b2fc65698fc"
+SRCREV_babblesim_ext_2G4_device_WLAN_actmod = 
"9cb6d8e72695f6b785e57443f0629a18069d6ce4"
+SRCREV_babblesim_ext_2G4_device_playback = 
"85c645929cf1ce995d8537107d9dcbd12ed64036"
+SRCREV_babblesim_ext_libCryptov1 = "eed6d7038e839153e340bd333bc43541cb90ba64"
+SRCREV_cmsis = "4b96cbb174678dcd3ca86e11e1f24bc5f8726da0"
+SRCREV_cmsis-dsp = "6489e771e9c405f1763b52d64a3f17a1ec488ace"
+SRCREV_cmsis-nn = "0c8669d81381ccf3b1a01d699f3b68b50134a99f"
+SRCREV_edtt = "64e5105ad82390164fb73fc654be3f73a608209a"
+SRCREV_fatfs = "427159bf95ea49b7680facffaa29ad506b42709b"
+SRCREV_hal_altera = "0d225ddd314379b32355a00fb669eacf911e750d"
+SRCREV_hal_ambiq = "ff4ca358d730536addf336c40c3faa4ebf1df00a"
+SRCREV_hal_atmel = "aad79bf530b69b72712d18873df4120ad052d921"
+SRCREV_hal_espressif = "67fa60bdffca7ba8ed199aecfaa26f485f24878b"
+SRCREV_hal_ethos_u = "90ada2ea5681b2a2722a10d2898eac34c2510791"
+SRCREV_hal_gigadevice = "2994b7dde8b0b0fa9b9c0ccb13474b6a486cddc3"
+SRCREV_hal_infineon = "69c883d3bd9fac8a18dd8384624b8c472a68d06f"
+SRCREV_hal_intel = "7b4c25669f1513b0d6d6ee78ee42340d91958884"
+SRCREV_hal_microchip = "5d079f1683a00b801373f5d181d4e33b30d5"
+SRCREV_hal_nordic = "dce8519f7da37b0a745237679fd3f88250b495ff"
+SRCREV_hal_nuvoton = "68a91bb343ff47e40dbd9189a7d6e3ee801a7135"
+SRCREV_hal_nxp = "d45b14c198d778658b7853b48378d2e132a6c4be"
+SRCREV_hal_openisa = "eabd5

[ovs-dev] [PATCH OVN v2] DHCP Relay Agent support for overlay subnets.

2024-03-03 Thread Naveen Yerramneni
s vif0  #Only MAC address has to be 
specified when logical ports are created.
 ovn-nbctl lsp-add ls0 lrp1-attachment
 ovn-nbctl lsp-set-type lrp1-attachment router
 ovn-nbctl lsp-set-addresses lrp1-attachment
 ovn-nbctl lsp-set-options lrp1-attachment router-port=lrp1
 ovn-nbctl lr-add lr0
 ovn-nbctl lrp-add lr0 lrp1   #GATEWAY IP is set in 
GIADDR field when relaying the DHCP requests to server.
 ovn-nbctl lrp-add lr0 lrp-ext  
 ovn-nbctl ls-add ls-ext
 ovn-nbctl lsp-add ls-ext lrp-ext-attachment
 ovn-nbctl lsp-set-type lrp-ext-attachment router
 ovn-nbctl lsp-set-addresses lrp-ext-attachment
 ovn-nbctl lsp-set-options lrp-ext-attachment router-port=lrp-ext
 ovn-nbctl lsp-add ls-ext ln_port
     ovn-nbctl lsp-set-addresses ln_port unknown
 ovn-nbctl lsp-set-type ln_port localnet
 ovn-nbctl lsp-set-options ln_port network_name=physnet1
 # Enable DHCP Relay feature
 ovn-nbctl create DHCP_Relay name=dhcp_relay_test servers=
 ovn-nbctl set Logical_Router_port lrp1 dhcp_relay=
 ovn-nbctl set Logical_Switch ls0 
other_config:dhcp_relay_port=lrp1-attachment

Limitations:

  - All OVN features that need an IP address to be configured on the logical
port (like proxy ARP, etc.) will not be supported for overlay subnets on which
DHCP relay is enabled.

References:
--
  - rfc1541, rfc1542, rfc2131

Signed-off-by: Naveen Yerramneni 
Co-authored-by: Huzaifa Calcuttawala 
Signed-off-by: Huzaifa Calcuttawala 
CC: Mary Manohar 
---
V2:
  Addressed review comments from Numan. 
---
 controller/pinctrl.c  | 596 +-
 include/ovn/actions.h |  27 ++
 lib/actions.c | 149 +++
 lib/ovn-l7.h  |   2 +
 northd/northd.c   | 265 ++-
 northd/northd.h   |  41 +--
 ovn-nb.ovsschema  |  19 +-
 ovn-nb.xml|  39 +++
 tests/atlocal.in  |   3 +
 tests/ovn-northd.at   |  38 +++
 tests/ovn.at  | 293 +++--
 tests/system-ovn.at   | 148 +++
 utilities/ovn-trace.c |  67 +
 13 files changed, 1576 insertions(+), 111 deletions(-)

diff --git a/controller/pinctrl.c b/controller/pinctrl.c
index 98b29de9f..a776ac7c5 100644
--- a/controller/pinctrl.c
+++ b/controller/pinctrl.c
@@ -1909,6 +1909,514 @@ is_dhcp_flags_broadcast(ovs_be16 flags)
 return flags & htons(DHCP_BROADCAST_FLAG);
 }
 
+static const char *dhcp_msg_str[] = {
+[0] = "INVALID",
+[DHCP_MSG_DISCOVER] = "DISCOVER",
+[DHCP_MSG_OFFER] = "OFFER",
+[DHCP_MSG_REQUEST] = "REQUEST",
+[OVN_DHCP_MSG_DECLINE] = "DECLINE",
+[DHCP_MSG_ACK] = "ACK",
+[DHCP_MSG_NAK] = "NAK",
+[OVN_DHCP_MSG_RELEASE] = "RELEASE",
+[OVN_DHCP_MSG_INFORM] = "INFORM"
+};
+
+static bool
+dhcp_relay_is_msg_type_supported(uint8_t msg_type)
+{
+return (msg_type >= DHCP_MSG_DISCOVER && msg_type <= OVN_DHCP_MSG_RELEASE);
+}
+
+static const char *dhcp_msg_str_get(uint8_t msg_type)
+{
+if (!dhcp_relay_is_msg_type_supported(msg_type)) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "Unknown DHCP msg type: %u", msg_type);
+return "UNKNOWN";
+}
+return dhcp_msg_str[msg_type];
+}
+
+static const struct dhcp_header *
+dhcp_get_hdr_from_pkt(struct dp_packet *pkt_in, const char **in_dhcp_pptr,
+  const char *end)
+{
+/* Validate the DHCP request packet.
+ * Format of the DHCP packet is
+ * 
+ *| UDP HEADER  | DHCP HEADER  | 4 Byte DHCP Cookie | DHCP OPTIONS(var 
len)|
+ * 
+ */
+
+*in_dhcp_pptr = dp_packet_get_udp_payload(pkt_in);
+if (*in_dhcp_pptr == NULL) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Invalid or incomplete DHCP packet received");
+return NULL;
+}
+
+const struct dhcp_header *dhcp_hdr
+= (const struct dhcp_header *) *in_dhcp_pptr;
+(*in_dhcp_pptr) += sizeof *dhcp_hdr;
+if (*in_dhcp_pptr > end) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Invalid or incomplete DHCP packet received, "
+ "bad data length");
+return NULL;
+}
+
+if (dhcp_hdr->htype != 0x1) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Packet is received with "
+"unsupported hardware type");
+return NULL;
+}
+
+if (dhcp_hdr->hlen != 0x6) {
+static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 5);
+VLOG_WARN_RL(&rl, "DHCP: Packet is received with "
+ 

[jira] [Created] (HIVE-28101) [Athena] Add connector for Amazon Athena

2024-02-29 Thread Naveen Gangam (Jira)
Naveen Gangam created HIVE-28101:


 Summary: [Athena] Add connector for Amazon Athena
 Key: HIVE-28101
 URL: https://issues.apache.org/jira/browse/HIVE-28101
 Project: Hive
  Issue Type: Sub-task
  Components: Standalone Metastore
Reporter: Naveen Gangam


Recently added a HIVEJDBC connector for Hive-to-Hive over JDBC. This seems to
also work for Hive to EMR with a local catalog. It does not seem to work with
EMR backed by an AWS Glue Catalog.

Just filing this jira to assess the need for a connector implementation for 
Amazon Athena with Glue Catalog.

[~zhangbutao] What do you think? I do not have access to a test bed for Athena.




