Hi All,
This is regarding running worker threads with an af_packet interface on an
Ubuntu 16.10 VM.
I have tried configuring the af_packet interface with CPU worker thread
support, but I am getting an assertion failure. I am not sure whether it is due
to a worker thread getting scheduled on core 0.
Config :
---
sta
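The config snippet above is cut off, so for comparison, here is a minimal
startup.conf sketch for this kind of setup. It assumes a kernel interface named
eth1 and cores 1-2 free for workers; the point is that corelist-workers
excludes core 0, so no worker can land there:

  cpu {
    main-core 0            # main thread stays on core 0
    corelist-workers 1-2   # workers pinned to cores 1 and 2
  }

and then the interface is created at the VPP CLI:

  vpp# create host-interface name eth1
  vpp# set interface state host-eth1 up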
Hi, VPP crew!
As far as I know, we can use VPP as a traffic shaper (or can't we?).
E.g. I need to restrict the in/out speed of a subscriber with private address
192.168.2.10 to 5 Mbps (local interface GigabitEthernet0/5/0, external
interface GigabitEthernet0/6/0).
How can we do it?
Thanks!
--
Yours sincerely,
Denis Lotarev
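One way to approach this in VPP is a policer attached via a classify table.
The following is a rough sketch only; the exact CLI syntax is an assumption
that varies by release, and the rates and bursts are illustrative:

  vpp# configure policer name sub10 type 1r2c cir 5000 cb 15000 rate kbps \
       conform-action transmit exceed-action drop
  vpp# classify table mask l3 ip4 src buckets 16
  vpp# classify session policer-hit-next sub10 table-index 0 \
       match l3 ip4 src 192.168.2.10
  vpp# set policer classify interface GigabitEthernet0/5/0 ip4-table 0

This polices one direction (traffic sourced from 192.168.2.10); a second table
matching on dst, attached to GigabitEthernet0/6/0, would cover the other
direction. Note that a policer drops excess traffic rather than queueing it,
so this is rate limiting rather than true shaping.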
Hi, Ole!
The PPTP connection is working well via hairpin NAT 1:1. Thanks!
--
Yours sincerely,
Denis Lotarev
On Tuesday, June 20, 2017, 5:07:48 PM GMT+5, Ole Troan
wrote:
Denis,
Matus found the issue with hairpinning. Merged fix in
https://gerrit.fd.io/r/#/c/7200/
Please let me know if that also fixes this issue.
All updates are complete. FD.io systems are available.
Thank you for your patience,
Vanessa
On 06/20/2017 05:04 PM, Vanessa Valderrama wrote:
> Starting FD.io updates and rebooting all systems due to critical security
> vulnerabilities reported in both the Linux kernel and glibc.
>
> Downtime: Approx
Starting FD.io updates and rebooting all systems due to critical security
vulnerabilities reported in both the Linux kernel and glibc.
Downtime: Approximately 30 minutes
Thank you,
Vanessa
On 06/19/2017 05:55 PM, Ed Warnicke wrote:
> Looping in the broader community as various release branch cutting
Hi, Oleg!
Today we had an issue with one more subscriber under iptables NAT on Linux
4.4.35-1-lts: more than one subscriber could not connect to any PPTP server.
We had to load two modules, nf_nat_pptp and nf_conntrack_pptp. After this,
the subscribers connected to their servers successfully.
FYI, Linu
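For anyone hitting the same thing, loading the helper is a one-liner on the
Linux side (modprobe resolves the nf_conntrack_pptp dependency automatically):

  # modprobe nf_nat_pptp

Without the helper, the kernel NAT cannot rewrite GRE call IDs, so typically
only one PPTP client behind the NAT can connect at a time.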
We usually put the most commonly traversed graph arc at index 0, but it hardly
matters.
The default value of node->cached_next_index [=zero] is used exactly once in
recorded history...
Thanks... Dave
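To illustrate the convention, this is roughly what it looks like in a node
registration; my_node and the arc layout here are hypothetical:

  VLIB_REGISTER_NODE (my_node) = {
    .name = "my-node",
    .vector_size = sizeof (u32),
    .n_next_nodes = 2,
    .next_nodes = {
      [0] = "ip4-lookup",  /* most commonly traversed arc at index 0 */
      [1] = "error-drop",
    },
  };

With this layout, the initial speculation of cached_next_index == 0 happens to
pick the common arc.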
From: Dharmaray Kundargi [mailto:dharmaray.kunda...@mavenir.com]
Sent: Tuesday, June 20, 2017
Thanks Dave, that hints at what should be at the 0th index in .next_nodes of a node.
Regards
Dharmaray
From: Dave Barach (dbarach) [mailto:dbar...@cisco.com]
Sent: Tuesday, June 20, 2017 6:41 PM
To: Dharmaray Kundargi ; vpp-dev@lists.fd.io
Subject: RE: Default next node for a node.
node->cached_next_
It seems that extraction of the 5-tuple from a packet which has a VLAN tag is
not done correctly. Is that right?
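For context, a minimal sketch of the VLAN-aware parsing this requires;
locate_ip4_header is a hypothetical helper, not the ACL plugin's actual code.
A parser that assumes a fixed 14-byte ethernet header reads the 5-tuple from
the wrong offset on tagged frames:

  #include <vnet/ethernet/ethernet.h>
  #include <vnet/ip/ip4_packet.h>

  /* Return the IPv4 header, allowing for one 802.1Q tag. */
  static inline ip4_header_t *
  locate_ip4_header (ethernet_header_t * eth)
  {
    u16 type = clib_net_to_host_u16 (eth->type);
    u8 *l3 = (u8 *) (eth + 1);

    if (type == ETHERNET_TYPE_VLAN)             /* 0x8100 */
      {
        ethernet_vlan_header_t *vlan = (ethernet_vlan_header_t *) l3;
        type = clib_net_to_host_u16 (vlan->type);
        l3 += sizeof (*vlan);                   /* skip the 4-byte tag */
      }
    return type == ETHERNET_TYPE_IP4 ? (ip4_header_t *) l3 : 0;
  }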
On Tue, Jun 20, 2017 at 8:16 PM, Ehsan Shahrokhi wrote:
> thank you Andrew for your quick response.
> sorry, I forgot to correct the acl_interface_list_dump output. I do know
> acl_interface_l
Thank you Andrew for your quick response.
Sorry, I forgot to correct the acl_interface_list_dump output. I do know
acl_interface_list_dump has a problem showing the ACL index list as input, and
its output is wrong. Indeed, I had set the ACL list on the interfaces in output
mode.
The corrected configuration is shown
Dear All,
Please be aware that this patch:
https://gerrit.fd.io/r/#/c/7138/
has made a minor change to the map.api
Thanks,
neale
-----Original Message-----
From: "Damjan Marion (Code Review)"
Reply-To: "dmarion.li...@gmail.com"
Date: Mond
Hello Anton,
Thanks for the fast response. We will check the local firewall settings as you
proposed.
Regards,
Jan
-----Original Message-----
From: Anton Baranov via RT [mailto:fdio-helpd...@rt.linuxfoundation.org]
Sent: Tuesday, June 20, 2017 17:13
To: Jan Gelety -X (jgelety - PANTHEON TECHNOLOGIE
Jan:
This is what I got from the fdio jenkins server (I did the tests with the
10.30.{52,53}.2 hosts):
$ ip ro get 10.30.52.2
10.30.52.2 via 10.30.48.1 dev eth0 src 10.30.48.5
cache
The traffic is going directly through the neutron router, so we don't block any
traffic on our firewall.
$ ping -q -
Hello Vanessa,
Thanks for the info.
Just a few remarks:
1. virl1 (10.30.51.28) - nodes of simulations started there are using subnet
10.30.52.0/24 and we are experiencing ssh timeouts in this subnet
2. virl2 (10.30.51.29) - nodes of simulations started there were using subnet
10.30.53.0/24 and
node->cached_next_index will be zero at the beginning of time. Depending on the
particulars involved, that might even be correct.
If not, the enqueue_x1/enqueue_x2 macros will fix the erroneous speculative
enqueues.
vlib_put_next_frame(...) maintains node->cached_next_index...
Thanks... Dave
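For reference, the standard single-buffer dispatch loop (as in the VPP sample
plugin) shows where the speculation enters and how it gets repaired;
MY_NEXT_IP4_LOOKUP is a hypothetical next-node index:

  u32 next_index = node->cached_next_index;  /* speculation; 0 on first call */

  while (n_left_from > 0)
    {
      u32 n_left_to_next;
      vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);

      while (n_left_from > 0 && n_left_to_next > 0)
        {
          u32 bi0 = from[0];
          u32 next0 = MY_NEXT_IP4_LOOKUP;  /* computed per packet */

          to_next[0] = bi0;
          from += 1; to_next += 1;
          n_left_from -= 1; n_left_to_next -= 1;

          /* If next0 != next_index, this undoes the speculative enqueue
             and moves the buffer to the correct next-node frame. */
          vlib_validate_buffer_enqueue_x1 (vm, node, next_index,
                                           to_next, n_left_to_next,
                                           bi0, next0);
        }

      /* Stores next_index back into node->cached_next_index. */
      vlib_put_next_frame (vm, node, next_index, n_left_to_next);
    }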
hi Ehsan,
The packet trace confirms the packet gets dropped on egress by ACL#1,
and the dump confirms all of the ACLs are applied on output (n_input =
0, so the interface does not have any input ACLs). The "output" word is erroneously
not printed. I made a fix for this in
https://gerrit.fd.io/r/#/c/7227/, you
Thank you!
I'll wait for the 2507 master revision and then test the fix.
Wow, you've added a new feature to the plan. Amazing :)
Of course, I'll send the results after testing the 2507 revision.
--
Yours sincerely,
Denis Lotarev
On Tuesday, June 20, 2017, 5:07:48 PM GMT+5, Ole Troan
wrote:
Deni
Denis,
Matus found the issue with hairpinning. Merged fix in
https://gerrit.fd.io/r/#/c/7200/
Please let me know if that also fixes this issue.
We'll do some better handling of fall-back to 3-tuple keys for normal NAPT
mode, so we can support PPTP without configuring 1:1. Hold tight.
https://j
Hi,
I was wondering how the speculated next node is decided for the first ever
packet to a node.
In the code for every graph node we see "next_index =
node->cached_next_index;".
But where is node->cached_next_index initialized before the node processes
the first ever packet?
Or does it mea
Hi Andrew,
My topology:
client1 --- subif1 [router-vpp] if2 --- client2
In this configuration the subif1 index is 8 and the if2 index is 2.
vat# acl_dump
vl_api_acl_details_t_handler:198: acl_index: 0, count: 1
ipv4 action 1 src 0.0.0.0/0 dst 0.0.0.0/0 proto 0 sport 1-65535 dport 1-65535
Ole, so sorry, it turned out there was a network problem in our infrastructure
while testing with parallel connections to PPTP server B and PPTP server C.
So the 2nd scheme works well :) Sorry for my mistake. But hairpinning is not
working in the 3rd scheme. I dumped traffic from Machine A when Machine B was
trying to conn
I dumped traffic from the second destination PPTP server while Machine A was
connected to Machine C in the 2nd scheme.
So, Machine A with public IP 2.2.2.2 and destination PPTP server (Machine C)
with public IP 5.5.5.5:
IP (tos 0x0, ttl 61, id 15901, offset 0, flags [DF], proto TCP (6), length 60)
2.2.2
Hi Denis,
Thanks a lot for testing!
> 1st scheme:
> Machine A (inside VPP with 1:1 static mapping) running PPTP _server_.
> Machine B (outside VPP with 1:1 iptables static mapping) running PPTP client.
> This scheme works well.
Splendid.
> 2nd scheme:
> Machine A (inside VPP with 1:1 static ma