[Openstack-operators] custom dashboards

2018-02-23 Thread Paras pradhan
What do you guys use to create custom dashboards for OpenStack? Do you use
Django and the Python OpenStack SDK, or do you modify the Horizon dashboard?


Thanks
Paras.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] custom build image is slow

2017-08-02 Thread Paras pradhan
It does have the virtio drivers, and the image file is qcow2, not raw. To
narrow down the issue, this is what I have done.

1) Download CentOS-7-x86_64-GenericCloud.qcow2 from centos.org. Upload it to
Glance; while creating a volume, Cinder's volume.log shows a 250 MB/s
download speed. This is normal in our case.

2) Now boot the qcow2 using KVM/virt-manager, add packages, power off, and
upload it to Glance; I see a 75 MB/s download speed while creating a
volume. If I run virt-sparsify on it, the speed drops to 30 MB/s.

Not sure why I am seeing this behaviour.

Thanks
Paras.
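As context for the speed numbers above: qcow2 files and the filesystems under them only allocate blocks that have actually been written, so the "apparent" size and the on-disk size can differ wildly, and tools that stream the image do different amounts of real work for the same virtual size. A minimal sketch of that distinction using plain coreutils (no qemu tools needed; the file name is arbitrary):

```shell
# Create a 1 GiB sparse file: the apparent size is 1 GiB, but almost
# nothing is allocated on disk until real data is written into it.
tmp=$(mktemp -d)
truncate -s 1G "$tmp/sparse.img"
apparent=$(stat -c %s "$tmp/sparse.img")                 # size readers see
allocated=$(( $(stat -c %b "$tmp/sparse.img") * 512 ))   # bytes on disk
echo "apparent=$apparent allocated=$allocated"
rm -r "$tmp"
```

A possible (unconfirmed) explanation for the observed drop: after virt-sparsify the image is mostly holes again, so the conversion step does less bulk reading per reported megabyte, and the MB/s figure in volume.log changes even though the real work differs.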

On Wed, Aug 2, 2017 at 2:25 AM, Van Leeuwen, Robert 
wrote:

> >> how do we install the virtio drivers if they are missing? How do I verify
> on the CentOS cloud image that they are there?
>
>
>
> >Unless it’s a very very ancient unsupported version of centos the virt-io
> drivers will be in the kernel package.
>
> >Do a lsmod and look for virtio to check if it is loaded.
>
>
>
> Forgot to mention: I do not think this will have anything to do with the
> upload speed to cinder even if it is not there.
>
>
>
> Cheers,
>
> Robert van Leeuwen
>
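A hedged sketch of the check Robert describes, meant to be run inside the guest; it looks for virtio first among loaded modules and then among module files shipped with the running kernel:

```shell
# Check for virtio support on the running kernel. Note that a miss here
# does not rule virtio out: the drivers may be compiled into the kernel.
if grep -qs virtio /proc/modules; then
    virtio_status="virtio modules loaded"
elif find "/lib/modules/$(uname -r)" -name 'virtio*' 2>/dev/null | grep -q .; then
    virtio_status="virtio modules available on disk but not loaded"
else
    virtio_status="no virtio modules found (may be built into the kernel)"
fi
echo "$virtio_status"
```

As Robert notes, any modern CentOS kernel package ships these drivers, so the interesting outcome is only the last branch.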


Re: [Openstack-operators] custom build image is slow

2017-08-01 Thread Paras pradhan
How do we install the virtio drivers if they are missing? How do I verify on
the CentOS cloud image that they are there?

On Tue, Aug 1, 2017 at 12:19 PM, Abel Lopez <alopg...@gmail.com> wrote:

> Your custom image is likely missing the virtio drivers that the cloud
> image has.
>
> Instead of running through the DVD installer, I'd suggest checking out
> diskimage-builder to make custom images for use on OpenStack.
>
> On Tue, Aug 1, 2017 at 10:16 AM Paras pradhan <pradhanpa...@gmail.com>
> wrote:
>
>> Also, this is what I've noticed with the CentOS cloud image I downloaded:
>> if I add a few packages (around a GB), the size goes up to 8 GB from 1.3 GB.
>> Running dd if=/dev/zero of=temp;sync;rm temp failed due to the size of
>> the disk on the cloud image, which is 10 G; zeroing failed with "No space
>> left on device". Any other options?
>>
>> Thanks
>> Paras.
>>
>> On Tue, Aug 1, 2017 at 9:43 AM, Paras pradhan <pradhanpa...@gmail.com>
>> wrote:
>>
>>> I ran virt-sparsify. So before running virt-sparsify it was 6.0GB and
>>> after it is 1.3GB.
>>>
>>> Thanks
>>> Paras.
>>>
>>> On Tue, Aug 1, 2017 at 2:53 AM, Tomáš Vondra <von...@homeatcloud.cz>
>>> wrote:
>>>
>>>> Hi!
>>>>
>>>> How big are the actual image files? Because qcow2 is a sparse format,
>>>> it does not store zeroes. If the free space in one image is zeroed out, it
>>>> will convert much faster. If that is the problem, use „dd if=/dev/zero
>>>> of=temp;sync;rm temp“ or zerofree.
>>>>
>>>> Tomas
>>>>
>>>>
>>>>
>>>> *From:* Paras pradhan [mailto:pradhanpa...@gmail.com]
>>>> *Sent:* Monday, July 31, 2017 11:54 PM
>>>> *To:* openstack-operators@lists.openstack.org
>>>> *Subject:* [Openstack-operators] custom build image is slow
>>>>
>>>>
>>>>
>>>> Hello
>>>>
>>>>
>>>>
>>>> I have two qcow2 images uploaded to Glance. One is the CentOS 7 cloud image
>>>> downloaded from centos.org. The other is custom-built from the CentOS
>>>> 7 DVD. When I create Cinder volumes from them, volume creation from
>>>> the custom-built image is very, very slow.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> CentOS qcow2:
>>>>
>>>>
>>>>
>>>> 2017-07-31 21:42:44.287 881609 INFO cinder.image.image_utils
>>>> [req-ea2d7b12-ae9e-45b2-8b4b-ea8465497d5a
>>>> e090e605170a778610438bfabad7aa7764d0a77ef520ae392e2b59074c9f88cf
>>>> 490910c1d4e1486d8e3a62d7c0ae698e - d67a18e70dd9467db25b74d33feaad6d
>>>> default] *Converted 8192.00 MB image at 253.19 MB/s*
>>>>
>>>>
>>>>
>>>> Custom built qcow2:
>>>>
>>>> INFO cinder.image.image_utils [req-032292d8-1500-474d-95c7-2e8424e2b864
>>>> e090e605170a778610438bfabad7aa7764d0a77ef520ae392e2b59074c9f88cf
>>>> 490910c1d4e1486d8e3a62d7c0ae698e - d67a18e70dd9467db25b74d33feaad6d
>>>> default] *Converted 10240.00 MB image at 32.22 MB/s*
>>>>
>>>>
>>>>
>>>> I used the following command to create the qcow2 file
>>>>
>>>> qemu-img create -f qcow2 custom.qcow2 10G
>>>>
>>>>
>>>>
>>>> What am I missing ?
>>>>
>>>>
>>>>
>>>> Thanks
>>>> Paras.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>


Re: [Openstack-operators] custom build image is slow

2017-08-01 Thread Paras pradhan
Also, this is what I've noticed with the CentOS cloud image I downloaded: if
I add a few packages (around a GB), the size goes up to 8 GB from 1.3 GB.
Running dd if=/dev/zero of=temp;sync;rm temp failed due to the size of the
disk on the cloud image, which is 10 G; zeroing failed with "No space left on
device". Any other options?

Thanks
Paras.
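Worth noting: the "No space left on device" error from dd is expected here. Zero-filling only stops when the free space is exhausted, so the usual pattern tolerates the error and then removes the fill file (dd if=/dev/zero of=temp || true; sync; rm temp). A safe sketch of that error-tolerant pattern, using /dev/full (a standard Linux device that fails every write with ENOSPC) so nothing actually fills up:

```shell
# /dev/full fails every write with ENOSPC, mimicking what dd hits when it
# has genuinely filled the free space -- the error itself is the signal
# that the job is done, so we ignore it and carry on.
if dd if=/dev/zero of=/dev/full bs=1M count=1 2>/dev/null; then
    zerofill_status="unexpected: write to /dev/full succeeded"
else
    zerofill_status="ENOSPC ignored, continuing"
fi
sync
echo "$zerofill_status"
```

The zerofree tool Tomáš mentions avoids the full-disk moment entirely, but only works on an unmounted (or read-only) ext filesystem.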

On Tue, Aug 1, 2017 at 9:43 AM, Paras pradhan <pradhanpa...@gmail.com>
wrote:

> I ran virt-sparsify. So before running virt-sparsify it was 6.0GB and
> after it is 1.3GB.
>
> Thanks
> Paras.
>
> On Tue, Aug 1, 2017 at 2:53 AM, Tomáš Vondra <von...@homeatcloud.cz>
> wrote:
>
>> Hi!
>>
>> How big are the actual image files? Because qcow2 is a sparse format, it
>> does not store zeroes. If the free space in one image is zeroed out, it
>> will convert much faster. If that is the problem, use „dd if=/dev/zero
>> of=temp;sync;rm temp“ or zerofree.
>>
>> Tomas
>>
>>
>>
>> *From:* Paras pradhan [mailto:pradhanpa...@gmail.com]
>> *Sent:* Monday, July 31, 2017 11:54 PM
>> *To:* openstack-operators@lists.openstack.org
>> *Subject:* [Openstack-operators] custom build image is slow
>>
>>
>>
>> Hello
>>
>>
>>
>> I have two qcow2 images uploaded to Glance. One is the CentOS 7 cloud image
>> downloaded from centos.org. The other is custom-built from the CentOS
>> 7 DVD. When I create Cinder volumes from them, volume creation from the
>> custom-built image is very, very slow.
>>
>>
>>
>>
>>
>> CentOS qcow2:
>>
>>
>>
>> 2017-07-31 21:42:44.287 881609 INFO cinder.image.image_utils
>> [req-ea2d7b12-ae9e-45b2-8b4b-ea8465497d5a e090e605170a778610438bfabad7aa
>> 7764d0a77ef520ae392e2b59074c9f88cf 490910c1d4e1486d8e3a62d7c0ae698e -
>> d67a18e70dd9467db25b74d33feaad6d default] *Converted 8192.00 MB image at
>> 253.19 MB/s*
>>
>>
>>
>> Custom built qcow2:
>>
>> INFO cinder.image.image_utils [req-032292d8-1500-474d-95c7-2e8424e2b864
>> e090e605170a778610438bfabad7aa7764d0a77ef520ae392e2b59074c9f88cf
>> 490910c1d4e1486d8e3a62d7c0ae698e - d67a18e70dd9467db25b74d33feaad6d
>> default] *Converted 10240.00 MB image at 32.22 MB/s*
>>
>>
>>
>> I used the following command to create the qcow2 file
>>
>> qemu-img create -f qcow2 custom.qcow2 10G
>>
>>
>>
>> What am I missing ?
>>
>>
>>
>> Thanks
>> Paras.
>>
>>
>>
>>
>>
>
>


Re: [Openstack-operators] custom build image is slow

2017-08-01 Thread Paras pradhan
I ran virt-sparsify. So before running virt-sparsify it was 6.0GB and after
it is 1.3GB.

Thanks
Paras.

On Tue, Aug 1, 2017 at 2:53 AM, Tomáš Vondra <von...@homeatcloud.cz> wrote:

> Hi!
>
> How big are the actual image files? Because qcow2 is a sparse format, it
> does not store zeroes. If the free space in one image is zeroed out, it
> will convert much faster. If that is the problem, use „dd if=/dev/zero
> of=temp;sync;rm temp“ or zerofree.
>
> Tomas
>
>
>
> *From:* Paras pradhan [mailto:pradhanpa...@gmail.com]
> *Sent:* Monday, July 31, 2017 11:54 PM
> *To:* openstack-operators@lists.openstack.org
> *Subject:* [Openstack-operators] custom build image is slow
>
>
>
> Hello
>
>
>
> I have two qcow2 images uploaded to Glance. One is the CentOS 7 cloud image
> downloaded from centos.org. The other is custom-built from the CentOS
> 7 DVD. When I create Cinder volumes from them, volume creation from the
> custom-built image is very, very slow.
>
>
>
>
>
> CentOS qcow2:
>
>
>
> 2017-07-31 21:42:44.287 881609 INFO cinder.image.image_utils
> [req-ea2d7b12-ae9e-45b2-8b4b-ea8465497d5a e090e605170a778610438bfabad7aa
> 7764d0a77ef520ae392e2b59074c9f88cf 490910c1d4e1486d8e3a62d7c0ae698e -
> d67a18e70dd9467db25b74d33feaad6d default] *Converted 8192.00 MB image at
> 253.19 MB/s*
>
>
>
> Custom built qcow2:
>
> INFO cinder.image.image_utils [req-032292d8-1500-474d-95c7-2e8424e2b864
> e090e605170a778610438bfabad7aa7764d0a77ef520ae392e2b59074c9f88cf
> 490910c1d4e1486d8e3a62d7c0ae698e - d67a18e70dd9467db25b74d33feaad6d
> default] *Converted 10240.00 MB image at 32.22 MB/s*
>
>
>
> I used the following command to create the qcow2 file
>
> qemu-img create -f qcow2 custom.qcow2 10G
>
>
>
> What am I missing ?
>
>
>
> Thanks
> Paras.
>
>
>
>
>
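One thing worth checking on the custom image: qemu-img create only sets the virtual size, and the file on disk grows as blocks are written. A sketch that shows both numbers for a fresh image, guarded so it degrades gracefully when qemu-img is not installed (its presence on the build host is an assumption):

```shell
# Compare a fresh qcow2's virtual size with its tiny on-disk footprint.
if command -v qemu-img >/dev/null 2>&1; then
    img=$(mktemp --suffix=.qcow2)
    qemu-img create -f qcow2 "$img" 10G >/dev/null
    qemu-img info "$img"          # reports "virtual size" and "disk size"
    rm -f "$img"
    qemu_check="done"
else
    qemu_check="skipped (qemu-img not installed)"
fi
echo "$qemu_check"
```

Running qemu-img info on the finished custom image the same way shows how much of the 10 G virtual size is actually allocated, which is what the conversion step has to move.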


Re: [Openstack-operators] Ceilometer and disk IO

2017-04-13 Thread Paras pradhan
Thanks for the reply. I have seen people recommending Ceph as a backend for
Gnocchi, but we don't use Ceph yet, and it looks like Gnocchi does not
support Cinder backends. We use a Dell EQ SAN for block devices.

-Paras.

On Tue, Apr 11, 2017 at 9:15 PM, Alex Hubner <a...@hubner.net.br> wrote:

> Ceilometer can be a pain in the a* if not properly configured/designed,
> especially when things start to grow. I've already seen the exact same
> situation you described on two different installations. To make things more
> complicated, some OpenStack distributions use MongoDB as a storage backend
> and do not consider a dedicated infrastructure for Ceilometer, relegating
> this important service to live, by default, on the controller nodes...
> worse: not clearly agreeing on what should be done when the service starts
> to stall, rather than simply adding more controller nodes... (yes Red Hat,
> I'm looking at you). You might consider using Gnocchi and Ceph storage
> for telemetry, as was already suggested.
>
> For my 2 cents, here's a nice talk on the matter:
> https://www.openstack.org/videos/video/capacity-planning-saving-money-and-maximizing-efficiency-in-openstack-using-gnocchi-and-ceilometer
>
> []'s
> Hubner
>
> On Sat, Apr 8, 2017 at 2:00 PM, Paras pradhan <pradhanpa...@gmail.com>
> wrote:
>
>> Hello
>>
>> What kind of storage backend do you guys use if you see disk I/O
>> bottlenecks when storing Ceilometer events and metrics? In my current
>> configuration I am using 300 GB 10K SAS drives (in hardware RAID 1), and
>> the iostat report does not look good (up to 100% utilization), with
>> Ceilometer consuming high CPU and memory. Would it help to add more
>> spindles and move to RAID 10?
>>
>> Thanks!
>> Paras.
>>


Re: [Openstack-operators] [Fuel] Network Documentation

2015-10-30 Thread Paras pradhan
Thanks, guys, for the link. Looking at the architecture, if a floating IP is
attached to an instance, its traffic goes out to the public network without
hitting the network node. Is that true even if you don't assign public IPs
to your compute nodes?

-Paras.

On Fri, Oct 30, 2015 at 8:33 AM, Alex Schultz <aschu...@mirantis.com> wrote:

> Hey Paras,
>
> On Thu, Oct 29, 2015 at 4:59 PM, Paras pradhan <pradhanpa...@gmail.com>
> wrote:
> > Hi,
> >
> > Is there any documentation or reference architecture on how Fuel 7
> > deploys DVR and other networking components?
>
> This might be what you're looking for:
>
> https://docs.mirantis.com/openstack/fuel/fuel-7.0/reference-architecture.html#neutron-with-dvr
>
> >
> > Thanks
> > Paras.
> >
> >
>
> -Alex
>


[Openstack-operators] [Fuel] Network Documentation

2015-10-29 Thread Paras pradhan
Hi,

Is there any documentation or reference architecture on how Fuel 7 deploys
DVR and other networking components?

Thanks
Paras.


Re: [Openstack-operators] floatin ip issue

2014-11-03 Thread Paras pradhan
George,

Disabled NAT on the compute node, and now I can ping/SSH to the instance
using the floating IP. Do you see anything wrong with the NAT rules here:
http://paste.openstack.org/show/128754/ ?

Thanks
Paras.

On Fri, Oct 31, 2014 at 6:20 AM, George Shuklin george.shuk...@gmail.com
wrote:

  I was wrong, sorry. Floating IPs are assigned as /32 on the external
 interface inside the network namespace. The single idea I have now is to
 remove all the iptables NAT rules (it's destructive up to the moment of a
 network node reboot or router delete/create) and check whether the address
 replies to ping.

 If 'yes' - the problem is in routing/NAT.
 If 'no' - the problem is outside the OpenStack router (external net,
 provider routing, etc.).


 On 10/29/2014 06:23 PM, Paras pradhan wrote:

 Hi George,


   You mean .193 and .194 should be in different subnets?
  192.168.122.193/24 is reserved from the allocation pool, and
  192.168.122.194/32 is the floating IP.

  Here are the outputs for the commands


 *neutron port-list --device-id=8725dd16-8831-4a09-ae98-6c5342ea501f*

 | id                                   | name | mac_address       | fixed_ips                                                                              |
 | 6f835de4-c15b-44b8-9002-160ff4870643 |      | fa:16:3e:85:dc:ee | {"subnet_id": "0189699c-8ffc-44cb-aebc-054c8d6001ee", "ip_address": "192.168.122.193"} |
 | be3c4294-5f16-45b6-8c21-44b35247d102 |      | fa:16:3e:72:ae:da | {"subnet_id": "d01a6522-063d-40ba-b4dc-5843177aab51", "ip_address": "10.10.0.1"}       |

 *neutron floatingip-list*

 | id                                   | fixed_ip_address | floating_ip_address | port_id                              |
 | 55b00e9c-5b79-4553-956b-e342ae0a430a | 10.10.0.9        | 192.168.122.194     | 82bcbb91-827a-41aa-9dd9-cb7a4f8e7166 |

 *neutron net-list*

 | id                                   | name     | subnets                                               |
 | dabc2c18-da64-467b-a2ba-373e460444a7 | demo-net | d01a6522-063d-40ba-b4dc-5843177aab51 10.10.0.0/24     |
 | ceaaf189-5b6f-4215-8686-fbdeae87c12d | ext-net  | 0189699c-8ffc-44cb-aebc-054c8d6001ee 192.168.122.0/24 |

 *neutron subnet-list*

 | id                                   | name        | cidr             | allocation_pools                                       |
 | d01a6522-063d-40ba-b4dc-5843177aab51 | demo-subnet | 10.10.0.0/24     | {"start": "10.10.0.2", "end": "10.10.0.254"}           |
 | 0189699c-8ffc-44cb-aebc-054c8d6001ee | ext-subnet  | 192.168.122.0/24 | {"start": "192.168.122.193", "end": "192.168.122.222"} |


  P.S: External subnet is 192.168.122.0/24 and internal vm instance's
 subnet is 10.10.0.0/24


  Thanks

 Paras.

 On Mon, Oct 27, 2014 at 5:51 PM, George Shuklin george.shuk...@gmail.com
 wrote:


 I don't like this:

 15: qg-d351f21a-08: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
 UNKNOWN group default
 inet 192.168.122.193/24 brd 192.168.122.255 scope global
 qg-d351f21a-08
valid_lft forever preferred_lft forever
 inet 192.168.122.194/32 brd 192.168.122.194 scope global
 qg-d351f21a-08
valid_lft forever preferred_lft forever

  Why do you have two IPs on the same interface with different netmasks?

 I just rechecked it on our installations - it should not happen.

 Next: either this is a bug, or an uncleaned network node (lesser bug),
 or someone messing with neutron.

 Start from neutron:

 show ports for router:

 neutron port-list --device-id=router-uuid-here
 neutron

Re: [Openstack-operators] floatin ip issue

2014-11-03 Thread Paras pradhan
George,

My issue has been resolved. The conflict was created by the local virbr0
interface's NAT rules. After removing this interface from my compute node,
all looks good now.

Thanks for your help
Paras.
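For anyone hitting the same thing: libvirt's default NAT network creates the virbr0 bridge plus MASQUERADE rules on the host, and those can intercept traffic that Neutron's own NAT should handle. A read-only sketch of the first check (interface presence only; inspecting the rules themselves needs root):

```shell
# Detect libvirt's default NAT bridge. If present on a compute node, its
# iptables rules ('iptables -t nat -S', as root) are worth reviewing.
if [ -d /sys/class/net/virbr0 ]; then
    virbr0_state="virbr0 present: review its NAT rules"
else
    virbr0_state="virbr0 not present"
fi
echo "$virbr0_state"
```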


On Mon, Nov 3, 2014 at 10:01 AM, Paras pradhan pradhanpa...@gmail.com
wrote:

 George,

 Disabled NAT on the compute node, and now I can ping/SSH to the instance
 using the floating IP. Do you see anything wrong with the NAT rules here:
 http://paste.openstack.org/show/128754/ ?

 Thanks
 Paras.

 On Fri, Oct 31, 2014 at 6:20 AM, George Shuklin george.shuk...@gmail.com
 wrote:

  I was wrong, sorry. Floating IPs are assigned as /32 on the external
 interface inside the network namespace. The single idea I have now is to
 remove all the iptables NAT rules (it's destructive up to the moment of a
 network node reboot or router delete/create) and check whether the address
 replies to ping.

 If 'yes' - the problem is in routing/NAT.
 If 'no' - the problem is outside the OpenStack router (external net,
 provider routing, etc.).



Re: [Openstack-operators] floatin ip issue

2014-10-27 Thread Paras pradhan
*Yes, it got its IP, which is 192.168.122.194 in the paste below.*

--

root@juno2:~# ip netns exec qrouter-34f3b828-b7b8-4f44-b430-14d9c5bd0d0c ip
-4 a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default

inet 127.0.0.1/8 scope host lo

   valid_lft forever preferred_lft forever

14: qr-ac50d700-29: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UNKNOWN group default

inet 50.50.50.1/24 brd 50.50.50.255 scope global qr-ac50d700-29

   valid_lft forever preferred_lft forever

15: qg-d351f21a-08: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UNKNOWN group default

inet 192.168.122.193/24 brd 192.168.122.255 scope global qg-d351f21a-08

   valid_lft forever preferred_lft forever

inet 192.168.122.194/32 brd 192.168.122.194 scope global qg-d351f21a-08

   valid_lft forever preferred_lft forever

---


*stdbuf -e0 -o0 ip net exec qrouter... /bin/bash gives me the following:*


--


root@juno2:~# ifconfig

lo        Link encap:Local Loopback

  inet addr:127.0.0.1  Mask:255.0.0.0

  inet6 addr: ::1/128 Scope:Host

  UP LOOPBACK RUNNING  MTU:65536  Metric:1

  RX packets:2 errors:0 dropped:0 overruns:0 frame:0

  TX packets:2 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:0

  RX bytes:168 (168.0 B)  TX bytes:168 (168.0 B)


qg-d351f21a-08 Link encap:Ethernet  HWaddr fa:16:3e:79:0f:a2

  inet addr:192.168.122.193  Bcast:192.168.122.255
Mask:255.255.255.0

  inet6 addr: fe80::f816:3eff:fe79:fa2/64 Scope:Link

  UP BROADCAST RUNNING  MTU:1500  Metric:1

  RX packets:2673 errors:0 dropped:0 overruns:0 frame:0

  TX packets:112 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:0

  RX bytes:205377 (205.3 KB)  TX bytes:6537 (6.5 KB)


qr-ac50d700-29 Link encap:Ethernet  HWaddr fa:16:3e:7e:6d:f3

  inet addr:50.50.50.1  Bcast:50.50.50.255  Mask:255.255.255.0

  inet6 addr: fe80::f816:3eff:fe7e:6df3/64 Scope:Link

  UP BROADCAST RUNNING  MTU:1500  Metric:1

  RX packets:345 errors:0 dropped:0 overruns:0 frame:0

  TX packets:1719 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:0

  RX bytes:27377 (27.3 KB)  TX bytes:164541 (164.5 KB)

--


Thanks

Paras.



On Sat, Oct 25, 2014 at 3:18 AM, George Shuklin george.shuk...@gmail.com
wrote:

  Check whether the qrouter got the floating IP inside its network namespace
 (ip net exec qrouter... ip -4 a), or just bash into it (stdbuf -e0 -o0 ip
 net exec qrouter... /bin/bash) and play with it like a normal server.



 On 10/24/2014 07:38 PM, Paras pradhan wrote:

 Hello,

  I assigned a floating IP to an instance, but I can't ping it. The
 instance can reach the internet with no problem, yet I can't SSH or send
 ICMP to it. It's not a security group issue.

  On my network node that runs the L3 agent, I can see the qrouter. The
 external subnet looks like this:

  allocation-pool start=192.168.122.193,end=192.168.122.222 --disable-dhcp
 --gateway 192.168.122.1 192.168.122.0/24

 I can ping 192.168.122.193 using: ip netns exec
 qrouter-34f3b828-b7b8-4f44-b430-14d9c5bd0d0c ping 192.168.122.193

 but not 192.168.122.194 (which is the floating ip)

 Running tcpdump on the interface that connects to the external world, I can
 see the ICMP requests but no replies from the interface:


  11:36:40.360255 IP 192.168.122.1 > 192.168.122.194: ICMP echo request,
 id 2589, seq 312, length 64

  11:36:41.360222 IP 192.168.122.1 > 192.168.122.194: ICMP echo request,
 id 2589, seq 313, length 64


  Ideas?

 Thanks

 Paras.



