[Openstack] Packstack different ethernet device names

2018-09-26 Thread Danny Marc Rotscher
Dear all,

 

Is it possible to address multiple network device names in the answer file,
for example for the tunnel interface?

My controller runs on a VM and has device names of the form eth*, while my
hypervisor hosts use the new naming scheme, something like enp*.

I know I could switch the hypervisor hosts back to eth*, but that is the
last resort I would prefer.

 

Kind regards,

Danny

 





[Openstack] Create a PNDA on openstack

2018-09-26 Thread Suma Gowda
I need to create a PNDA on OpenStack.
1. I installed OpenStack with DevStack in VirtualBox. What do I have to do
after that? Could you send me a screenshot and one example? I am a beginner
to this. What are the CLI, Heat, etc.?


[Openstack] [OpenStack][Neutron][SFC] Regarding SFC support on provider VLAN N/W

2018-09-26 Thread Amit Kumar
Hi All,

We are using the Ocata release and have installed networking-sfc for Service
Function Chaining functionality. The installation was successful, but when we
then tried to create port pairs on a VLAN network it failed. Creating
port pairs on a VXLAN-based network worked. So, is SFC
functionality supported only on VXLAN-based networks?

Regards,
Amit
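
For context, port pairs and chains in networking-sfc are created roughly as
follows. This is an illustrative sketch with made-up port and chain names,
based on the networking-sfc CLI of that era, and not necessarily the exact
commands used above:

  neutron port-pair-create --ingress p1 --egress p2 pp1
  neutron port-pair-group-create --port-pair pp1 ppg1
  neutron port-chain-create --port-pair-group ppg1 pc1

Only the first step, port-pair creation, is the one reported to fail on the
VLAN network.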


Re: [Openstack] Create a PNDA on openstack

2018-09-26 Thread Eugen Block

Hi,

to be honest, I think if you are supposed to work with PNDA and
OpenStack is your platform, you should at least get an overview of what
components it has and what they are for. No example or screenshot will
help you understand how those components interact.
The PNDA pages mention OpenStack Mitaka, so you should study the
respective docs [1]. Guides for installation, operations and
administration are available for different platforms (Ubuntu,
openSUSE/SLES and RedHat/CentOS); pick the one suitable for your
environment and learn the basics (creating tenants and users,
Neutron networking, Nova compute, etc.).


In addition to the mandatory services (Neutron, Nova, Glance, Cinder)
you'll need Swift (object storage) and Heat (orchestration).
Maybe you already get the idea: this is not something that can be covered
by "send me a screenshot". ;-)
Another mandatory requirement is a basic understanding of the command
line interface (CLI) [2], which enables you to access and manage your
OpenStack cloud.
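
To give you a flavour of the CLI, a few illustrative commands are shown
below. This is a rough sketch only; it assumes the unified
python-openstackclient is installed and credentials are loaded, and all
resource names are made up:

  source openrc                       # load your cloud credentials
  openstack project create pnda
  openstack user create --project pnda --password secret pnda-user
  openstack image create --disk-format qcow2 --file xenial.qcow2 pnda-base
  openstack network list
  openstack stack list                # Heat stacks, once orchestration is set up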


A quick search will also reveal some tutorials or videos [3] about the  
basic concepts.


Regards,
Eugen


[1] https://docs.openstack.org/mitaka/
[2] https://docs.openstack.org/mitaka/cli-reference/
[3] https://opensource.com/business/14/2/openstack-beginners-guide

Quoting Suma Gowda:


I need to create a PNDA on OpenStack.
1. I installed OpenStack with DevStack in VirtualBox. What do I have to do
after that? Could you send me a screenshot and one example? I am a beginner
to this. What are the CLI, Heat, etc.?







Re: [Openstack] [openstack-dev] [kolla] ceph osd deploy fails

2018-09-26 Thread Eduardo Gonzalez
CC openstack so others can see the thread

On Wed, Sep 26, 2018 at 15:44, Eduardo Gonzalez () wrote:

> Hi, I'm not sure at this moment what your issue may be. Using external
> ceph with kolla-ansible is supported.
> Just to make sure: rocky is not released yet in kolla/kolla-ansible, only a
> release candidate, with a proposal for release candidate 2 this week.
>
> To dig more into your issue, what is your config? Anything out of the
> ordinary on the servers? What steps were taken to define the OSD disks?
>
> Regards
>
> On Wed, Sep 26, 2018 at 15:08, Florian Engelmann
> (<florian.engelm...@everyware.ch>) wrote:
>
>> Dear Eduardo,
>>
>> thank you for your fast response! I recognized those fixes and we are
>> using stable/rocky from yesterday because of those commits (using the
>> tarballs - not the git repository).
>>
>> I guess you are talking about:
>>
>> https://github.com/openstack/kolla-ansible/commit/ef6921e6d7a0922f68ffb05bd022aab7c2882473
>>
>> I saw that one in kolla as well:
>>
>> https://github.com/openstack/kolla/commit/60f0ea10bfdff12d847d9cb3b51ce02ffe96d6e1
>>
>> So we are using Ceph 12.2.4 right now and everything up to the 24th of
>> September in stable/rocky.
>>
>> Anything else we could test/change?
>>
>> We are at the point of deploying Ceph separately from kolla (using
>> ceph-ansible) because we need a working environment tomorrow. Do you see
>> a real chance to get Ceph up and running via kolla-ansible today?
>>
>>
>> All the best,
>> Flo
>>
>>
>>
>>
>> Am 26.09.18 um 14:44 schrieb Eduardo Gonzalez:
>> > Hi, what version of rocky are you using? Maybe it was in the middle of a
>> > backport which temporarily broke ceph.
>> >
>> > Could you try latest stable/rocky branch?
>> >
>> > It is now working properly.
>> >
>> > Regards
>> >
>> > On Wed, Sep 26, 2018, 2:32 PM Florian Engelmann
>> > <florian.engelm...@everyware.ch> wrote:
>> >
>> > Hi,
>> >
>> > I tried to deploy Rocky in a multinode setup but ceph-osd fails
>> with:
>> >
>> >
>> > failed: [xxx-poc2] (item=[0, {u'fs_uuid': u'',
>> u'bs_wal_label':
>> > u'', u'external_journal': False, u'bs_blk_label': u'',
>> > u'bs_db_partition_num': u'', u'journal_device': u'', u'journal':
>> u'',
>> > u'partition': u'/dev/nvme0n1', u'bs_wal_partition_num': u'',
>> > u'fs_label': u'', u'journal_num': 0, u'bs_wal_device': u'',
>> > u'partition_num': u'1', u'bs_db_label': u'',
>> u'bs_blk_partition_num':
>> > u'', u'device': u'/dev/nvme0n1', u'bs_db_device': u'',
>> > u'partition_label': u'KOLLA_CEPH_OSD_BOOTSTRAP_BS',
>> u'bs_blk_device':
>> > u''}]) => {
>> >   "changed": true,
>> >   "item": [
>> >   0,
>> >   {
>> >   "bs_blk_device": "",
>> >   "bs_blk_label": "",
>> >   "bs_blk_partition_num": "",
>> >   "bs_db_device": "",
>> >   "bs_db_label": "",
>> >   "bs_db_partition_num": "",
>> >   "bs_wal_device": "",
>> >   "bs_wal_label": "",
>> >   "bs_wal_partition_num": "",
>> >   "device": "/dev/nvme0n1",
>> >   "external_journal": false,
>> >   "fs_label": "",
>> >   "fs_uuid": "",
>> >   "journal": "",
>> >   "journal_device": "",
>> >   "journal_num": 0,
>> >   "partition": "/dev/nvme0n1",
>> >   "partition_label": "KOLLA_CEPH_OSD_BOOTSTRAP_BS",
>> >   "partition_num": "1"
>> >   }
>> >   ]
>> > }
>> >
>> > MSG:
>> >
>> > Container exited with non-zero return code 2
>> >
>> > We tried to debug the error message by starting the container with a
>> > modified endpoint but we are stuck at the following point right now:
>> >
>> >
>> > docker run  -e "HOSTNAME=10.0.153.11" -e "JOURNAL_DEV=" -e
>> > "JOURNAL_PARTITION=" -e "JOURNAL_PARTITION_NUM=0" -e
>> > "KOLLA_BOOTSTRAP=null" -e "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS" -e
>> > "KOLLA_SERVICE_NAME=bootstrap-osd-0" -e "OSD_BS_BLK_DEV=" -e
>> > "OSD_BS_BLK_LABEL=" -e "OSD_BS_BLK_PARTNUM=" -e "OSD_BS_DB_DEV=" -e
>> > "OSD_BS_DB_LABEL=" -e "OSD_BS_DB_PARTNUM=" -e
>> "OSD_BS_DEV=/dev/nvme0n1"
>> > -e "OSD_BS_LABEL=KOLLA_CEPH_OSD_BOOTSTRAP_BS" -e "OSD_BS_PARTNUM=1"
>> -e
>> > "OSD_BS_WAL_DEV=" -e "OSD_BS_WAL_LABEL=" -e "OSD_BS_WAL_PARTNUM=" -e
>> > "OSD_DEV=/dev/nvme0n1" -e "OSD_FILESYSTEM=xfs" -e
>> > "OSD_INITIAL_WEIGHT=1"
>> > -e "OSD_PARTITION=/dev/nvme0n1" -e "OSD_PARTITION_NUM=1" -e
>> > "OSD_STORETYPE=bluestore" -e "USE_EXTERNAL_JOURNAL=false"   -v
>> > "/etc/kolla//ceph-osd/:/var/lib/kolla/config_files/:ro" -v
>> > "/etc/localtime:/etc/localtime:ro" -v "/dev/:/dev/" -v
>> > "kolla_logs:/var/log/kolla/" -ti --privileged=true --entrypoint
>> > /bin/bash
>> >
>> 10.0.128.7:5000/openstack/openstack-ko

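For context on the OSD-disk question above: kolla-ansible identifies OSD
disks by GPT partition label, so the disk-preparation step typically looks
roughly like the sketch below. This is illustrative only; the device name is
taken from the log above, and the label must match what the playbooks
expect, here the bluestore bootstrap label shown in the failed item:

  parted /dev/nvme0n1 -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS 1 -1
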
Re: [Openstack] Packstack different ethernet device names

2018-09-26 Thread Remo Mattei
Yes, Packstack has a section to map the NICs.

I will have to check my old config on my computer and then share it.

Remo
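
The relevant answer-file parameters are roughly the following (a sketch
only; the values are made up, and whether they can differ per host depends
on your Packstack release, so double-check the generated answer file):

  # interface used for the tunnel endpoint by the OVS agent
  CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
  # bridge-to-interface mapping for provider networks
  CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:enp2s0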

> On 26 Sep 2018, at 02:56, Danny Marc Rotscher
> wrote:
> 
> Dear all,
>  
> Is it possible to address multiple network device names in the answer file,
> for example for the tunnel interface?
> My controller runs on a VM and has device names of the form eth*, while my
> hypervisor hosts use the new naming scheme, something like enp*.
> I know I could switch the hypervisor hosts back to eth*, but that is the
> last resort I would prefer.
>  
> Kind regards,
> Danny
>  


Re: [Openstack] Can any user add or delete OpenStack Swift middleware?

2018-09-26 Thread Qiao Kang
Kota,

Sorry for the late response, see more below:

On Fri, Sep 21, 2018 at 2:59 AM Kota TSUYUZAKI
 wrote:
>
> Hi Qiao,
>
> > Thanks! I'm interested and would like to join, as well as contribute!
> >
>
> One example, namely how the multi-READ works, is around [1]: the storlets
> middleware can make a subrequest against the backend Swift and then
> attach the request input to the application in the Docker container by
> passing it a readable file descriptor [2][3][4]*.
> After all of the preparation for the invocation, the input descriptors will
> be readable in the storlet app as the InputFile.
>
> * At first, the runtime prepares the extra source stub at [2], then creates
> a set of pipes for each source to communicate with the app
> inside the docker daemon [3]; then the runtime module reads the extra data
> from the Swift GET and flushes all buffers into the descriptor [4].
>
> 1: https://github.com/openstack/storlets/blob/master/storlets/swift_middleware/handlers/proxy.py#L294-L305
> 2: https://github.com/openstack/storlets/blob/master/storlets/gateway/gateways/docker/runtime.py#L571-L581
> 3: https://github.com/openstack/storlets/blob/master/storlets/gateway/gateways/docker/runtime.py#L665-L666
> 4: https://github.com/openstack/storlets/blob/master/storlets/gateway/gateways/docker/runtime.py#L833-L840
>
> Following that mechanism, IMO, what we can do to enable multi-out is:
>
> - add the capability to create PUT subrequests in the swift_middleware
> module (and define the new API header)
> - create the extra writable communication fds in the storlets runtime
> (perhaps the storlets daemon also needs to be changed)
> - pass all data from the writable fds to the sub-PUT request input
>
>
> If you have a nicer idea than mine, it's always welcome, though. :)

I think your approach is clear and straightforward. One quick question:
> - create the extra writable communication fds in the storlets runtime
> (perhaps the storlets daemon also needs to be changed)
So the Storlet app will write to those fds? Are these fds temporary, and do
they need to be destroyed after the PUT request in step 3?
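
For concreteness, the middleware-side PUT subrequest could look roughly like
the sketch below. It uses Swift's generic make_subrequest helper, and the
function name, target path and "-compressed" naming convention are invented
for illustration; this is not the actual storlets code:

  from swift.common.wsgi import make_subrequest

  def put_extra_copy(app, env, account, container, obj, body):
      # Send an internal PUT for a second, derived object via a subrequest.
      # The "-compressed" suffix is just an invented naming convention.
      path = '/v1/%s/%s/%s-compressed' % (account, container, obj)
      sub = make_subrequest(env, method='PUT', path=path, body=body,
                            swift_source='Storlets')
      return sub.get_response(app)

Here app would be the next WSGI application in the proxy pipeline and env the
WSGI environment of the original request.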

>
> > Another potential use case: imagine I want to compress objects upon
> > PUTs using two different algorithms X and Y, and use the future
> > 'multi-write' feature to store three objects upon any single PUT
> > (original copy, X-compressed copy and Y-compressed copy). I can
> > install two Storlets which implement X and Y respectively. However,
> > seems Storlets engine can only invoke one per PUT, so this is still
> > not feasible. Is that correct?
> >
>
> It sounds interesting. As you know, yes, only one Storlet application can be
> invoked per PUT request.
> On the other hand, Storlets is capable of running several applications as
> you want.
> One idea using that capability is to develop an application like this:
>
> - the Storlet app spawns several threads, each with its own output descriptor
> - the Storlet app reads the input stream, then pushes the data into the threads
> - each thread performs whatever you want (one does X compression, the other
> does Y compression),
>   then writes its own result to its output descriptor
>
> It might work for your use case.

Sounds great, I guess it should work as well.
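
For what it's worth, a minimal, self-contained sketch of that fan-out
pattern is below. It uses only plain Python file objects and the standard
library; the real Storlet input/output descriptors would take the place of
input_stream and the two output paths, so those names are assumptions, not
the storlets API:

  import bz2
  import gzip
  import queue
  import threading

  def fan_out(input_stream, gz_path, bz2_path, chunk_size=65536):
      """Read the input once and write two differently compressed copies."""
      queues = [queue.Queue(maxsize=8), queue.Queue(maxsize=8)]

      def writer(q, opener, path):
          # Each thread drains its own queue and writes one compressed copy.
          with opener(path, 'wb') as out:
              while True:
                  chunk = q.get()
                  if chunk is None:      # sentinel: no more data
                      break
                  out.write(chunk)

      threads = [
          threading.Thread(target=writer, args=(queues[0], gzip.open, gz_path)),
          threading.Thread(target=writer, args=(queues[1], bz2.open, bz2_path)),
      ]
      for t in threads:
          t.start()

      # Read the input stream once and push every chunk to both writers.
      while True:
          chunk = input_stream.read(chunk_size)
          for q in queues:
              q.put(chunk if chunk else None)
          if not chunk:
              break
      for t in threads:
          t.join()

Called as fan_out(open('obj', 'rb'), 'obj.gz', 'obj.bz2'), it would produce
a gzip- and a bz2-compressed copy from a single pass over the input.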

I'm also concerned about "performance isolation" in Storlets. For
instance, is it possible for a user to launch several very
heavily loaded Storlets apps that consume lots of CPU/memory resources and
affect other users? Does Storlets provide performance/resource isolation?

Thanks,
Qiao

>
> Thanks,
> Kota
>
>
> (2018/09/19 5:52), Qiao Kang wrote:
> > Dear Kota,
> >
> > On Mon, Sep 17, 2018 at 11:43 PM Kota TSUYUZAKI
> >  wrote:
> >>
> >> Hi Qiao,
> >>
> >>> I know Storlets can provide user-defined computation functionalities,
> >>> but I guess some capabilities can only be achieved using middleware.
> >>> For example, a user may want such a feature: upon each PUT request, it
> >>> creates a compressed copy of the object and stores both the original
> >>> copy and the compressed copy. It's feasible using middleware but I don't
> >>> think Storlets provides such a capability.
> >>
> >> Interesting. Exactly, currently it's not supported to write to multiple
> >> objects for a PUT request, but as with other middlewares we could add
> >> that capability to Storlets if you prefer.
> >> Right now only the multi-read (i.e. GET from multiple sources) is available,
> >> and I think we would be able to expand the logic to PUT requests too.
> >> IIRC, in those days we had a discussion on sort of the
> >> multi-out use cases, and I'm sure the data structures inside Storlets are
> >> designed to be capable of that expansion. At that time we called them
> >> "Tee" applications on Storlets; I could not find the
> >> historical discussion logs about how to implement it though, sorry. I believe
> >> that would be a use case for storlets if you prefer user-defined
> >> application flexibility rather than operator-defined
> >> Swi