[opnfv-tech-discuss] [OPNFV] Testing group weekly meeting

2017-07-10 Thread morgan.richomme

Hi

Reminder: this week => APAC meeting on Wednesday.

We are keeping the Thursday slot, however, for an ad hoc meeting with Bitergia.

See the agenda: 
https://wiki.opnfv.org/display/meetings/Test+Working+Group+Weekly+Meeting


/Morgan


_


This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] reply: [dovetail] weekly meeting agenda

2017-07-10 Thread Tianhongbo
Hi Trevor:

Thanks for your effort. That helps a lot.

I will put that on the agenda.

Best regards

hongbo

From: opnfv-tech-discuss-boun...@lists.opnfv.org 
[mailto:opnfv-tech-discuss-boun...@lists.opnfv.org] On Behalf Of Cooper, Trevor
Sent: July 11, 2017 6:53
To: Wenjing Chu; TECH-DISCUSS OPNFV
Subject: Re: [opnfv-tech-discuss] [dovetail] weekly meeting agenda

I have made a list of our recent discussion topics and the things we are 
actively working on, to make sure we are not neglecting or losing topics. If 
agreed, we can use this to prioritize the weekly meeting agenda and track 
things that need attention but are not currently progressing. Please review and 
add what I have missed. Do you think this would be useful to update weekly?

https://wiki.opnfv.org/display/dovetail/Open+Topics+for+Dovetail

/Trevor



From: 
opnfv-tech-discuss-boun...@lists.opnfv.org
 [mailto:opnfv-tech-discuss-boun...@lists.opnfv.org] On Behalf Of Wenjing Chu
Sent: Wednesday, June 28, 2017 7:59 PM
To: TECH-DISCUSS OPNFV <opnfv-tech-discuss@lists.opnfv.org>
Subject: [opnfv-tech-discuss] [dovetail] weekly meeting agenda

Hi Dovetailers

I propose we cover the most urgent topics this week:

-  Review the feedback from TSC members

-  Examine the remaining work items required for the first release and decide 
how to close them

-  Quick new status updates
For the first two topics, can everyone do their homework ahead of time so we 
can actually produce a good list?
Any other suggestions? It would also be good to share availability during the 
summer months, which happen to be crunch time for us.

Thanks!
Wenjing
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [release][danube] Danube 3 release on July 14

2017-07-10 Thread David McBride
Reminder...

On Thu, Jul 6, 2017 at 5:20 PM, David McBride 
wrote:

> Reminder... as approved by the TSC last week, Danube 3 will be released 1
> week from tomorrow on July 14.  The schedule is as follows:
>
>- July 12 - complete testing
>- July 13 - finish document updates / update JIRA
>- July 14 - tag repos and release
>- Week of July 17 - download page goes live
>
> Let me know if you have questions.
>
> David
>
> --
> *David McBride*
> Release Manager, OPNFV
> Mobile: +1.805.276.8018
> Email/Google Talk: dmcbr...@linuxfoundation.org
> Skype: davidjmcbride1
> IRC: dmcbride
>



-- 
*David McBride*
Release Manager, OPNFV
Mobile: +1.805.276.8018
Email/Google Talk: dmcbr...@linuxfoundation.org
Skype: davidjmcbride1
IRC: dmcbride
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] [dovetail] Use-cases for Dovetail Danube Release

2017-07-10 Thread Cooper, Trevor
My notes on the "use-case approach" with VoLTE and vCPE capture that the EUAG 
supports the approach and will get back with some suggestions. Are we still 
waiting on the EUAG to get back? If yes, what are our expectations for 
receiving guidance? Can we try to define the "use-case approach" better, e.g. 
meld it with our current approach so as to identify gaps/deficiencies in the 
test cases? What approach should we take for identifying and organizing VoLTE 
and vCPE capabilities and their relevant test cases?

/Trevor
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [dovetail] weekly meeting agenda

2017-07-10 Thread Cooper, Trevor
I have made a list of our recent discussion topics and the things we are 
actively working on, to make sure we are not neglecting or losing topics. If 
agreed, we can use this to prioritize the weekly meeting agenda and track 
things that need attention but are not currently progressing. Please review and 
add what I have missed. Do you think this would be useful to update weekly?

https://wiki.opnfv.org/display/dovetail/Open+Topics+for+Dovetail

/Trevor



From: opnfv-tech-discuss-boun...@lists.opnfv.org 
[mailto:opnfv-tech-discuss-boun...@lists.opnfv.org] On Behalf Of Wenjing Chu
Sent: Wednesday, June 28, 2017 7:59 PM
To: TECH-DISCUSS OPNFV 
Subject: [opnfv-tech-discuss] [dovetail] weekly meeting agenda

Hi Dovetailers

I propose we cover the most urgent topics this week:

-  Review the feedback from TSC members

-  Examine the remaining work items required for the first release and decide 
how to close them

-  Quick new status updates
For the first two topics, can everyone do their homework ahead of time so we 
can actually produce a good list?
Any other suggestions? It would also be good to share availability during the 
summer months, which happen to be crunch time for us.

Thanks!
Wenjing
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [test-wg] docker container versioning

2017-07-10 Thread Brattain, Ross B
The yardstick danube.3.0 tag was a mistaken, premature tag applied before the 
release was postponed. We can't delete git tags, so the new git tag will be 
danube.3.1, with docker tag danube.3.1.

The danube.3.0 docker image was a custom build for Dovetail. We have projects 
consuming other projects' docker images, so there are dependencies that way.

We are also going to start git-cloning storperf inside the yardstick docker 
container, so we will have to track two versions: the yardstick git tag and the 
storperf git tag.


From: test-wg-boun...@lists.opnfv.org [mailto:test-wg-boun...@lists.opnfv.org] 
On Behalf Of Fatih Degirmenci
Sent: Monday, July 10, 2017 2:04 PM
To: Alec Hothan (ahothan) ; Beierl, Mark 

Cc: test...@lists.opnfv.org; opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [test-wg] docker container versioning

Hi Alec,

Your understanding of the docker image tags seems correct to me (latest vs 
stable), but I'll let someone from the test projects answer that.

When it comes to artifact versioning in general; you are asking really good 
questions. Let me go back in time and summarize what plans we had (ie what we 
haven't been able to implement fully) with regards to it.

The questions you ask about tagging docker images are not limited to them. We 
have similar issues with the other artifacts we produce (rpms, isos, etc.), 
maybe not on the same level as the docker images, but we have them.

In order to achieve some level of traceability and reproducibility, we record 
the metadata for the artifacts (rpms, isos, etc.) we build so we can go back to 
source and find out exact version (commit) that was used for building the 
artifact in question. [1]
We also had plans to tag corresponding commits in our git repos but we haven't 
managed to fix that. [2] This includes docker images as well.

Apart from our own (OPNFV) repos, some of the artifacts we build include stuff 
from other sources, making it tricky to achieve full traceability and making 
the traceability even more important.
We had many discussions about how to capture this information in order to 
ensure we can go back to a specific commit in any upstream project we consume. 
(locking/pinning versions etc.)
But since we have different ways of doing things and different practices 
employed by different projects, this hasn't happened either. (I can talk about 
this for hours...)

By the way, I am not saying we totally failed, as some projects take care of 
this themselves, but as OPNFV we do not have a common practice, apart from the 
metadata files for ISOs and the docker tags that do not help at all.

Long story short, this can be achieved in different ways, as you exemplified: 
if a tag is applied to a repo, we trigger a build automatically and store and 
tag the produced artifact in the artifact repo; and/or, if we are building 
periodically, we apply the tag to the git repo once the artifact is built 
successfully.

No matter which way we go, we need to fix this so thank you for questioning 
things, hopefully resulting in improvements starting with test projects.

[1] http://artifacts.opnfv.org/apex/opnfv-2017-07-05.properties
[2] https://jira.opnfv.org/browse/RELENG-77

/Fatih

From: "Alec Hothan (ahothan)" <ahot...@cisco.com>
Date: Monday, 10 July 2017 at 21:45
To: Fatih Degirmenci <fatih.degirme...@ericsson.com>, "Beierl, Mark" 
<mark.bei...@dell.com>
Cc: "opnfv-tech-discuss@lists.opnfv.org" <opnfv-tech-discuss@lists.opnfv.org>, 
"test...@lists.opnfv.org" <test...@lists.opnfv.org>
Subject: [test-wg] docker container versioning


[ cc test-wg - was [opnfv-tech-discuss] Multiple docker containers from one 
project) ]


Hi Fatih

It is generally not easy to deal with container tags that do not include any 
information that links easily to a git repo commit (e.g. a “version” number 
increased by 1 for each build does not tell which git commit was used – might 
be one reason why this was removed)

For example, if we look at the published yardstick containers as of today:

“latest” is generally used to refer to the latest on master at the time of the 
build (so whoever does not care about the exact version and just wants the 
bleeding edge will pick “latest”) – apparently this container is not linked to 
any particular OPNFV release? Or is it implicitly linked to the current latest 
release (Euphrates)?

“stable” is supposed to be the latest stable version (presumably more stable 
than latest). In the current script it is not clear under what conditions a 
build is triggered with BRANCH not master and RELEASE_VERSION unset (does the 
project owner control that?). It is apparently unrelated to any particular 
OPNFV release as well, although I would have thought a “danube.3.0-stable” 
would make sense.

“danube.3.0”: related to Danube 3.0 but does not indicate what yardstick repo 
tag it was built from. The git repo has a git tag with the same name 
“danube.3.0”, how are those 2 tags correlated? For example, t

Re: [opnfv-tech-discuss] [test-wg] docker container versioning

2017-07-10 Thread Fatih Degirmenci
Hi Alec,

Your understanding of the docker image tags seems correct to me (latest vs 
stable), but I'll let someone from the test projects answer that.

When it comes to artifact versioning in general; you are asking really good 
questions. Let me go back in time and summarize what plans we had (ie what we 
haven't been able to implement fully) with regards to it.

The questions you ask about tagging docker images are not limited to them. We 
have similar issues with the other artifacts we produce (rpms, isos, etc.), 
maybe not on the same level as the docker images, but we have them.
In order to achieve some level of traceability and reproducibility, we record 
the metadata for the artifacts (rpms, isos, etc.) we build so we can go back to 
source and find out exact version (commit) that was used for building the 
artifact in question. [1]
We also had plans to tag corresponding commits in our git repos but we haven't 
managed to fix that. [2] This includes docker images as well.
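The metadata recording described above can be sketched as a small script. The 
property names and values below are illustrative assumptions modeled loosely on 
the apex example linked at [1], not actual releng output:

```shell
#!/bin/sh
# Sketch: write a properties file tying a built artifact back to its
# source commit, so the artifact can later be traced and reproduced.
# All values below are placeholders, not real build output.
PROJECT=yardstick
GIT_SHA1=0123abc
BUILD_DATE=2017-07-10

{
  echo "OPNFV_GIT_URL=https://gerrit.opnfv.org/gerrit/${PROJECT}"
  echo "OPNFV_GIT_SHA1=${GIT_SHA1}"
  echo "OPNFV_ARTIFACT_DATE=${BUILD_DATE}"
} > opnfv-build.properties

cat opnfv-build.properties
```

A build job would upload this file next to the artifact; anyone holding the 
artifact can then check out the recorded commit to rebuild it.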

Apart from our own (OPNFV) repos, some of the artifacts we build include stuff 
from other sources, making it tricky to achieve full traceability and making 
the traceability even more important.
We had many discussions about how to capture this information in order to 
ensure we can go back to a specific commit in any upstream project we consume. 
(locking/pinning versions etc.)
But since we have different ways of doing things and different practices 
employed by different projects, this hasn't happened either. (I can talk about 
this for hours...)

By the way, I am not saying we totally failed, as some projects take care of 
this themselves, but as OPNFV we do not have a common practice, apart from the 
metadata files for ISOs and the docker tags that do not help at all.

Long story short, this can be achieved in different ways, as you exemplified: 
if a tag is applied to a repo, we trigger a build automatically and store and 
tag the produced artifact in the artifact repo; and/or, if we are building 
periodically, we apply the tag to the git repo once the artifact is built 
successfully.

No matter which way we go, we need to fix this so thank you for questioning 
things, hopefully resulting in improvements starting with test projects.

[1] http://artifacts.opnfv.org/apex/opnfv-2017-07-05.properties
[2] https://jira.opnfv.org/browse/RELENG-77

/Fatih

From: "Alec Hothan (ahothan)" 
Date: Monday, 10 July 2017 at 21:45
To: Fatih Degirmenci , "Beierl, Mark" 

Cc: "opnfv-tech-discuss@lists.opnfv.org" , 
"test...@lists.opnfv.org" 
Subject: [test-wg] docker container versioning


[ cc test-wg - was [opnfv-tech-discuss] Multiple docker containers from one 
project) ]


Hi Fatih

It is generally not easy to deal with container tags that do not include any 
information that links easily to a git repo commit (e.g. a “version” number 
increased by 1 for each build does not tell which git commit was used – might 
be one reason why this was removed)

For example, if we look at the published yardstick containers as of today:

“latest” is generally used to refer to the latest on master at the time of the 
build (so whoever does not care about the exact version and just wants the 
bleeding edge will pick “latest”) – apparently this container is not linked to 
any particular OPNFV release? Or is it implicitly linked to the current latest 
release (Euphrates)?

“stable” is supposed to be the latest stable version (presumably more stable 
than latest). In the current script it is not clear under what conditions a 
build is triggered with BRANCH not master and RELEASE_VERSION unset (does the 
project owner control that?). It is apparently unrelated to any particular 
OPNFV release as well, although I would have thought a “danube.3.0-stable” 
would make sense.

“danube.3.0” is related to Danube 3.0 but does not indicate which yardstick 
repo tag it was built from. The git repo has a git tag with the same name, 
“danube.3.0”; how are those two tags correlated? For example, there is no 
matching git tag for container “colorado.0.15” and there is no matching 
container for git tag “colorado.3.0”.
It is also not clear what the yardstick project will do to publish a newer 
version of the yardstick container for Danube 3.0.

Project owners should be able to publish finer-grained versions of containers 
at a faster pace than the overall OPNFV release (e.g. as frequently as more 
than once a day), and these need to be tracked properly.

The best practice – as seen in the most popular container images – is to tag 
the container using a version string that reflects the container's source code 
version.
Translated to a project workflow, this is typical:

  *   The project repo uses git tags to version the source code (e.g. “3.2.5”) 
independently of the OPNFV release versioning (e.g. “Danube 3.0”). Such 
versioning should be left at the discretion of the project owners (e.g. Many 
OpenStack projects use the pbr library to take care of component version)
  *   Optionally the project repo can have 1 branch per OPNFV release if

[opnfv-tech-discuss] [test-wg] docker container versioning

2017-07-10 Thread Alec Hothan (ahothan)

[ cc test-wg - was [opnfv-tech-discuss] Multiple docker containers from one 
project) ]


Hi Fatih

It is generally not easy to deal with container tags that do not include any 
information that links easily to a git repo commit (e.g. a “version” number 
increased by 1 for each build does not tell which git commit was used – might 
be one reason why this was removed)

For example, if we look at the published yardstick containers as of today:

“latest” is generally used to refer to the latest on master at the time of the 
build (so whoever does not care about the exact version and just wants the 
bleeding edge will pick “latest”) – apparently this container is not linked to 
any particular OPNFV release? Or is it implicitly linked to the current latest 
release (Euphrates)?

“stable” is supposed to be the latest stable version (presumably more stable 
than latest). In the current script it is not clear under what conditions a 
build is triggered with BRANCH not master and RELEASE_VERSION unset (does the 
project owner control that?). It is apparently unrelated to any particular 
OPNFV release as well, although I would have thought a “danube.3.0-stable” 
would make sense.

“danube.3.0” is related to Danube 3.0 but does not indicate which yardstick 
repo tag it was built from. The git repo has a git tag with the same name, 
“danube.3.0”; how are those two tags correlated? For example, there is no 
matching git tag for container “colorado.0.15” and there is no matching 
container for git tag “colorado.3.0”.
It is also not clear what the yardstick project will do to publish a newer 
version of the yardstick container for Danube 3.0.

Project owners should be able to publish finer-grained versions of containers 
at a faster pace than the overall OPNFV release (e.g. as frequently as more 
than once a day), and these need to be tracked properly.

The best practice – as seen in the most popular container images – is to tag 
the container using a version string that reflects the container's source code 
version.
Translated to a project workflow, this is typical:

  *   The project repo uses git tags to version the source code (e.g. “3.2.5”) 
independently of the OPNFV release versioning (e.g. “Danube 3.0”). Such 
versioning should be left at the discretion of the project owners (e.g. Many 
OpenStack projects use the pbr library to take care of component version)
  *   Optionally the project repo can have 1 branch per OPNFV release if 
desired (e.g. “danube”, …) – noting that some projects will not require such 
branches and support every OPNFV release (or a good subset of it) from a single 
master branch (simpler)
  *   Simplest (and this is how the dockerhub automated build works) is to 
trigger a new build either on demand (by project owners) or automatically 
whenever a new git tag is published (by whoever has permission to do so on that 
project)
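The last bullet (trigger a build whenever a new git tag is published) can be 
sketched as a simple trigger filter a CI job might apply. The `should_build` 
helper and the version-shaped tag pattern are assumptions for illustration, not 
existing releng code:

```shell
#!/bin/sh
# Sketch: decide whether a pushed git ref should trigger an image build.
# Build only for version-shaped tags (e.g. "3.2.5"); skip branch names.
should_build() {
    case "$1" in
        [0-9]*.[0-9]*.[0-9]*) return 0 ;;  # version tag -> build
        *) return 1 ;;                     # branch or misc ref -> skip
    esac
}

for ref in 3.2.5 danube.3.0 master; do
    if should_build "$ref"; then
        echo "build opnfv/project:${ref}"
    else
        echo "skip ${ref}"
    fi
done
```

The image tag then mirrors the source tag one-to-one, which is exactly the 
traceability property argued for above.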

I am not familiar with the OPNFV release packaging process (for example how are 
containers tied to a particular release), could someone explain or point to the 
relevant documentation? If you look at the OpenStack model, each release (e.g. 
Newton, Ocata,…) is made of a large number of separate git repos that are all 
versioned independently (i.e. neutron or tempest don’t have version tags that 
contain the openstack release). And each release has a list of all projects 
versions that come with it. Example for Ocata: 
https://releases.openstack.org/ocata/

Is there a similar scheme in OPNFV?


Mark:

  *   Container versioning is orthogonal to the support for multiple containers 
per project (that a project has more than 1 container makes the versioning a 
bit more relevant).
  *   Not being able to rebuild a container from a tag is problematic and I 
agree it needs to be supported



Gabriel:

  *   Using the dockerhub automated build is fine as long as the versioning of 
the built containers is aligned with the OPNFV versioning scheme



Thanks

  Alec




From: Fatih Degirmenci 
Date: Monday, July 10, 2017 at 9:23 AM
To: "Beierl, Mark" , "Alec Hothan (ahothan)" 

Cc: "opnfv-tech-discuss@lists.opnfv.org" 
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

Hi,

About the tagging question: in the past we tagged all the images we built and 
stored on Docker Hub. The tags for the intermediate versions were set 
incrementally and applied automatically by the build job, and the release tag 
was applied manually for the release.
But then (some of the) test projects decided not to do that and got rid of it. 
(I don't exactly remember who, why, and so on.)

We obviously failed to flag this at that time. This should be discussed by Test 
WG and fixed.

/Fatih

From:  on behalf of "Beierl, Mark" 

Date: Monday, 10 July 2017 at 18:10
To: "Alec Hothan (ahothan)" 
Cc: "opnfv-tech-discuss@lists.opnfv.org" 
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

Sorry, Alec, for not responding.  I'm not a releng committer so I thought 
someone from there would have repli

[opnfv-tech-discuss] [mano-wg] ONAP OPNFV Integration efforts

2017-07-10 Thread Prakash Ramchandran
Bryan & ONAP/Opera Integration team,

Can we have some discussion in the MANO WG about what this means for an 
integration project in OPNFV like Opera: do we need to revive the project to 
integrate OPNFV with ONAP use cases such as VoLTE and vCPE, or with some 
simpler ones from the OPNFV Models project's past work in the OPNFV C & D 
releases?


OPNFV for ONAP developer

Deploy and integrate OPNFV scenario and ONAP instance for developer use

 Should the MultiCloud project in ONAP use this to interface with Opera for 
registration, etc.? Refer to
https://jira.onap.org/browse/MULTICLOUD-11?src=confmacro

OPNFV for ONAP X-CI

Deploy and integrate OPNFV scenario and ONAP instance for X-CI

 E.g. an ONAP scenario in OPNFV? OS-ONAP-Nofeature-HA

OPNFV+ONAP CI/CD

Deploy and integrate OPNFV scenario and ONAP instance for full CI/CD/testing

CI/CD testing through Functest? Are there any specifics we need to come up with 
in Opera or in the OPNFV MANO WG?


I tried evaluating these items and need support from the MANO WG teams to state 
what exactly you would like to facilitate, and to promote project goals in the 
E/F releases to achieve this plan put out by the Infra WG, who are planning for 
Infra to support MANO integration.

Thanks
Prakash



Prakash Ramchandran
R&D USA
FutureWei Technologies, Inc
Email: prakash.ramchand...@huawei.com
Work:  +1 (408) 330-5489
Mobile: +1 (408) 406-5810
2330 Central Expy, Santa Clara, CA 95050, USA






___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] VM image file server in OPNFV?

2017-07-10 Thread Alec Hothan (ahothan)
Exactly what I needed, thanks Fatih!


   Alec



From: Fatih Degirmenci 
Date: Monday, July 10, 2017 at 9:14 AM
To: "Alec Hothan (ahothan)" , 
"opnfv-tech-discuss@lists.opnfv.org" 
Subject: Re: [opnfv-tech-discuss] VM image file server in OPNFV?

Hi,

We store this type of artifact in the OPNFV Artifact Repository, which is 
hosted on Google Cloud Storage.

http://artifacts.opnfv.org/octopus/docs/octopus_docs/opnfv-artifact-repository.html

Please send a ticket to the OPNFV Helpdesk with the details so they can help you.

/Fatih

From:  on behalf of "Alec Hothan 
(ahothan)" 
Date: Monday, 10 July 2017 at 18:00
To: "opnfv-tech-discuss@lists.opnfv.org" 
Subject: [opnfv-tech-discuss] VM image file server in OPNFV?


Hello,

I was wondering: is there a file server in OPNFV that can be used by OPNFV 
project owners to store public VM images (several hundred MB to 1 GB per 
image), accessible both internally (from OPNFV labs) and from the Internet?
OpenStack has a similar server (the OpenStack App Catalog).

Thanks

  Alec


___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [opnfv-tsc] [announce] Reminder: TSC meeting time change starting this week

2017-07-10 Thread David McBride
Note that the release meeting is also moving an hour earlier, so it will
continue to follow immediately after the TSC call.

David

On Mon, Jul 10, 2017 at 6:53 AM, Raymond Paik 
wrote:

> All,
>
> A reminder that the weekly TSC meetings will now start an hour earlier at
> 6am Pacific Time starting tomorrow (July 11th).  For those of you in the
> Pacific Time zone, get your extra cup of coffee :-)
>
> Thanks,
>
> Ray
>
> ___
> opnfv-tsc mailing list
> opnfv-...@lists.opnfv.org
> https://lists.opnfv.org/mailman/listinfo/opnfv-tsc
>
>


-- 
*David McBride*
Release Manager, OPNFV
Mobile: +1.805.276.8018
Email/Google Talk: dmcbr...@linuxfoundation.org
Skype: davidjmcbride1
IRC: dmcbride
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] Invitation: [Infra][Pharos][Releng][Octopus][Security] Infra Working ... @ Weekly from 4pm to 5pm on Monday from Mon Jul 17 to Mon Aug 28 (BST) (opnfv-tech-discuss@lists.opnfv.org

2017-07-10 Thread lhinds
BEGIN:VCALENDAR
PRODID:-//Google Inc//Google Calendar 70.9054//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:REQUEST
BEGIN:VTIMEZONE
TZID:Europe/London
X-LIC-LOCATION:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/London:20170717T160000
DTEND;TZID=Europe/London:20170717T170000
RRULE:FREQ=WEEKLY;UNTIL=20170828T150000Z;BYDAY=MO
DTSTAMP:20170710T163746Z
ORGANIZER;CN=lhi...@redhat.com:mailto:lhi...@redhat.com
UID:tng86d0nks29m1ros316vb5...@google.com
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=
 TRUE;CN=opnfv-tech-discuss@lists.opnfv.org;X-NUM-GUESTS=0:mailto:opnfv-tech
 -disc...@lists.opnfv.org
ATTENDEE;CUTYPE=INDIVIDUAL;ROLE=REQ-PARTICIPANT;PARTSTAT=ACCEPTED;RSVP=TRUE
 ;CN=lhi...@redhat.com;X-NUM-GUESTS=0:mailto:lhi...@redhat.com
CREATED:20170710T163746Z
DESCRIPTION:Weekly infra working group meeting.\n\nUTC Time: 15:00 to 16:00
 \n\nG2M link: https://app.gotomeeting.com/?meetingId=819733085\nView your e
 vent at https://www.google.com/calendar/event?action=VIEW&eid=dG5nODZkMG5rc
 zI5bTFyb3MzMTZ2YjU5NjQgb3BuZnYtdGVjaC1kaXNjdXNzQGxpc3RzLm9wbmZ2Lm9yZw&tok=M
 TcjbGhpbmRzQHJlZGhhdC5jb21lMjQ1YTJiMjVhMTE4N2YwZTNjYjY0Y2FkMTVjYmUyODc5MWEy
 MDdh&ctz=Europe/London&hl=en.
LAST-MODIFIED:20170710T163746Z
LOCATION:https://app.gotomeeting.com/?meetingId=819733085
SEQUENCE:0
STATUS:CONFIRMED
SUMMARY:[Infra][Pharos][Releng][Octopus][Security] Infra Working Group
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR


___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] Multiple docker containers from one project

2017-07-10 Thread Fatih Degirmenci
Hi,

About the tagging question: in the past we tagged all the images we built and 
stored on Docker Hub. The tags for the intermediate versions were set 
incrementally and applied automatically by the build job, and the release tag 
was applied manually for the release.
But then (some of the) test projects decided not to do that and got rid of it. 
(I don't exactly remember who, why, and so on.)

We obviously failed to flag this at that time. This should be discussed by Test 
WG and fixed.

/Fatih

From:  on behalf of "Beierl, Mark" 

Date: Monday, 10 July 2017 at 18:10
To: "Alec Hothan (ahothan)" 
Cc: "opnfv-tech-discuss@lists.opnfv.org" 
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

Sorry, Alec, for not responding.  I'm not a releng committer so I thought 
someone from there would have replied.  You are correct that the tag is 
provided by the person running the job in Jenkins and passed through to 
opnfv-docker.sh.

As for the git clone issue, or pip install from git: there is no tag provided. 
This is a concern I have with the way the docker build (in releng) and the git 
clone are separated. We cannot actually rebuild from a label at this time.

Perhaps this is a bigger issue that needs to be discussed before we can 
properly address multiple docker builds.

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

On Jul 10, 2017, at 11:34, Alec Hothan (ahothan) <ahot...@cisco.com> wrote:


Projects that do not have PyPI packages (or the right version of the PyPI 
package published) might prefer to do a git clone instead, and either install 
directly or use pip install from the clone in the container.
Some Dockerfiles may prefer to install directly from the current (cloned) repo 
(and avoid a git clone), but this might accidentally (or purposely) include 
local patches in the built container.
There are many valid ways to skin the cat…

I did not get any feedback on a previous question I had on container 
versioning/tagging.
The container versioning currently used is based on the branch name followed by 
a release name (e.g. “danube.3.0”) with the addition of latest, stable and 
master.

From opnfv-docker.sh:

# Get tag version
echo "Current branch: $BRANCH"

BUILD_BRANCH=$BRANCH

if [[ "$BRANCH" == "master" ]]; then
    DOCKER_TAG="latest"
elif [[ -n "${RELEASE_VERSION-}" ]]; then
    DOCKER_TAG=${BRANCH##*/}.${RELEASE_VERSION}
    # e.g. danube.1.0, danube.2.0, danube.3.0
else
    DOCKER_TAG="stable"
fi

if [[ -n "${COMMIT_ID-}" && -n "${RELEASE_VERSION-}" ]]; then
    DOCKER_TAG=$RELEASE_VERSION
    BUILD_BRANCH=$COMMIT_ID
fi

If the branch is master, the tag is "latest"; otherwise, if RELEASE_VERSION is 
defined, the tag is the branch name suffixed with RELEASE_VERSION; else it is 
"stable".
Lastly, the above is overridden to just RELEASE_VERSION when both 
RELEASE_VERSION and COMMIT_ID are set (I wonder how that works with two 
branches sharing the same RELEASE_VERSION?).
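The branching above can be condensed into one function for side-by-side 
reading. The helper name is ours, not releng's; the logic mirrors the quoted 
script, with the COMMIT_ID override checked first since it wins:

```shell
#!/bin/sh
# Sketch: the opnfv-docker.sh tag selection, condensed into one function.
docker_tag() {
    BRANCH=$1; RELEASE_VERSION=$2; COMMIT_ID=$3
    if [ -n "$COMMIT_ID" ] && [ -n "$RELEASE_VERSION" ]; then
        echo "$RELEASE_VERSION"                  # explicit release rebuild
    elif [ "$BRANCH" = "master" ]; then
        echo "latest"
    elif [ -n "$RELEASE_VERSION" ]; then
        echo "${BRANCH##*/}.${RELEASE_VERSION}"  # e.g. danube.3.0
    else
        echo "stable"
    fi
}

docker_tag master "" ""                # -> latest
docker_tag stable/danube "3.0" ""      # -> danube.3.0
docker_tag stable/danube "3.0" abc123  # -> 3.0
docker_tag stable/danube "" ""         # -> stable
```

Laying the cases out this way makes the gaps listed below easier to see: 
nothing in the mapping records which commit produced "latest" or "stable".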

There are a few gaps that don't seem to be covered by this versioning; perhaps 
project owners who publish containers have a way to work around them?

  *   How are the containers for multiple versions of master at various commits 
published? They all seem to have the "master" tag
  *   For a given branch (say danube), the same question applies for a given 
release (say Danube 3.0, where one might have multiple versions of a container 
with various patches)
  *   Some projects may have containers that actually work with multiple OPNFV 
releases; will they be forced to publish the same container image with 
different tags (e.g. danube.3.0 and euphrates.1.0)?

In general, a docker container tag would have a version in it (e.g. 3.2.1), 
sometimes along with text describing some classification (indicating, for 
example, variations of the same source code version). In the case of OPNFV.

From the look of the script, I'm not quite sure when "stable" is used.
I'd be interested to know how the current docker project owners deal with the 
above, and whether there is any interest in addressing these gaps.

Thanks

  Alec



From: <opnfv-tech-discuss-boun...@lists.opnfv.org> on behalf of Cedric OLLIVIER 
<ollivier.ced...@gmail.com>
Date: Monday, July 10, 2017 at 12:20 AM
To: "Beierl, Mark" <mark.bei...@dell.com>
Cc: "opnfv-tech-discuss@lists.opnfv.org" <opnfv-tech-discuss@lists.opnfv.org>
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

I'm sorry, I don't understand the point of git clone.
Here we simply install Functest via the Python package.
Pip install works from a local copy because the package is not published on 
PyPI yet, and it removes the copy after installing the package.

Why should we clone again the repository?

Cédric

2017-07-10 3:10 GMT+02:00 Beierl, Mark 
mailto:mark.bei...@dell.com>>:
Why should we avoid copy?  Why do a git clone of the existing git clone?  
Almost every dockerfile example I have seen uses copy, not a second git 
checkout of the same code.

Re: [opnfv-tech-discuss] intel pod21 accessing issue

2017-07-10 Thread David McBride
Narinder,

Could we please get an update on the status of Joid for Danube 3?  Reminder
that the release is scheduled for this week (July 14). Thanks.

David

On Sun, Jul 2, 2017 at 4:00 PM, Narinder Gupta  wrote:

> DAvid,
> Yes, I worked on it today and now Intel pod21 is reconnected to CI, but it
> might be a day or two before all scenarios have run once.
>
> Thanks and Regards,
> Narinder
>
> On Jul 1, 2017, at 6:10 PM, David McBride 
> wrote:
>
> Narinder & Ross,
>
> What's the status? Is Joid connected to CI?  If not, what's the estimate
> for when it will be connected?  Thanks.
>
> David
>
> On Wed, Jun 28, 2017 at 9:59 AM Brattain, Ross B <
> ross.b.bratt...@intel.com> wrote:
>
>> Yes, we just rebooted.
>>
>>
>>
>> *From:* David McBride [mailto:dmcbr...@linuxfoundation.org]
>> *Sent:* Wednesday, June 28, 2017 9:55 AM
>> *To:* Beierl, Mark 
>> *Cc:* Brattain, Ross B ; Narinder Gupta <
>> narinder.gu...@canonical.com>; TECH-DISCUSS OPNFV <
>> opnfv-tech-discuss@lists.opnfv.org>; Cooper, Trevor <
>> trevor.coo...@intel.com>
>> *Subject:* Re: intel pod21 accessing issue
>>
>>
>>
>> +Trevor, tech-discuss
>>
>>
>>
>> Team,
>>
>>
>>
>> There seems to be a systemic issue with Intel PODs, beyond the issue
>> originally reported by Narinder.  See Mark's comments in the thread below.
>>
>>
>>
>> Ross / Trevor - could you please look into this ASAP?  Thanks.
>>
>>
>>
>> David
>>
>>
>>
>> On Wed, Jun 28, 2017 at 9:41 AM, Beierl, Mark 
>> wrote:
>>
>> Hello,
>>
>>
>>
>> It would appear that not just pod 21 is affected.  Pod 24 is in the same
>> situation where the VPN is failing to pass any traffic.  From Jenkins [1] I
>> can see that nearly every Intel pod is offline.  This means no daily jobs
>> can run.
>>
>>
>>
>> Ross, is there someone at Intel that should be contacted, or are you the
>> virtual Jack while he's off?
>>
>>
>>
>> David, a broadcast to the opnfv-test-discuss list is probably a good idea
>> so that everyone is aware of this outage.
>>
>>
>>
>> [1] https://build.opnfv.org/ci/computer/
>>
>>
>>
>> Regards,
>>
>> Mark
>>
>>
>>
>> *Mark Beierl*
>>
>> SW System Sr Principal Engineer
>>
>> *Dell **EMC* | Office of the CTO
>>
>> mobile +1 613 314 8106 <1-613-314-8106>
>>
>> *mark.bei...@dell.com *
>>
>>
>>
>> On Jun 28, 2017, at 08:40, Beierl, Mark  wrote:
>>
>>
>>
>> It appears to be the same problem for me.  Once connected to the VPN, I
>> cannot route any traffic:
>>
>>
>>
>> Wed Jun 28 08:37:13 2017 us=943655 /sbin/ip link set dev tun0 up mtu 1500
>>
>> Wed Jun 28 08:37:13 2017 us=946972 /sbin/ip addr add dev tun0 local
>> 10.10.210.205 peer 10.10.210.206
>>
>> Wed Jun 28 08:37:13 2017 us=953666 /sbin/ip route add 10.10.210.0/24 via
>> 10.10.210.206
>>
>>
>>
>> ping 10.10.210.206
>>
>> PING 10.10.210.206 (10.10.210.206) 56(84) bytes of data.
>>
>> --- 10.10.210.206 ping statistics ---
>>
>> 3 packets transmitted, 0 received, 100% packet loss, time 2030ms
>>
>>
>>
>> ping 10.10.210.1
>>
>> PING 10.10.210.1 (10.10.210.1) 56(84) bytes of data.
>>
>> --- 10.10.210.1 ping statistics ---
>>
>> 3 packets transmitted, 0 received, 100% packet loss, time 2029ms
>>
>>
>>
>> Looks like something is wrong with the VPN server.
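The local half of that diagnosis can be scripted. The sketch below is hedged: it only verifies the client side (device present, peer routable) and defaults to loopback so it runs anywhere; point it at tun0 and 10.10.210.206 to match the log above.

```shell
#!/usr/bin/env bash
# Sketch: client-side checks for a tunnel that is up but passes no traffic.
check_link() {
    local dev=$1 peer=$2
    ip addr show "$dev" >/dev/null 2>&1 || { echo "no such device: $dev"; return 1; }
    ip route get "$peer" >/dev/null 2>&1 || { echo "no route to $peer"; return 1; }
    echo "device $dev present and $peer is routable"
}

# Defaults to loopback; pass e.g. "tun0 10.10.210.206" to check the VPN.
check_link "${1:-lo}" "${2:-127.0.0.1}"
```

If the device and route both look fine but pings to the peer still time out, as in the log, the fault is most likely on the VPN server side rather than in the local routing table.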
>>
>>
>>
>> Regards,
>>
>> Mark
>>
>>
>>
>> *Mark Beierl*
>>
>> SW System Sr Principal Engineer
>>
>> *Dell **EMC* | Office of the CTO
>>
>> mobile +1 613 314 8106 <1-613-314-8106>
>>
>> *mark.bei...@dell.com *
>>
>>
>>
>> On Jun 27, 2017, at 22:41, David McBride 
>> wrote:
>>
>>
>>
>> +Mark
>>
>>
>>
>> On Wed, Jun 28, 2017 at 9:01 AM, Narinder Gupta <
>> narinder.gu...@canonical.com> wrote:
>>
>> Ross,
>>
>> It was working but suddenly it stopped responding now and this time seems
>> to be access from vpn as i can not access even IPMI now.
>>
>>
>> Thanks and Regards,
>>
>> Narinder Gupta (PMP)   narinder.gu...@canonical.com
>>
>> Canonical, Ltd.             narindergupta [irc.freenode.net]
>>
>> +1.281.736.5150 <(281)%20736-5150>
>> narindergupta2007[skype]
>>
>>
>>
>> Ubuntu- Linux for human beings | www.ubuntu.com | www.canonical.com
>>
>>
>>
>> On Tue, Jun 27, 2017 at 6:05 PM, Brattain, Ross B <
>> ross.b.bratt...@intel.com> wrote:
>>
>> Hi Narinder,
>>
>>
>>
>> Was Mark's help sufficient to get you access to jumphost?  Is everything
>> working?
>>
>>
>>
>> Thanks,
>>
>> Ross
>>
>>
>>
>> *From:* Narinder Gupta [mailto:narinder.gu...@canonical.com]
>> *Sent:* Tuesday, June 27, 2017 7:22 AM
>> *To:* Brattain, Ross B 
>> *Cc:* David McBride 
>> *Subject:* intel pod21 accessing issue
>>
>>
>>
>> Hi Ross I am having issue in accessing the jumphost at Intel pod21. I can
>> not even ping the IP of the jumphost  *10.10.210.20*
>>
>>
>>
>> I can ping the IPMI *10.10.210.10*
>>
>> Thanks and Regards,
>>
>> Narinder Gupta (PMP)   narinder.gu...@canonical.com
>>
>> Canonical, Ltd.             narindergupta [irc.freenode.net]
>>
>> +1.281.736.5150 <(281)%20736-5150>
>> narindergupta2007[skype]
>>
>>

Re: [opnfv-tech-discuss] VM image file server in OPNFV?

2017-07-10 Thread Fatih Degirmenci
Hi,

We store this type of artifacts on OPNFV Artifact Repository which is hosted on 
Google Cloud Storage.

http://artifacts.opnfv.org/octopus/docs/octopus_docs/opnfv-artifact-repository.html

Please send a ticket to OPNFV Helpdesk with the details so they can help you.

/Fatih

From:  on behalf of "Alec Hothan 
(ahothan)" 
Date: Monday, 10 July 2017 at 18:00
To: "opnfv-tech-discuss@lists.opnfv.org" 
Subject: [opnfv-tech-discuss] VM image file server in OPNFV?


Hello,

I was wondering: is there a file server in OPNFV that can be used by OPNFV 
project owners to store public VM images (several hundred MB to 1 GB per 
image) that can be accessed internally (from OPNFV labs) and from the Internet?
OpenStack has a similar server (OpenStack App Catalog).

Thanks

  Alec


___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] Multiple docker containers from one project

2017-07-10 Thread Beierl, Mark
Sorry, Alec, for not responding.  I'm not a releng committer so I thought 
someone from there would have replied.  You are correct that the tag is 
provided by the person running the job in Jenkins and passed through to 
opnfv-docker.sh.

As for the git clone issue, or pip install from git, there is no tag provided.  
This is a concern I have with the way the docker build (in releng) and the git 
clone are separated.  We cannot actually rebuild from a label at this time.

Perhaps this is a bigger issue that needs to be discussed before we can 
properly address multiple docker builds.

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

On Jul 10, 2017, at 11:34, Alec Hothan (ahothan) 
mailto:ahot...@cisco.com>> wrote:


Projects that do not have PyPI packages (or the right version of the PyPI 
package published) might prefer to do a git clone instead and either install 
it directly or use pip install from the clone in the container.
Some Dockerfile may prefer to directly install from the current (cloned) repo 
(and avoid a git clone) but this might accidentally (or purposely) include 
local patches into the built container.
There are many valid ways to skin the cat…

I did not get any feedback on a previous question I had on container 
versioning/tagging.
The container versioning currently used is based on the branch name followed by 
a release name (e.g. “danube.3.0”) with the addition of latest, stable and 
master.

From opnfv-docker.sh:

# Get tag version
echo "Current branch: $BRANCH"

BUILD_BRANCH=$BRANCH

if [[ "$BRANCH" == "master" ]]; then
DOCKER_TAG="latest"
elif [[ -n "${RELEASE_VERSION-}" ]]; then
DOCKER_TAG=${BRANCH##*/}.${RELEASE_VERSION}
# e.g. danube.1.0, danube.2.0, danube.3.0
else
DOCKER_TAG="stable"
fi

if [[ -n "${COMMIT_ID-}" && -n "${RELEASE_VERSION-}" ]]; then
   DOCKER_TAG=$RELEASE_VERSION
BUILD_BRANCH=$COMMIT_ID
fi

If the branch is master, the tag is "latest"; otherwise, if RELEASE_VERSION is 
defined, the tag is the branch name suffixed with RELEASE_VERSION; else it is 
"stable". Lastly, that choice is overridden to RELEASE_VERSION alone when both 
RELEASE_VERSION and COMMIT_ID are set (I wonder how that works with two 
branches sharing the same RELEASE_VERSION?).
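For clarity, the branch/version-to-tag mapping can be reproduced outside Jenkins with a small standalone function. This is a sketch that mirrors the script's logic with sample inputs; the real opnfv-docker.sh reads BRANCH, RELEASE_VERSION and COMMIT_ID from Jenkins job parameters.

```shell
#!/usr/bin/env bash
# Sketch of the tag-selection logic in opnfv-docker.sh, factored into a
# function so the mapping can be exercised with sample inputs.
resolve_tag() {
    local branch=$1 release_version=$2 commit_id=$3 tag
    if [[ "$branch" == "master" ]]; then
        tag="latest"
    elif [[ -n "$release_version" ]]; then
        tag="${branch##*/}.${release_version}"   # e.g. danube.3.0
    else
        tag="stable"
    fi
    # RELEASE_VERSION + COMMIT_ID together override everything above
    if [[ -n "$commit_id" && -n "$release_version" ]]; then
        tag="$release_version"
    fi
    echo "$tag"
}

resolve_tag master "" ""                # -> latest
resolve_tag stable/danube 3.0 ""        # -> danube.3.0
resolve_tag stable/danube "" ""         # -> stable
resolve_tag stable/danube 3.0 abc1234   # -> 3.0 (commit override)
```

Note that the override branch makes the last case collapse two different branches with the same RELEASE_VERSION onto one tag, which is exactly the ambiguity raised above.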

There are a few gaps that don’t seem to be covered by this versioning - perhaps 
project owners who publish containers have a way to work around them?

  *   How are the containers for multiple versions of master at various commits 
published? They all seem to have the “master” tag
  *   For a given branch (say Danube), same question for a given release (say 
for Danube 3.0, one might have multiple versions of a container with various 
patches)
  *   Some projects may have containers that actually work with multiple OPNFV 
releases, will they be forced to publish the same container image with 
different tags (e.g. danube.3.0 and euphrates.1.0)?


In general, a Docker container tag carries a version (e.g. 3.2.1), sometimes 
together with text describing some classification (indicating, for example, 
variations of the same source code version). That does not appear to be the 
case in OPNFV.

I’m not quite sure when "stable" is actually used, from the look of the script.
I’d be interested to know how current project docker owners deal with the above 
and if there is any interest to address them.

Thanks

  Alec



From: 
mailto:opnfv-tech-discuss-boun...@lists.opnfv.org>>
 on behalf of Cedric OLLIVIER 
mailto:ollivier.ced...@gmail.com>>
Date: Monday, July 10, 2017 at 12:20 AM
To: "Beierl, Mark" mailto:mark.bei...@dell.com>>
Cc: 
"opnfv-tech-discuss@lists.opnfv.org" 
mailto:opnfv-tech-discuss@lists.opnfv.org>>
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

I'm sorry, I don't understand the point of git clone.
Here we simply install Functest via the Python package.
Pip install works from a local copy because the package is not published on 
PyPI yet, and it removes the copy after installing the package.

Why should we clone again the repository?

Cédric

2017-07-10 3:10 GMT+02:00 Beierl, Mark 
mailto:mark.bei...@dell.com>>:
Why should we avoid copy?  Why do a git clone of the existing git clone?  
Almost every dockerfile example I have seen uses copy, not a second git 
checkout of the same code.
Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

On Jul 9, 2017, at 21:00, Cedric OLLIVIER 
mailto:ollivier.ced...@gmail.com>> wrote:
No we cannot (parent directory) and we should mostly avoid copying files 
(except for configurations).

For instance, you could have a look to 
https://gerrit.opnfv.org/gerrit/#/c/36963/.
All Dockerfiles simply download Alpine packages, python packages (Functest + 
its dependencies) and upper constraints files.
testcases.yaml is copied from host as it differs between our containers (smoke, 
healthcheck...).

Re: [opnfv-tech-discuss] yardstick ping test -failed

2017-07-10 Thread Ross Brattain
Hi,


This usually indicates that yardstick is not able to contact the VM using the 
floating IP provided by OpenStack.  This suggests that OpenStack is not 
configured correctly.


Can you ping the host 172.18.0.228? 


Otherwise please provide info on which installer you are testing with.


You can contact us on IRC,  freenode #opnfv-yardstick channel.


Thanks,

Ross
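As a first triage step, the reachability check that yardstick keeps retrying can be reproduced by hand. This is a hedged sketch of the same idea (retry a TCP connect until it succeeds or the attempts run out); the host is the floating IP from the log below and is only an example.

```shell
#!/usr/bin/env bash
# Probe a TCP port the way yardstick's ssh wait loop does: retry until
# the port accepts a connection or the attempts run out.
wait_for_port() {
    local host=$1 port=$2 retries=${3:-5} delay=${4:-2}
    local i
    for ((i = 1; i <= retries; i++)); do
        if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
            return 0    # port is reachable
        fi
        sleep "$delay"
    done
    return 1            # still unreachable after all retries
}

# Example: probe SSH on the floating IP from the log (one quick attempt)
if wait_for_port 172.18.0.228 22 1 0; then
    echo "SSH port reachable"
else
    echo "SSH port unreachable - check floating IP, router and security groups"
fi
```

If this fails from the yardstick host, the problem is in the OpenStack networking (floating IP association, router, or security group rules), not in yardstick itself.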


On 07/10/2017 03:22 AM, Kalaivani R wrote:

​​Hi,


can any one tel me how to fix the ssh error happening during ping test in 
yardstick.

 

log:

2017-07-10 07:01:49,034 yardstick.benchmark.runners.duration duration.py:43 
INFO worker START, duration 60 sec, class 
2017-07-10 07:01:49,034 yardstick.ssh ssh.py:115 DEBUG user:root 
host:172.18.0.228
2017-07-10 07:01:50,036 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:01:52,039 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:01:54,041 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:01:56,044 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:01:58,046 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:00,048 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:02,051 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:04,052 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:06,055 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:08,057 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:10,060 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:12,063 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:14,065 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:16,068 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:18,070 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:20,072 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:22,075 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:24,077 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:26,080 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:28,082 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:30,084 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
^CProcess Process-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
Process Dumper:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
Traceback (most recent call last):
  File "/usr/local/bin/yardstick", line 11, in 
load_entry_point('yardstick', 'console_scripts', 'yardstick')()
  File "/home/opnfv/repos/yardstick/yardstick/main.py", line 49, in main
self.run()
self.run()
YardstickCLI().main(sys.argv[1:])
  File "/usr/lib/

[opnfv-tech-discuss] VM image file server in OPNFV?

2017-07-10 Thread Alec Hothan (ahothan)

Hello,

I was wondering if there a file server in OPNFV that can be used by OPNFV 
project owners to store public VM Images (several hundred MBs to 1 GB per 
image) that can be accessed internally (from OPNFV labs) and from the Internet?
OpenStack has a similar server (OpenStack App Catalog).

Thanks

  Alec


___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] OPNFV Danube SFC physical scenario deployment doubts

2017-07-10 Thread Manuel Buil
Hi Andres,

Please find my answers below.

Try to run the environment which is supported and after that you can
add stuff like ceilometer. Having a base that works should be your
target right now, later you can add stuff to that base.

Regards,
Manuel

On Mon, 2017-07-10 at 13:51 +0200,
andres.sanchez.ra...@estudiant.upc.edu wrote:
> Hello Manuel,
> 
> Thank you for your quick response! you are right that scenario is
> not  
> supported sorry for that! I am trying to set up a NFV environment  
> (using SFC) with the ability to monitor cloud resources using  
> Openstack APIs, and according to what i have been able to read  
> ceilometer provides that information!
> 
> Ok, since no SFC chains are supposed to be declared, I am guessing I am
> close to deploying the scenario correctly, but I am having these
> issues (let me know if you know how to address them):
> 
> - Once I create a VM I am not being able to access them (ping fails).

If you list the VM with 'nova list', does it say it is ACTIVE? Does it
list an IP? Check that it receives an IP lease from the DHCP server with
'nova console-log'.


> - When I restart a compute node it will take an IP address from my  
> public network and after that I cannot ssh into the node.

Strange... but why do you reboot a compute? Are you trying to test some
HA behaviour?

> 
> I have the following doubts regarding the deployment options being
> set  
> up in fuel:
> 
> - When creating the environment in fuel what networking option
> should  
> i choose: Neutron with ML2 plugin & Neutron with tunneling  
> segmentation or OpenDaylight with tunneling segmentation?

If you want to try SFC, you should use OpenDaylight

> - Right now i am choosing the following role distribution:  
> controller-ceph, controller-tacker, controller-ODL, compute-ceph,  
> compute-ceph. Is this appropriate?

Start with a simpler env. with one controller and one compute. The
controller should act as controller, OpenDaylight controller and Tacker
VNF Manager. Ceph should work, but better simplify things and use
'Cinder LVM'.

> - I am using the option that states: Assign public network to all  
> nodes (I read in the guide that this should be checked). But i
> think  
> this is causing the communication issue to the nodes.

I have that option checked too but I never reboot computes ;)

> - I install Open vSwitch with the checkbox that says install NSH.
> Is  
> this correct?

Yes

> - When marking the ODL Plugin I only check the box that says: SFC  
> features with NetVirt classifier. What about use ODL to manage L3  
> traffic? should I mark it.

It is not needed for sfc. That one allows you to connect several
openstack deployments among them using L3VPN.

> - Do I need to install any other features in ODL (i.e l2switch) in  
> order to communicate with my VMs or do I need to declare some SFC  
> chains?

Nothing else. When the deployment succeeds, you should be able to run
tests. To run them, you should follow this guide (note that we just
realized that Danube 2.0 is throwing errors... try Danube 1.0 instead):

https://wiki.opnfv.org/display/sfc/OPNFV-SFC+Functest+test+cases

> 
> Thank you in advance for your help, i tried to write as clear as i  
> could but english is not my native tongue

You write very clearly!

> 
> Quoting "Manuel Buil" :
> 
> > 
> > Hi Andres,
> > 
> > Unfortunately, that scenario is not supported in Danube. These are
> > the
> > ones supported:
> > 
> > https://wiki.opnfv.org/display/SWREL/Danube+Scenario+Status
> > 
> > What statistics do you need from SFC? Maybe you can collect them in
> > another way.
> > 
> > When the sfc scenarios are successfully deployed, no VMs or SFC
> > chains
> > exist.
> > 
> > Regards,
> > Manuel
> > 
> > 
> > On Fri, 2017-07-07 at 10:58 +0200,
> > andres.sanchez.ra...@estudiant.upc.edu wrote:
> > > 
> > > Hello,
> > > 
> > > My name is Andrés Sánchez, i am working on deploying the OPNFV
> > > HA  
> > > scenario on a laboratory on my university. I have been trying
> > > to  
> > > deploy the scenario manually but have been encountering some
> > > problems.  
> > > My actual environment consists of:
> > > 
> > > 5 Nodes: 2 CPUs, 8GB RAM.
> > > 1 Fuel master: 2 CPUs, 8GB RAM.
> > > 1 workstation: 2CPUs, 4GB RAM.
> > > 
> > > I am trying to deploy the  
> > > "ha_odl-l2_sfc_heat_ceilometer_scenario.yaml" on this lab, i
> > > have  
> > > previously deployed other Openstack scenarios on this lab so
> > > the  
> > > networking configuration is properly configured: on each
> > > Openstack  
> > > node port i assigned untagged traffic to PXE-admin, vlan 200 to
> > > public  
> > > network, and defaults to storage,management and private
> > > (102,101,103).  
> > > PXE-admin network is 192.168.100.0/24 and public network is  
> > > 172.16.10.0/24 with gateways being .1 respectively.
> > > 
> > > I already set up the fuel master with the Danube ISO  
> > > "opnfv-danube.1.0.iso". I previously correctly deployed an
> > > openstack  
> > > cluster manually i

[opnfv-tech-discuss] [release][euphrates] Milestone 3.2 - installers deploy scenario with integrated SDN and pass health check

2017-07-10 Thread David McBride
Installer teams,

Milestone 3.2 is scheduled for tomorrow, July 11.  Please provide the
following:

   - Identify a scenario that includes an integrated SDN controller that
   demonstrates that your installer meets the requirements of MS3.2.
   - Provide a pointer to test results that demonstrate that health check
   passes for the identified scenario.

David

-- 
*David McBride*
Release Manager, OPNFV
Mobile: +1.805.276.8018
Email/Google Talk: dmcbr...@linuxfoundation.org
Skype: davidjmcbride1
IRC: dmcbride
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] [SFC] Nominating Manuel Buil as OPNFV SFC PTL

2017-07-10 Thread Brady Johnson
Hello,

Since I stepped down as PTL [0] and Manuel Buil nominated himself as PTL
[1] I would like to kick-off the vote for his PTL nomination.

Committers, please vote +2 or -2 on this Gerrit [2] for Manuel's nomination.

Good luck Manuel, I'm convinced you'll do great!! :)

Regards,

Brady
bjohn...@inocybe.ca


[0]
https://lists.opnfv.org/pipermail/opnfv-tech-discuss/2017-July/016913.html
[1]
https://lists.opnfv.org/pipermail/opnfv-tech-discuss/2017-July/016926.html
[2] https://gerrit.opnfv.org/gerrit/37125
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [SFC] Stepping down as OPNFV SFC PTL

2017-07-10 Thread Brady Johnson
Thanks for the kind words guys!!

You'll definitely still see me around :)

Brady


Regards,

Brady
bjohn...@inocybe.ca


On Mon, Jul 10, 2017 at 6:23 AM, Raymond Paik 
wrote:

> Echoing what Juan said...
>
> Brady: Thanks for all your work as a PTL over the past 2+ years, but
> definitely look forward seeing you around :-)
>
> On Wed, Jul 5, 2017 at 12:45 AM, Juan Vidal ALLENDE <
> juan.vidal.alle...@ericsson.com> wrote:
>
>> Hi Brady,
>>
>> It's sad to hear that, OPNFV-SFC has been a great to project to work in,
>> and your leadership has been very encouraging. I hope that you still have
>> some time to contribute to the project, and I wish you all the best in your
>> new position.
>>
>> Regards,
>> Juan
>>
>>
>> On mar, 2017-07-04 at 12:28 +0200, Brady Johnson wrote:
>>
>>
>> Hello all,
>>
>> As you may have heard, I recently left Ericsson, and now work for Inocybe
>> Technologies . As such, I need to focus more on
>> OpenDaylight, so I will step down now as OPNFV SFC PTL. I will absolutely
>> still be involved in OPNFV SFC as a committer, I just need to let somebody
>> else take the project over.
>>
>> It's been a great journey as PTL in OPNFV that started back in May of 2015
>> when I created the project. I feel very fortunate to have been able to work
>> in such an amazing community and with such talented people. The journey has
>> only just begun, and I'm convinced the future is very bright.
>>
>> Thank you everybody for all of your support, contributions, and patience
>> :)
>>
>> I'll send out a separate email now asking for OPNFV SFC PTL nominations
>> from the current committers.
>>
>> Regards,
>>
>> Brady
>> bjohn...@inocybe.com
>>
>> ___
>> opnfv-tech-discuss mailing 
>> listopnfv-tech-discuss@lists.opnfv.orghttps://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
>>
>>
>> ___
>> opnfv-tech-discuss mailing list
>> opnfv-tech-discuss@lists.opnfv.org
>> https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
>>
>>
>
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] Multiple docker containers from one project

2017-07-10 Thread Alec Hothan (ahothan)

Projects that do not have PyPI packages (or the right version of the PyPI 
package published) might prefer to do a git clone instead and either install 
it directly or use pip install from the clone in the container.
Some Dockerfile may prefer to directly install from the current (cloned) repo 
(and avoid a git clone) but this might accidentally (or purposely) include 
local patches into the built container.
There are many valid ways to skin the cat…

I did not get any feedback on a previous question I had on container 
versioning/tagging.
The container versioning currently used is based on the branch name followed by 
a release name (e.g. “danube.3.0”) with the addition of latest, stable and 
master.

From opnfv-docker.sh:

# Get tag version
echo "Current branch: $BRANCH"

BUILD_BRANCH=$BRANCH

if [[ "$BRANCH" == "master" ]]; then
DOCKER_TAG="latest"
elif [[ -n "${RELEASE_VERSION-}" ]]; then
DOCKER_TAG=${BRANCH##*/}.${RELEASE_VERSION}
# e.g. danube.1.0, danube.2.0, danube.3.0
else
DOCKER_TAG="stable"
fi

if [[ -n "${COMMIT_ID-}" && -n "${RELEASE_VERSION-}" ]]; then
   DOCKER_TAG=$RELEASE_VERSION
BUILD_BRANCH=$COMMIT_ID
fi

If the branch is master, the tag is "latest"; otherwise, if RELEASE_VERSION is 
defined, the tag is the branch name suffixed with RELEASE_VERSION; else it is 
"stable". Lastly, that choice is overridden to RELEASE_VERSION alone when both 
RELEASE_VERSION and COMMIT_ID are set (I wonder how that works with two 
branches sharing the same RELEASE_VERSION?).

There are a few gaps that don’t seem to be covered by this versioning - perhaps 
project owners who publish containers have a way to work around them?

  *   How are the containers for multiple versions of master at various commits 
published? They all seem to have the “master” tag
  *   For a given branch (say Danube), same question for a given release (say 
for Danube 3.0, one might have multiple versions of a container with various 
patches)
  *   Some projects may have containers that actually work with multiple OPNFV 
releases, will they be forced to publish the same container image with 
different tags (e.g. danube.3.0 and euphrates.1.0)?

In general, a Docker container tag carries a version (e.g. 3.2.1), sometimes 
together with text describing some classification (indicating, for example, 
variations of the same source code version). That does not appear to be the 
case in OPNFV.

From the look of the script, I’m not quite sure when "stable" is actually used.
I’d be interested to know how current project docker owners deal with the above 
and if there is any interest to address them.

Thanks

  Alec



From:  on behalf of Cedric OLLIVIER 

Date: Monday, July 10, 2017 at 12:20 AM
To: "Beierl, Mark" 
Cc: "opnfv-tech-discuss@lists.opnfv.org" 
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

I'm sorry, I don't understand the point of git clone.
Here we simply install Functest via the Python package.
Pip install works from a local copy because the package is not published on 
PyPI yet, and it removes the copy after installing the package.

Why should we clone again the repository?

Cédric

2017-07-10 3:10 GMT+02:00 Beierl, Mark 
mailto:mark.bei...@dell.com>>:
Why should we avoid copy?  Why do a git clone of the existing git clone?  
Almost every dockerfile example I have seen uses copy, not a second git 
checkout of the same code.
Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

On Jul 9, 2017, at 21:00, Cedric OLLIVIER 
mailto:ollivier.ced...@gmail.com>> wrote:
No we cannot (parent directory) and we should mostly avoid copying files 
(except for configurations).

For instance, you could have a look to 
https://gerrit.opnfv.org/gerrit/#/c/36963/.
All Dockerfiles simply download Alpine packages, python packages (Functest + 
its dependencies) and upper constraints files.
testcases.yaml is copied from host as it differs between our containers (smoke, 
healthcheck...).
Cédric


2017-07-10 1:25 GMT+02:00 Beierl, Mark 
mailto:mark.bei...@dell.com>>:
My only concern is for dockerfiles that do a "COPY . /home" in them. That means 
all the code would be located under the docker directory.

I suppose multiple ../ paths can be used instead.

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

On Jul 9, 2017, at 19:03, Julien 
mailto:julien...@gmail.com>> wrote:
Hi Cédric,

Patch in  https://gerrit.opnfv.org/gerrit/#/c/36963/ is exact what I mean. 
Let's collect the opinions from the releng team.

Julien



Cedric OLLIVIER 
mailto:ollivier.ced...@gmail.com>>于2017年7月10日周一 
上午4:15写道:
Hello,

Please see https://gerrit.opnfv.org/gerrit/#/c/36963/ which introduces several 
containers for Functest too.
I think the tree conforms with the previous requirements.

Automating builds on Docker Hub is a good solution too.
Cédric

2017-07-09 12:10 GMT+02:00 Julien 
mailto:julien...@gmail.com>>:
Hi Jose,

Acc

[opnfv-tech-discuss] [announce] Reminder: TSC meeting time change starting this week

2017-07-10 Thread Raymond Paik
All,

A reminder that the weekly TSC meetings will now start an hour earlier at
6am Pacific Time starting tomorrow (July 11th).  For those of you in the
Pacific Time zone, get your extra cup of coffee :-)

Thanks,

Ray
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] OPNFV Danube SFC physical scenario deployment doubts

2017-07-10 Thread andres . sanchez . ramos

Hello Manuel,

Thank you for your quick response! You are right, that scenario is not  
supported - sorry for that! I am trying to set up an NFV environment  
(using SFC) with the ability to monitor cloud resources using the  
OpenStack APIs, and according to what I have been able to read,  
Ceilometer provides that information!


Ok, since no SFC chains are supposed to be declared, I am guessing I am  
close to deploying the scenario correctly, but I am having these  
issues (let me know if you know how to address them):


- Once I create a VM, I cannot access it (ping fails).
- When I restart a compute node, it takes an IP address from my 
public network, and after that I cannot SSH into the node.


I have the following doubts about the deployment options set up in Fuel:


- When creating the environment in Fuel, which networking option should 
I choose: "Neutron with ML2 plugin & Neutron with tunneling 
segmentation" or "OpenDaylight with tunneling segmentation"?
- Right now I am choosing the following role distribution: 
controller-ceph, controller-tacker, controller-ODL, compute-ceph, 
compute-ceph. Is this appropriate?
- I am using the option "Assign public network to all nodes" (I read 
in the guide that this should be checked), but I think this is causing 
the communication issue with the nodes.
- I install Open vSwitch with the checkbox "install NSH". Is this 
correct?
- When enabling the ODL plugin, I only check the box "SFC features 
with NetVirt classifier". What about "use ODL to manage L3 traffic"; 
should I check it?
- Do I need to install any other ODL features (e.g. l2switch) in 
order to communicate with my VMs, or do I need to declare some SFC 
chains?


Thank you in advance for your help. I tried to write as clearly as I 
could, but English is not my native tongue.


Quoting "Manuel Buil" :


Hi Andres,

Unfortunately, that scenario is not supported in Danube. These are the
ones supported:

https://wiki.opnfv.org/display/SWREL/Danube+Scenario+Status

What statistics do you need from SFC? Maybe you can collect them in
another way.

When the sfc scenarios are successfully deployed, no VMs or SFC chains
exist.

Regards,
Manuel


On Fri, 2017-07-07 at 10:58 +0200,
andres.sanchez.ra...@estudiant.upc.edu wrote:

Hello,

My name is Andrés Sánchez; I am working on deploying the OPNFV HA 
scenario in a laboratory at my university. I have been trying to 
deploy the scenario manually but have encountered some problems. 
My current environment consists of:

5 nodes: 2 CPUs, 8 GB RAM.
1 Fuel master: 2 CPUs, 8 GB RAM.
1 workstation: 2 CPUs, 4 GB RAM.

I am trying to deploy 
"ha_odl-l2_sfc_heat_ceilometer_scenario.yaml" in this lab. I have 
previously deployed other OpenStack scenarios here, so the 
networking configuration is properly set up: on each OpenStack 
node port I assigned untagged traffic to PXE-admin, VLAN 200 to the 
public network, and the defaults to storage, management, and private 
(102, 101, 103). The PXE-admin network is 192.168.100.0/24 and the 
public network is 172.16.10.0/24, with the gateways being .1 
respectively.

I already set up the Fuel master with the Danube ISO 
"opnfv-danube.1.0.iso". I previously deployed an OpenStack cluster, 
manually installing the ODL and Tacker plugins, but then I found that 
Tacker did not appear to be correctly configured, and neither did ODL 
(I created a couple of VMs and could not connect to them). When 
creating the environment, which option is supposed to be selected: 
"Neutron with ML2 plugin & Neutron with tunneling segmentation" or 
"OpenDaylight with tunneling segmentation"? Another thing I found is 
that Fuel did not provide the option of adding Telemetry-MongoDB in 
node assignment (I need Telemetry because I need to check the meters 
it provides). I did check the box for "Assign public network to all 
nodes".

I am now trying to deploy from my workstation, but I have doubts 
about how the DHA/DEA files are supposed to be written. I am 
interested in:

A) Using the actual Fuel master as Fuel:

sudo bash ./deploy.sh -b file:///home/lab232/fuel/deploy/config -f -l 
devel-pipeline -p lab232 -s 
ha_odl-l2_sfc_heat_ceilometer_scenario.yaml -i 
file:///home/lab232/opnfv-danube.1.0.iso -e

B) Creating a VM inside my workstation to host the Fuel master and 
using all the other nodes as OpenStack nodes:

sudo bash ./deploy.sh -b file:///home/lab232/fuel/deploy/config -F -l 
devel-pipeline -p lab232 -s 
ha_odl-l2_sfc_heat_ceilometer_scenario.yaml -i 
file:///home/lab232/opnfv-danube.1.0.iso -e

For both scenarios I have encountered several issues and errors; I am 
attaching the configuration files I use. Could you please provide 
some guidance on the correct way to write these files?

When I try to deploy scenario A, I get IPMI adapter errors:
Exception: Address lookup for None failed
Could not open socket!
Error: Unable to establish IPMI v2 / RMCP+ session

When i try to deploy
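[Editor's note: the "Address lookup for None failed" message usually means the deploy tool found no IPMI address for a node, i.e. the per-node IPMI fields in the DHA file are empty. A minimal sketch of the relevant section, with field names as used in the OPNFV Fuel deploy configs; all addresses and credentials below are placeholders:]

```
# dha file sketch -- node entries for the ipmi adapter (example values)
adapter: ipmi
nodes:
  - id: 1
    pxeMac: 52:54:00:aa:bb:01   # MAC of the node's PXE/admin NIC
    ipmiIp: 192.168.0.101       # BMC address; leaving this unset yields "None"
    ipmiUser: admin
    ipmiPass: admin
  - id: 2
    pxeMac: 52:54:00:aa:bb:02
    ipmiIp: 192.168.0.102
    ipmiUser: admin
    ipmiPass: admin
```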

[opnfv-tech-discuss] yardstick ping test -failed

2017-07-10 Thread Kalaivani R
Hi,


Can anyone tell me how to fix the SSH error that occurs during the 
ping test in Yardstick?



log:

2017-07-10 07:01:49,034 yardstick.benchmark.runners.duration duration.py:43 
INFO worker START, duration 60 sec, class 
2017-07-10 07:01:49,034 yardstick.ssh ssh.py:115 DEBUG user:root 
host:172.18.0.228
2017-07-10 07:01:50,036 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:01:52,039 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:01:54,041 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:01:56,044 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:01:58,046 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:00,048 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:02,051 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:04,052 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:06,055 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:08,057 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:10,060 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:12,063 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:14,065 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:16,068 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:18,070 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:20,072 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:22,075 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:24,077 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:26,080 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:28,082 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
2017-07-10 07:02:30,084 yardstick.ssh ssh.py:320 DEBUG Ssh is still 
unavailable: SSHError("Exception  was raised during 
connect. Exception value is: timeout('timed out',)",)
^CProcess Process-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
Process Dumper:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
Traceback (most recent call last):
  File "/usr/local/bin/yardstick", line 11, in 
load_entry_point('yardstick', 'console_scripts', 'yardstick')()
  File "/home/opnfv/repos/yardstick/yardstick/main.py", line 49, in main
self.run()
self.run()
YardstickCLI().main(sys.argv[1:])
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
  File "/home/opnfv/repos/yardstick/yardstick/cmd/cli.py", line 162, in main
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
  File "/home/opnfv/repos/yardstick/yardstick/benchmark/runners/duration.py", 
line 47, in _worker_process
self._target(*self._args, **self._kwargs)
self._dispa
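[Editor's note: the repeated timeouts in the log above mean the TCP connection to the target never completes, so this points at reachability (security groups, floating IP, routing) rather than SSH credentials. One quick way to confirm, before re-running Yardstick, is a plain socket probe from the machine running Yardstick. The host and port below are taken from the log; the helper itself is only an illustration:]

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except (socket.timeout, OSError):
        return False

if __name__ == "__main__":
    # Values from the log above: Yardstick tries root@172.18.0.228, port 22.
    host, port = "172.18.0.228", 22
    if tcp_port_open(host, port):
        print("port 22 reachable - check credentials/keys instead")
    else:
        print("port 22 unreachable - check security groups, floating IP, routing")
```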

[opnfv-tech-discuss] Cancelled: OPNFV Doctor weekly meeting

2017-07-10 Thread Carlos Goncalves
BEGIN:VCALENDAR
PRODID://Yahoo//Calendar//EN
VERSION:2.0
METHOD:CANCEL
BEGIN:VEVENT
SUMMARY:OPNFV Doctor weekly meeting
DESCRIPTION:Doctor is a fault management and maintenance project to develop
  and realize the consequent implementation for the OPNFV reference platfor
 m.\n\n?   Doctor page: https://wiki.opnfv.org/doctor\n?   When: Ev
 ery Tuesday 6:00-7:00 PT (Your local time)\n?   Agenda: https://etherpad.o
 pnfv.org/p/doctor_meetings\n?   IRC channel: #opnfv-doctor @ Freenode 
 (Web Chat)\n?   To
  join: https://global.gotomeeting.com/join/819733085\n?   You can also
  dial in using your phone.\no   Access Code: 819-733-085\n\no   Un
 ited States +1 (312) 757-3126\no   Australia +61 2 8355 1040\no   
 Austria +43 7 2088 0034\no   Belgium +32 (0) 28 93 7018\no   Canad
 a +1 (647) 497-9350\no   Denmark +45 69 91 88 64\no   Finland +358
  (0) 923 17 0568\no   France +33 (0) 170 950 592\no   Germany +49 
 (0) 692 5736 7211\no   Ireland +353 (0) 15 360 728\no   Italy +39 
 0 247 92 13 01\no   Netherlands +31 (0) 208 080 219\no   New Zeala
 nd +64 9 280 6302\no   Norway +47 75 80 32 07\no   Spain +34 911 8
 2 9906\no   Sweden +46 (0) 853 527 836\no   Switzerland +41 (0) 43
 5 0167 13\no   United Kingdom +44 (0) 330 221 0086\n\n\nThanks\,\nThe 
 OPNFV Doctor team\n\n\n
CLASS:PUBLIC
DTSTART;TZID=Europe/London:20150120T14
DTEND;TZID=Europe/London:20150120T15
LOCATION:GoToMeeting
PRIORITY:0
SEQUENCE:13
STATUS:CONFIRMED
UID:04008200E00074C5B7101A82E00870C7A6EE7031D001000
 01000F6864C6EE62FD74FBF335584FBF65967
DTSTAMP:20161206T210551Z
ATTENDEE;PARTSTAT=NEEDS-ACTION;CN=opnfv-tech-discuss@lists.opnfv.org;ROLE=R
 EQ_PARTICIPANT;RSVP=TRUE:mailto:opnfv-tech-discuss@lists.opnfv.org
ATTENDEE;PARTSTAT=NEEDS-ACTION;CN=Fatemeh Abdollahei;ROLE=REQ_PARTICIPANT;R
 SVP=TRUE:mailto:f.abdolla...@yahoo.com
ORGANIZER;CN=Carlos Goncalves:mailto:carlos.goncal...@neclab.eu
RRULE:FREQ=WEEKLY;WKST=SU;INTERVAL=1;BYDAY=TU
EXDATE;TZID=Europe/London:20151027T06,20151110T06,20160315T06,2
 0160426T07,20160621T07,20160823T07
X-MICROSOFT-CDO-APPT-SEQUENCE:13
X-MICROSOFT-CDO-OWNERAPPTID:1571821535
X-MICROSOFT-CDO-BUSYSTATUS:TENTATIVE
X-MICROSOFT-CDO-INTENDEDSTATUS:BUSY
X-MICROSOFT-CDO-ALLDAYEVENT:FALSE
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-INSTTYPE:1
X-MICROSOFT-DISALLOW-COUNTER:FALSE
X-YAHOO-YID:yahoo.calendar.acl.writer
X-YAHOO-YID:yahoo.calendar.acl.writer
TRANSP:OPAQUE
STATUS:TENTATIVE
X-YAHOO-USER-STATUS:TENTATIVE
X-YAHOO-EVENT-STATUS:BUSY
BEGIN:VALARM
ACTION:DISPLAY
DESCRIPTION:
TRIGGER;RELATED=START:-PT15M
END:VALARM
END:VEVENT
BEGIN:VTIMEZONE
TZID:Europe/London
TZURL:http://tzurl.org/zoneinfo/Europe/London
X-LIC-LOCATION:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19810329T01
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+
TZNAME:GMT
DTSTART:19961027T02
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
BEGIN:STANDARD
TZOFFSETFROM:-000115
TZOFFSETTO:+
TZNAME:GMT
DTSTART:18471201T00
RDATE:18471201T00
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19160521T02
RDATE:19160521T02
RDATE:19170408T02
RDATE:19180324T02
RDATE:19190330T02
RDATE:19200328T02
RDATE:19210403T02
RDATE:19220326T02
RDATE:19230422T02
RDATE:19240413T02
RDATE:19250419T02
RDATE:19260418T02
RDATE:19270410T02
RDATE:19280422T02
RDATE:19290421T02
RDATE:19300413T02
RDATE:19310419T02
RDATE:19320417T02
RDATE:19330409T02
RDATE:19340422T02
RDATE:19350414T02
RDATE:19360419T02
RDATE:19370418T02
RDATE:19380410T02
RDATE:19390416T02
RDATE:19400225T02
RDATE:19460414T02
RDATE:19470316T02
RDATE:19480314T02
RDATE:19490403T02
RDATE:19500416T02
RDATE:19510415T02
RDATE:19520420T02
RDATE:19530419T02
RDATE:19540411T02
RDATE:19550417T02
RDATE:19560422T02
RDATE:19570414T02
RDATE:19580420T02
RDATE:19590419T02
RDATE:19600410T02
RDATE:19610326T02
RDATE:19620325T02
RDATE:19630331T02
RDATE:19640322T02
RDATE:19650321T02
RDATE:19660320T02
RDATE:19670319T02
RDATE:19680218T02
RDATE:19720319T02
RDATE:19730318T02
RDATE:19740317T02
RDATE:19750316T02
RDATE:19760321T02
RDATE:19770320T02
RDATE:19780319T02
RDATE:19790318T02
RDATE:19800316T02
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+
TZNAME:GMT
DTSTART:19161001T03
RDATE:19161001T03
RDATE:19170917T03
RDATE:19180930T03
RDATE:19190929T03
RDATE:19201025T03
RDATE:19211003T03
RDATE:19221008T03
RDATE:19230916T03
RDATE:19240921T03
RDATE:19251004T030

[opnfv-tech-discuss] Canceled: OPNFV Doctor weekly meeting

2017-07-10 Thread Carlos Goncalves
BEGIN:VCALENDAR
METHOD:CANCEL
PRODID:Microsoft Exchange Server 2010
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Pacific Standard Time
BEGIN:STANDARD
DTSTART:16010101T02
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=1SU;BYMONTH=11
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:16010101T02
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=2SU;BYMONTH=3
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
ORGANIZER;CN=Carlos Goncalves:MAILTO:carlos.goncal...@neclab.eu
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=opnfv-tech
 -disc...@lists.opnfv.org:MAILTO:opnfv-tech-discuss@lists.opnfv.org
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='Canio Cil
 lis':MAILTO:canio.cil...@de.ibm.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='Pierre Ly
 nch':MAILTO:ply...@ixiacom.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='Shobhan A
 yyadevaraSesha (sayyadev)':MAILTO:sayya...@cisco.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN="'GUPTA, AL
 OK'":MAILTO:ag1...@att.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN="'Vul, Alex
 '":MAILTO:alex@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='Alan McNa
 mee':MAILTO:alan...@openet.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='Andrew Ve
 itch':MAILTO:andrew.vei...@netcracker.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN="'Kedalagud
 de, Meghashree Dattatri'":MAILTO:meghashree.dattatri.kedalagu...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=Umar Faroo
 q:MAILTO:umar.far...@neclab.eu
DESCRIPTION;LANGUAGE=en-US:Doctor is a fault management and maintenance pro
 ject to develop and realize the consequent implementation for the OPNFV re
 ference platform.\n\n•   Doctor page: https://wiki.opnfv.org/doctor\
 n•   When: Every Tuesday 6:00-7:00 PT (Your local time)\n•   Agend
 a: https://etherpad.opnfv.org/p/doctor_meetings\n•   IRC channel: #o
 pnfv-doctor @ Freenode (Web Chat)\n•   To join: https://global.gotomeeting.com/join/819733
 085\n•   You can also dial in using your phone.\no   Access Code
 : 819-733-085\n\no   United States +1 (312) 757-3126\no   Australi
 a +61 2 8355 1040\no   Austria +43 7 2088 0034\no   Belgium +32 (0
 ) 28 93 7018\no   Canada +1 (647) 497-9350\no   Denmark +45 69 91 
 88 64\no   Finland +358 (0) 923 17 0568\no   France +33 (0) 170 95
 0 592\no   Germany +49 (0) 692 5736 7211\no   Ireland +353 (0) 15 
 360 728\no   Italy +39 0 247 92 13 01\no   Netherlands +31 (0) 208
  080 219\no   New Zealand +64 9 280 6302\no   Norway +47 75 80 32 
 07\no   Spain +34 911 82 9906\no   Sweden +46 (0) 853 527 836\no  
  Switzerland +41 (0) 435 0167 13\no   United Kingdom +44 (0) 330 2
 21 0086\n\n\nThanks\,\nThe OPNFV Doctor team\n\n\n
RRULE:FREQ=WEEKLY;INTERVAL=1;BYDAY=TU;WKST=SU
EXDATE;TZID=Pacific Standard Time:20151027T06,20151110T06,20160315T
 06,20160426T06,20160621T06,20160823T06,20161220T06,201
 61227T06,20170103T06,20170221T06,20170509T06,20170613T0600
 00
SUMMARY;LANGUAGE=en-US:Canceled: OPNFV Doctor weekly meeting
DTSTART;TZID=Pacific Standard Time:20150120T06
DTEND;TZID=Pacific Standard Time:20150120T07
UID:04008200E00074C5B7101A82E00870C7A6EE7031D001000
 01000F6864C6EE62FD74FBF335584FBF65967
CLASS:PUBLIC
PRIORITY:1
DTSTAMP:20170710T113841Z
TRANSP:OPAQUE
STATUS:CANCELLED
SEQUENCE:20
LOCATION;LANGUAGE=en-US:GoToMeeting
X-MICROSOFT-CDO-APPT-SEQUENCE:20
X-MICROSOFT-CDO-OWNERAPPTID:1571821535
X-MICROSOFT-CDO-BUSYSTATUS:FREE
X-MICROSOFT-CDO-INTENDEDSTATUS:FREE
X-MICROSOFT-CDO-ALLDAYEVENT:FALSE
X-MICROSOFT-CDO-IMPORTANCE:2
X-MICROSOFT-CDO-INSTTYPE:1
X-MICROSOFT-DISALLOW-COUNTER:FALSE
END:VEVENT
BEGIN:VEVENT
SUMMARY:Canceled: OPNFV Doctor weekly meeting
DTSTART;TZID=Pacific Standard Time:20150616T060000
DTEND;TZID=Pacific Standard Time:20150616T070000
UID:04008200E00074C5B7101A82E00870C7A6EE7031D001000
 01000F6864C6EE62FD74FBF335584FBF65967
RECURRENCE-ID;TZID=Pacific Standard Time:20150616T00
CLASS:PUBLIC
PRIORITY:1
DTSTAMP:20170710T113841Z
TRANSP:OPAQUE
STATUS:CONFIRMED
SEQUENCE:20
LOCATION:GoToMeeting
X-MICROSOFT-CDO-APPT-SEQUENCE:20
X-MICROSOFT-CDO-OWNERAPPTID:1571821535
X-MICROSOFT-CDO-BUSYSTATUS:BUSY
X-MICROSOFT-CDO-INTENDEDSTATUS:FREE
X-MICROSOFT-CDO-ALLDAYEVENT:FALSE
X-MICROSOFT-CDO-IMPORTANCE:2
X-MICROSOFT-CDO-INSTTYPE:1
X-MICROSOFT-DISALLOW-COUNTER:FALSE
END:VEVENT
BEGIN:VEVENT
SUMMARY:Canceled: OPNFV Doctor weekly meeting
DTSTART;TZID=Pacific Standard Time:20150623T060000
DTEND;TZID=Pacific Standard Time:20150623T070000
UID:04008200E00074C5B7101A82E00870C7A6EE7031D00100

Re: [opnfv-tech-discuss] OPNFV Danube SFC physical scenario deployment doubts

2017-07-10 Thread Manuel Buil
Hi Andres,

Unfortunately, that scenario is not supported in Danube. These are the
ones supported:

https://wiki.opnfv.org/display/SWREL/Danube+Scenario+Status

What statistics do you need from SFC? Maybe you can collect them in
another way.

When the SFC scenarios are successfully deployed, no VMs or SFC chains
exist yet.

Regards,
Manuel


On Fri, 2017-07-07 at 10:58 +0200,
andres.sanchez.ra...@estudiant.upc.edu wrote:
> Hello,
> 
> My name is Andrés Sánchez. I am working on deploying the OPNFV HA
> scenario in a laboratory at my university. I have been trying to
> deploy the scenario manually but have encountered some problems.
> My current environment consists of:
> 
> 5 Nodes: 2 CPUs, 8GB RAM.
> 1 Fuel master: 2 CPUs, 8GB RAM.
> 1 workstation: 2CPUs, 4GB RAM.
> 
> I am trying to deploy the
> "ha_odl-l2_sfc_heat_ceilometer_scenario.yaml" scenario on this lab. I
> have previously deployed other OpenStack scenarios on this lab, so the
> networking is properly configured: on each OpenStack node port I
> assigned untagged traffic to PXE-admin, VLAN 200 to the public
> network, and the defaults to storage, management and private
> (102, 101, 103). The PXE-admin network is 192.168.100.0/24 and the
> public network is 172.16.10.0/24, with the gateways at .1 respectively.
> 
> I already set up the Fuel master with the Danube ISO
> "opnfv-danube.1.0.iso". I previously deployed an OpenStack
> cluster, manually installing the ODL and Tacker plugins, but then I
> found that it did not appear to be correctly configured, and neither
> did ODL (I created a couple of VMs and could not connect to them).
> When creating the environment, which option is supposed to be
> selected: "Neutron with ML2 plugin & Neutron with tunneling
> segmentation" or "OpenDaylight with tunneling segmentation"?
> Another thing I found is that Fuel did not provide the option to add
> Telemetry-MongoDB in node assignment (I need Telemetry because I
> have to check the meters it provides). I did check the box for
> "Assign public network to all nodes".
> 
> I am now trying to deploy from my workstation, but I have doubts
> about how the DHA/DEA files are supposed to be written. I am
> interested in:
> A) Using the actual Fuel master as Fuel:
> 
> sudo bash ./deploy.sh -b file:///home/lab232/fuel/deploy/config -f
> -l  
> devel-pipeline -p lab232 -s  
> ha_odl-l2_sfc_heat_ceilometer_scenario.yaml -i  
> file:///home/lab232/opnfv-danube.1.0.iso -e
> 
> B) Creating a VM inside my workstation to host the Fuel master and
> using all the other nodes as OpenStack nodes.
> 
> sudo bash ./deploy.sh -b file:///home/lab232/fuel/deploy/config -F
> -l  
> devel-pipeline -p lab232 -s  
> ha_odl-l2_sfc_heat_ceilometer_scenario.yaml -i  
> file:///home/lab232/opnfv-danube.1.0.iso -e
> 
> For both scenarios I have encountered several issues and errors; I
> am attaching the configuration files I use. Could you please
> provide some guidance on the correct way to write these files?
> 
> When I try to deploy scenario A, I get IPMI adapter errors:
> Exception: Address lookup for None failed
> Could not open socket!
> Error: Unable to establish IPMI v2 / RMCP+ session
> 
> When I try to deploy scenario B, I get: "Exception: Device
> "pxebr" does not exist." I am guessing I need to declare a bridge
> called pxebr on my host machine; please confirm.
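For reference, creating such a bridge with iproute2 might look like the sketch below. This is a guess at what the deploy script expects, not taken from the Fuel documentation; the NIC name eth1 and the use of the PXE-admin .1 address are assumptions for this lab.

```shell
# Hedged sketch: create a "pxebr" Linux bridge on the deploy host and
# enslave the PXE-facing NIC. "eth1" is an assumed interface name;
# the address is the PXE-admin gateway (.1) mentioned above.
ip link add name pxebr type bridge
ip link set dev pxebr up
ip link set dev eth1 master pxebr          # assumed PXE-admin NIC
ip addr add 192.168.100.1/24 dev pxebr     # assumed PXE-admin gateway
```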
> 
> I am attaching my Dea/Dha files!
> 
> One final question: when the environment is correctly deployed, is
> OpenStack supposed to have some VMs declared? Is ODL supposed to
> have any SFC paths or anything similar declared?
> 
> Thank you in advance for your help!
> 
> Best regards,
> 
> ___
> opnfv-tech-discuss mailing list
> opnfv-tech-discuss@lists.opnfv.org
> https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] [barometer] Weekly Call

2017-07-10 Thread Tahhan, Maryam
BEGIN:VCALENDAR
METHOD:REQUEST
PRODID:Microsoft Exchange Server 2010
VERSION:2.0
BEGIN:VTIMEZONE
TZID:GMT Standard Time
BEGIN:STANDARD
DTSTART:16010101T020000
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=-1SU;BYMONTH=10
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:16010101T010000
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=-1SU;BYMONTH=3
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
ORGANIZER;CN="Tahhan, Maryam":MAILTO:maryam.tah...@intel.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN=opnfv-tec
 h-disc...@lists.opnfv.org:MAILTO:opnfv-tech-discuss@lists.opnfv.org
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="Power, Da
 mien":MAILTO:damien.po...@intel.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="Mcmahon, 
 Tony B":MAILTO:tony.b.mcma...@intel.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="Campbell,
  Wesley":MAILTO:wesley.campb...@intel.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="Ray, Bj":M
 AILTO:bj@intel.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="Duignan, 
 Andrew":MAILTO:andrew.duig...@intel.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="Parker, D
 aniel":MAILTO:daniel.par...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='Mario To
 rrecillas Rodriguez':MAILTO:mario.rodrig...@arm.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="Chornyi, 
 TarasX":MAILTO:tarasx.chor...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="'GUPTA, A
 LOK'":MAILTO:ag1...@att.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="'MORTON, 
 ALFRED C (AL)'":MAILTO:acmor...@att.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='Lawrence
  Lamers':MAILTO:ljlam...@vmware.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='Shobhan 
 AyyadevaraSesha (sayyadev)':MAILTO:sayya...@cisco.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="Seiler, G
 lenn (Wind River)":MAILTO:glenn.sei...@windriver.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='Scott Ma
 nsfield':MAILTO:scott.mansfi...@ericsson.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="'Bernier,
  Daniel (520165)'":MAILTO:daniel.bern...@bell.ca
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='Canio Ci
 llis':MAILTO:canio.cil...@de.ibm.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="'Tomasini
 , Lorenzo'":MAILTO:lorenzo.tomas...@fokus.fraunhofer.de
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='Srinivas
 a Chaganti':MAILTO:schaga...@versa-networks.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="'Laporte,
  Laurent [CTO]'":MAILTO:laurent.lapo...@sprint.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='Ola Lilj
 edahl':MAILTO:ola.liljed...@arm.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="'Jaggi, M
 anish'":MAILTO:manish.ja...@cavium.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='Gabor Ha
 lász':MAILTO:gabor.hal...@ericsson.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='杨艳':
 MAILTO:yangya...@chinamobile.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="'SULLIVAN
 , BRYAN L'":MAILTO:bs3...@att.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='Andrew V
 eitch':MAILTO:andrew.vei...@netcracker.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="'DRUTA, D
 AN'":MAILTO:dd5...@att.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='Javier A
 rauz':MAILTO:j.javier.ar...@gmail.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="'Sen, Pro
 dip'":MAILTO:prodip@hpe.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='Alan McN
 amee':MAILTO:alan...@openet.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="Kedalagud
 de, Meghashree Dattatri":MAILTO:meghashree.dattatri.kedalagu...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="Gherghe, 
 Calin":MAILTO:calin.gher...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="Murthy, K
 rishna J":MAILTO:krishna.j.mur...@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='Pierre L
 ynch':MAILTO:ply...@ixiacom.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="Sun, Ning"
 :MAILTO:ning@intel.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='rgrigar@
 linuxfoundation.org':MAILTO:rgri...@linuxfoundation.org
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN='Marco Va
 rlese':MAILTO:marco.varl...@suse.com
ATTENDEE;ROLE=OPT-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=FALSE;CN="Reddy, Ra
 ghuveer":MAILTO:raghuveer.re...

Re: [opnfv-tech-discuss] [multisite]stepping down from PTL

2017-07-10 Thread joehuang
Thank you, Howard. You helped a lot in the multi-site project, especially in
its creation.

Best Regards
Chaoyi Huang (joehuang)

From: Zhipeng Huang [zhipengh...@gmail.com]
Sent: 10 July 2017 14:13
To: joehuang
Cc: opnfv-tech-discuss
Subject: Re: [opnfv-tech-discuss] [multisite]stepping down from PTL

Thanks Joe,

You have led an incredible effort in building the multisite project, and it
was a great journey for those of us who were among the first participants.



On Mon, Jul 10, 2017 at 9:04 AM, joehuang 
mailto:joehu...@huawei.com>> wrote:

Hello,



It has been nice to work with you in Multi-site and OPNFV since 2015. There is
still a lot of interesting work to do in the multi-site area, just as we
discussed in the thread "multi-site next
step?" (https://lists.opnfv.org/pipermail/opnfv-tech-discuss/2017-June/016733.html).



Unfortunately, I am not able to continue in the multi-site PTL role, because
my time is now occupied mostly by non-open-source activities. I would like to
thank all of you, and I am stepping down as multi-site PTL.



Just as we discussed in the thread "multi-site next step?", the initial goal
of the multi-site project was to identify and fill the gaps in OpenStack for
multi-site requirements. Most of them have now been covered by the respective
projects; in particular, we have a solution for supporting mission-critical
application high availability in multi-site deployments (across multiple
OpenStack clouds) through the Tricircle project, though the integration in
OPNFV still needs to be completed by subsequent contributions.



PTL candidates are welcome to continue the project. If no PTL candidate steps
up within two weeks, it is fine to ask for the termination of the multi-site
project, because it has reached its initial goal.



New projects or installer projects can do the integration and testing for the
various multi-site requirements. I'll do my best to review multi-site-related
patches if needed; in particular, if you need help with Tricircle
integration, you can also ask the Tricircle team in the OpenStack community.





Best Regards
Chaoyi Huang (joehuang)





--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Re: [opnfv-tech-discuss] Multiple docker containers from one project

2017-07-10 Thread Cedric OLLIVIER
I'm sorry, I don't understand the point of the git clone.
Here we simply install Functest via the Python package.
pip install makes a local copy, because the package is not published on PyPI
yet, and removes that copy after installing the package.

Why should we clone the repository again?

Cédric

2017-07-10 3:10 GMT+02:00 Beierl, Mark :

> Why should we avoid copy?  Why do a git clone of the existing git clone?
> Almost every Dockerfile example I have seen uses COPY, not a second git
> checkout of the same code.
> checkout of the same code.
>
> Regards,
> Mark
>
> *Mark Beierl*
> SW System Sr Principal Engineer
> *Dell **EMC* | Office of the CTO
> mobile +1 613 314 8106 <1-613-314-8106>
> mark.bei...@dell.com
>
> On Jul 9, 2017, at 21:00, Cedric OLLIVIER 
> wrote:
>
> No, we cannot (parent directory), and we should mostly avoid copying files
> (except for configurations).
>
> For instance, you could have a look at
> https://gerrit.opnfv.org/gerrit/#/c/36963/.
> All Dockerfiles simply download Alpine packages, python packages (Functest
> + its dependencies) and upper constraints files.
> testcases.yaml is copied from host as it differs between our containers
> (smoke, healthcheck...).
>
> Cédric
>
>
>
> 2017-07-10 1:25 GMT+02:00 Beierl, Mark :
>
> My only concern is with Dockerfiles that do a "COPY . /home" in them.
> That means all the code would have to be located under the docker
> directory.
>>
>> I suppose multiple ../ paths can be used instead.
>>
>> Regards,
>> Mark
>>
>> *Mark Beierl*
>> SW System Sr Principal Engineer
>> *Dell **EMC* | Office of the CTO
>> mobile +1 613 314 8106 <1-613-314-8106>
>> mark.bei...@dell.com
>>
>> On Jul 9, 2017, at 19:03, Julien  wrote:
>>
>> Hi Cédric,
>>
>> The patch at https://gerrit.opnfv.org/gerrit/#/c/36963/ is exactly what I
>> mean. Let's collect opinions from the releng team.
>>
>> Julien
>>
>>
>>
>> On Monday, 10 July 2017 at 04:15, Cedric OLLIVIER wrote:
>>
>>> Hello,
>>>
>>> Please see https://gerrit.opnfv.org/gerrit/#/c/36963/ which introduces
>>> several containers for Functest too.
>>> I think the tree conforms with the previous requirements.
>>>
>>> Automating builds on Docker Hub is a good solution too.
>>>
>>> Cédric
>>>
>>> 2017-07-09 12:10 GMT+02:00 Julien :
>>>
 Hi Jose,

 According to the current implementation, the script only supports
 one Dockerfile. My personal suggestion is:
 1. List all the sub-directories that contain a "Dockerfile".
 2. Run a build for each sub-directory found in #1.
 3. For the names, use the project name in the top directory and
 project_name-sub_directory_name in the sub-directories.
 This means few changes to the current script, and it is easy for projects
 to manage.
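A minimal shell sketch of that suggestion follows. The `opnfv/` image prefix and the `tag_for` helper are illustrative assumptions, not part of the releng scripts; the loop only prints the build commands it would run.

```shell
# tag_for <project> <dir>: Julien's naming rule - the top-level Dockerfile
# keeps the project name, sub-directories get project_name-sub_directory_name.
tag_for() {
    project=$1; dir=$2
    if [ "$dir" = "." ]; then
        printf '%s' "$project"
    else
        printf '%s-%s' "$project" "$(basename "$dir")"
    fi
}

# Walk the repo and emit one docker build command per Dockerfile found.
build_all() {
    project=$1
    find . -name Dockerfile | while read -r df; do
        dir=$(dirname "$df")
        echo "docker build -t opnfv/$(tag_for "$project" "$dir") $dir"
    done
}
```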

 /Julien

 On Friday, 7 July 2017 at 23:35, Beierl, Mark wrote:

> Hello,
>
> Having looked over the Docker Hub build service, I also think this
> might be the better approach. It means less code for us to maintain, and
> the merge job from OPNFV Jenkins can use the web hook to remotely trigger
> the job on Docker Hub.
>
> Who has the opnfv credentials for docker-hub, and the credentials for
> the GitHub mirror that can set this up?  Is that the LF Helpdesk?
>
> Regards,
> Mark
>
> *Mark Beierl*
> SW System Sr Principal Engineer
> *Dell **EMC* | Office of the CTO
> mobile +1 613 314 8106 <1-613-314-8106>
> mark.bei...@dell.com
>
> On Jul 7, 2017, at 11:01, Xuan Jia  wrote:
>
> +1 Using build service from docker-hub
>
>
> On Thu, Jul 6, 2017 at 11:42 PM, Yujun Zhang (ZTE) <
> zhangyujun+...@gmail.com> wrote:
>
>> Has anybody considered using the build service from Docker Hub[1]?
>>
>> It supports multiple Dockerfiles from the same repository and is easy to
>> integrate with the OPNFV GitHub mirror.
>>
>> [1]: https://docs.docker.com/docker-hub/builds/
>>
>>
>> On Thu, Jul 6, 2017 at 11:02 PM Jose Lausuch <
>> jose.laus...@ericsson.com> wrote:
>>
>>> Hi Mark,
>>>
>>>
>>>
>>> I would lean toward option 1); it sounds better than searching for a
>>> file. We could define specific values of the DOCKERFILE variable for
>>> each project.
>>>
>>>
>>>
>>> /Jose
>>>
>>>
>>>
>>>
>>>
>>> *From:* Beierl, Mark [mailto:mark.bei...@dell.com]
>>> *Sent:* Thursday, July 06, 2017 16:18 PM
>>> *To:* opnfv-tech-discuss@lists.opnfv.org
>>> *Cc:* Julien ; Fatih Degirmenci <
>>> fatih.degirme...@ericsson.com>; Jose Lausuch <
>>> jose.laus...@ericsson.com>
>>> *Subject:* Re: [opnfv-tech-discuss] Multiple docker containers from
>>> one project
>>>
>>>
>>>
>>> Ideas:
>>>
>>>
>>>
>>>- Change the DOCKERFILE parameter in the releng jjb so that it can
>>>accept a comma-delimited list of Dockerfile names and paths. The problem
>>>with this, of course, is how do I default it to be different for StorPerf
>>>vs. Functest, etc.?
>>>- Change the opnfv-docker.