Re: New OpenStack instance - status

2015-03-09 Thread Kevin Fenzi
On Mon, 09 Mar 2015 13:48:49 +0100
Miroslav Suchý msu...@redhat.com wrote:

 On 03/07/2015 06:59 PM, Kevin Fenzi wrote:
  * We will need to adapt to not giving every instance a floating ip.
  For copr, I think this would be fine, as you don't care that they
  have
 
 *nod* I was not sure how a VM behaves when it does not have a public IP, so I
 tested it. It is basically behind NAT and the whole internet is accessible.
 Therefore yes, Copr builders do not need a floating IP.

Right. In fact it's nicer as they are no longer exposed on the net at
all.

 However, this OpenStack instance behaves differently from the old one.
 When you start up a VM, you do not get a public IP automatically.

Yes. We changed that behavior deliberately. We thought it would be good
to make sure all instances got an external floating ip. In retrospect
this just caused us problems, so I think the default behavior in the
new cloud is better. It does mean we may need to adjust some ansible
scripts to make sure they request a floating ip once we move things
over. 
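
Not the actual playbook change, but a rough sketch of what requesting and
attaching a floating IP could look like with python-novaclient of that era;
the credentials, endpoint, pool name and instance name below are placeholders,
not values from our setup:

    from novaclient import client as nova_client

    # Placeholder credentials and endpoint -- illustrative only.
    nova = nova_client.Client("2", "someuser", "somepassword", "sometenant",
                              "https://fed-cloud09.cloud.fedoraproject.org:5000/v2.0")

    server = nova.servers.find(name="some-instance")  # hypothetical instance name
    fip = nova.floating_ips.create(pool="external")   # allocate from the external pool
    server.add_floating_ip(fip.ip)                    # attach it to the instance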

kevin





Re: New OpenStack instance - status

2015-03-09 Thread Fabio Alessandro Locati
Hi guys :),

On Mon, Mar 9, 2015 at 2:39 PM, Miroslav Suchý msu...@redhat.com wrote:

 So it would be:
   # 172.16.0.1/16 -- 172.21.0.1/20 - Free to take
   # 172.23.0.1/16 - free (but used by old cloud)
   # 172.24.0.1/24 - RESERVED it is used internally for OS
   # 172.25.0.1/20  - Cloudintern (172.25.0.1 - 172.25.15.254)
   # 172.25.16.1/20 - infrastructure (172.25.16.1 - 172.25.31.254)
   # 172.25.32.1/20 - persistent (172.25.32.1 - 172.25.47.254)
   # 172.25.48.1/20 - transient (172.25.48.1 - 172.25.63.254)
   # 172.25.64.1/20 - scratch (172.25.64.1 - 172.25.79.254)
   # 172.25.80.1/20 - copr (172.25.80.1 - 172.25.95.254)
   # 172.25.96.1/20 -- 172.25.240.1/20 - free
   # 172.26.0.1/16 -- 172.31.0.1/16 - free


 Comments?



It looks like you forgot the 172.22.0.1/16 range. Also, in the 172.21.0.1/16
range, by putting a /20 on 172.21.0.1 you are leaving behind the subnets from
172.21.16.1/20 to 172.21.240.1/20.
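
A quick way to see the gap, using Python's standard ipaddress module (just a
sketch; the prefixes come from the proposal quoted above):

    import ipaddress

    # The proposal lists only the first /20 of 172.21.0.0/16 and skips 172.22.0.0/16.
    block = ipaddress.ip_network("172.21.0.0/16")
    listed = {ipaddress.ip_network("172.21.0.0/20")}

    unlisted = [net for net in block.subnets(new_prefix=20) if net not in listed]
    print(len(unlisted))              # 15 /20 subnets of 172.21.0.0/16 are unaccounted for
    print(unlisted[0], unlisted[-1])  # 172.21.16.0/20 172.21.240.0/20
    print(ipaddress.ip_network("172.22.0.0/16"))  # missing from the plan entirely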

Fabio

-- 
Fabio Alessandro Locati

PGP Fingerprint: B960 BE9D E7A8 FA12 273A  98BB 6D6A 29D6 709A 7851
https://keybase.io/fale

Re: New OpenStack instance - status

2015-03-09 Thread Kevin Fenzi
On Mon, 09 Mar 2015 14:39:52 +0100
Miroslav Suchý msu...@redhat.com wrote:

 
 So it would be:
   # 172.16.0.1/16 -- 172.21.0.1/20 - Free to take
   # 172.23.0.1/16 - free (but used by old cloud)
   # 172.24.0.1/24 - RESERVED it is used internally for OS
   # 172.25.0.1/20  - Cloudintern (172.25.0.1 - 172.25.15.254)
   # 172.25.16.1/20 - infrastructure (172.25.16.1 - 172.25.31.254)
   # 172.25.32.1/20 - persistent (172.25.32.1 - 172.25.47.254)
   # 172.25.48.1/20 - transient (172.25.48.1 - 172.25.63.254)
   # 172.25.64.1/20 - scratch (172.25.64.1 - 172.25.79.254)
   # 172.25.80.1/20 - copr (172.25.80.1 - 172.25.95.254)
   # 172.25.96.1/20 -- 172.25.240.1/20 - free
   # 172.26.0.1/16 -- 172.31.0.1/16 - free
 
 
 Comments?

Sounds good to me. 

When we migrate from old to new we are going to have to deal with the
floating IPs. I guess we could give the new OpenStack the entire
range, then move those instances that expect to be at specific IPs (so
they can claim them), then move the rest and just give them the next
available IP in the external range.

Also, I'd like to reserve some external IPs for the other cloud. i.e., once
we move to this new cloud, I want to keep, say, fed-cloud01/02 out and
redo them with Juno or Kilo or whatever so we can more quickly move to
a new cloud version if needed. 

I guess that should be something like: 

209.132.184.x:

 .1 to .25   reserved for hardware nodes
 .26 to .30  reserved for 'test openstack'
 .31 to .250 reserved for 'production openstack'

(and of course some instances may have specific IPs in the production
range).
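
A tiny sanity check of that split with Python's ipaddress module (a sketch;
the boundaries are just the ones above):

    import ipaddress

    base = "209.132.184."
    hardware   = [ipaddress.ip_address(base + str(i)) for i in range(1, 26)]   # .1 - .25
    test_cloud = [ipaddress.ip_address(base + str(i)) for i in range(26, 31)]  # .26 - .30
    production = [ipaddress.ip_address(base + str(i)) for i in range(31, 251)] # .31 - .250

    print(len(hardware), len(test_cloud), len(production))  # 25 5 220 addresses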

kevin



Re: New OpenStack instance - status

2015-03-09 Thread Miroslav Suchý
On 03/07/2015 06:59 PM, Kevin Fenzi wrote:
 * We will need to adapt to not giving every instance a floating ip. For
   copr, I think this would be fine, as you don't care that they have

*nod* I was not sure how a VM behaves when it does not have a public IP, so I tested it.
It is basically behind NAT and the whole internet is accessible.
Therefore yes, Copr builders do not need a floating IP.

However, this OpenStack instance behaves differently from the old one. When
you start up a VM, you do not get a public IP automatically.

-- 
Miroslav Suchy, RHCE, RHCDS
Red Hat, Senior Software Engineer, #brno, #devexp, #fedora-buildsys

Re: New OpenStack instance - status

2015-03-09 Thread Kevin Fenzi
On Mon, 09 Mar 2015 10:29:36 +0100
Miroslav Suchý msu...@redhat.com wrote:

 On 03/07/2015 07:29 PM, Kevin Fenzi wrote:
  * I see that the tenants have the same internal 172.16.0.0 net right
now, can we make sure we separate them from each other? i.e., I
  don't want an infrastructure instance being able to talk to a copr
  builder if we can avoid it. 
 
 Are you sure?
 From: playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml
   # 172.16.0.1/12 -- 172.21.0.1/12 - Free to take
   # 172.23.0.1/12 - free (but used by old cloud)
   # 172.24.0.1/12 - RESERVED it is used internally for OS
   # 172.25.0.1/12 - Cloudintern
   # 172.26.0.1/12 - infrastructure
   # 172.27.0.1/12 - persistent
   # 172.28.0.1/12 - transient
   # 172.29.0.1/12 - scratch
   # 172.30.0.1/12 - copr
   # 172.31.0.1/12 - Free to take
 And checking the dashboard I see infra in the .26 network and copr in .16.
 Hmm, that is a different one, but copr should have .30. The playbook seems
 to be correct. Strange.

Yeah, I saw those comments; I was looking at the dashboard: 

https://fed-cloud09.cloud.fedoraproject.org/dashboard/admin/networks/

Log in as admin and see that page... 

copr-subnet 172.16.0.0/12 
infrastructure-subnet 172.16.0.0/12 

Not sure if that's just because they are all in the same /12?
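
That does appear to be the cause (elsewhere in the thread the /12 prefix is
identified as the mistake): 172.16.0.0/12 spans 172.16.0.0 through
172.31.255.255, so every tenant network defined with a /12 prefix is
effectively the same subnet. A quick check with Python's ipaddress module,
using the addresses from the playbook comments:

    import ipaddress

    supernet = ipaddress.ip_network("172.16.0.0/12")
    print(supernet.broadcast_address)           # 172.31.255.255 -- the /12 covers 172.16-172.31

    infra = ipaddress.ip_address("172.26.0.1")  # infrastructure, per the playbook comment
    copr  = ipaddress.ip_address("172.30.0.1")  # copr, per the playbook comment
    print(infra in supernet, copr in supernet)  # True True -- hence the identical subnets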

  * Do we want to also revisit flavors available? Perhaps drop the
builder one and just use m1.large for it? we should have
  resources to use more cpus/mem and should make copr builds
  faster/better. 
 
 80 GB is too much, and 4 VCPUs too. I think having an extra flavor for
 the builder is nice, as we can change it at any time without affecting other
 instances/tenants.

ok. I think more CPUs (to make builds faster in many cases) would still
be welcome though, as well as more memory. Disk I don't think matters
as much. 

  * Is there any way to see how much space is available on the
  equalogics aside from just logging into it via ssh?
 
 Unfortunately no.
 I reported it as RFE some time ago.
   https://bugs.launchpad.net/cinder/+bug/1380555
 You can only see the amount of used space using cinder list and cinder show
 volume-id

ok.

kevin




Re: New OpenStack instance - status

2015-03-09 Thread Miroslav Suchý
On 03/09/2015 01:00 PM, Kevin Fenzi wrote:
 nova commands worked fine from here, but I didn't really try and do
 anything fancy. We could see if the euca stuff will just keep working
 for us for now. 

It works fine. It is just that if you miss some functionality (and I miss a
lot) and file an RFE, it will likely be rejected with the argument that you
should now use the openstack command instead.

-- 
Miroslav Suchy, RHCE, RHCDS
Red Hat, Senior Software Engineer, #brno, #devexp, #fedora-buildsys

Re: New OpenStack instance - status

2015-03-09 Thread Miroslav Suchý
On 03/07/2015 06:59 PM, Kevin Fenzi wrote:
 * Can we adjust the default tenant quotas in the playbooks? They seem a
   bit low to me given the amount of resources we have. 

I put (and tested) the quota for Copr (it is at the bottom of the playbook).
Can you please write quotas for the other tenants (or post them to me)? I
have no idea what the needs of those tenants are.

-- 
Miroslav Suchy, RHCE, RHCDS
Red Hat, Senior Software Engineer, #brno, #devexp, #fedora-buildsys

Re: New OpenStack instance - status

2015-03-09 Thread Kevin Fenzi
On Mon, 09 Mar 2015 11:25:20 +0100
Miroslav Suchý msu...@redhat.com wrote:

 On 03/07/2015 06:59 PM, Kevin Fenzi wrote:
  All thats set and I can see console in the web dash again just fine
  for any of the instances I tried, and they are all https using
  only. 
 
 Works for me too. Nice. Thanks.

Cool. 

   I tried to automate adding of SSH keys using this:
  I wonder if we shouldn't have something to update/upload everyones
  ssh keys. Might be handy but of course it's not a blocker/that
  important. We could even look at just tieing into our existing
  fedmsg listener (when someone with a cloud account changes ssh key,
  update the cloud). 
 
 Done. Search for the upload SSH keys for users action.
 However, it works only initially. Once a user changes his password it will
 fail. I ignore those cases with ignore_errors: yes, though.
 I have a pending RFE for OpenStack so that an admin is able to upload SSH
 keys for a user.
 
 I skipped (commented out) users:
   * twisted
   * cockpit
  as I do not know which SSH keys they use. Can somebody put the
  right values there?

Will have to find out. Those groups aren't from fas... 
 
   Anyway, I am able (again) to start VMs and log in to those VMs.
  Me too. I uploaded the F22 Alpha cloud image and it worked fine.
  (aside from cloud-init taking about 35 seconds to run; it seemed to be
  timing out on some metadata?)
  
  We should look at hooking our cloud image upload service into this
  soon so we can get images as soon as they are done.
 
 I will leave this one for somebody else.

Yeah, will ping oddshocks on it, but possibly wait until our final
re-install. 

  * Might be a good time to look at moving copr to f21? and builders
  also to be f21? (they should come up faster and in general be
  better than the el6 ones currently used, IMHO)
 
 I will start by moving the builders to F21 (this really limits us) and once
 that is finished I will move the backend and frontend. I'm afraid that by
 that time I will move them directly to F22 :)

Hopefully we can get there before then. ;) 

  * Right now ansible on lockbox01 is using euca2ools to manage cloud
instances, perhaps we could/should just move to nova now? Or this
could perhaps wait for us to move lockbox01 to rhel7. 
 
 I learned (the hard way) that the nova/cinder/neutron etc. commands are
 deprecated. The new preferred way is the openstack command from
 python-openstackclient. However, Icehouse ships version 0.3, and you
 should not think about using this command unless you have version 1.0
 available (Juno or Kilo, not sure which). It probably does not matter if
 you use ansible modules, but you may consider it if you are calling
 commands directly. #justsaying

ok. We may have to do some trial and error. 

nova commands worked fine from here, but I didn't really try and do
anything fancy. We could see if the euca stuff will just keep working
for us for now. 

kevin



Re: New OpenStack instance - status

2015-03-09 Thread Miroslav Suchý
On 03/09/2015 10:29 AM, Miroslav Suchý wrote:
 On 03/07/2015 07:29 PM, Kevin Fenzi wrote:
  * I see that the tenants have the same internal 172.16.0.0 net right
now, can we make sure we separate them from each other? i.e., I don't
want an infrastructure instance being able to talk to a copr builder
if we can avoid it. 
 Are you sure?
 From: playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml
   # 172.16.0.1/12 -- 172.21.0.1/12 - Free to take
   # 172.23.0.1/12 - free (but used by old cloud)
   # 172.24.0.1/12 - RESERVED it is used internally for OS
   # 172.25.0.1/12 - Cloudintern
   # 172.26.0.1/12 - infrastructure
   # 172.27.0.1/12 - persistent
   # 172.28.0.1/12 - transient
   # 172.29.0.1/12 - scratch
   # 172.30.0.1/12 - copr
   # 172.31.0.1/12 - Free to take
 And checking the dashboard I see infra in the .26 network and copr in .16. Hmm,
 that is a different one, but copr should have .30.
 The playbook seems to be correct. Strange.

Ah. Of course /12 is a mistake. It should be /16.
However, with /16 we would have only 7 free subnets. I would rather
use /20 subnets, which would give us 4094 IPs per subnet. That should be
enough, and it gives us plenty of subnets to use.


So it would be:
  # 172.16.0.1/16 -- 172.21.0.1/20 - Free to take
  # 172.23.0.1/16 - free (but used by old cloud)
  # 172.24.0.1/24 - RESERVED it is used internally for OS
  # 172.25.0.1/20  - Cloudintern (172.25.0.1 - 172.25.15.254)
  # 172.25.16.1/20 - infrastructure (172.25.16.1 - 172.25.31.254)
  # 172.25.32.1/20 - persistent (172.25.32.1 - 172.25.47.254)
  # 172.25.48.1/20 - transient (172.25.48.1 - 172.25.63.254)
  # 172.25.64.1/20 - scratch (172.25.64.1 - 172.25.79.254)
  # 172.25.80.1/20 - copr (172.25.80.1 - 172.25.95.254)
  # 172.25.96.1/20 -- 172.25.240.1/20 - free
  # 172.26.0.1/16 -- 172.31.0.1/16 - free
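
For reference, the /20 carving of 172.25.0.0/16 above can be generated
mechanically; a minimal sketch with Python's ipaddress module, with the tenant
names taken from the list:

    import ipaddress

    tenants = ["Cloudintern", "infrastructure", "persistent",
               "transient", "scratch", "copr"]

    # First six /20 subnets of 172.25.0.0/16, in the order proposed above.
    subnets = list(ipaddress.ip_network("172.25.0.0/16").subnets(new_prefix=20))[:6]
    for name, net in zip(tenants, subnets):
        hosts = list(net.hosts())
        # e.g. "172.25.80.0/20 copr: 172.25.80.1 - 172.25.95.254"
        print("%s %s: %s - %s" % (net, name, hosts[0], hosts[-1]))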


Comments?


-- 
Miroslav Suchy, RHCE, RHCDS
Red Hat, Senior Software Engineer, #brno, #devexp, #fedora-buildsys

Re: New OpenStack instance - status

2015-03-09 Thread Kevin Fenzi
On Mon, 09 Mar 2015 13:00:20 +0100
Miroslav Suchý msu...@redhat.com wrote:

 On 03/07/2015 06:59 PM, Kevin Fenzi wrote:
  * Can we adjust the default tenant quotas in the playbooks? They
  seem a bit low to me given the amount of resources we have. 
 
 I put (and tested) the quota for Copr (it is at the bottom of the playbook).
 Can you please write quotas for the other tenants (or post them to
 me)? I have no idea what the needs of those tenants are.

True, it could vary. 

Alright, let's just leave the rest at the defaults and we can adjust as we go. 

The new cloud should have a good deal more CPUs and memory than the old
one, but we will also need to see if the quota bugs are fixed (in the old
cloud it would miscount things pretty badly sometimes). 

kevin




Re: Freeze Break Request: Add more fedmsg endpoints for fedimg

2015-03-09 Thread David Gay


- Original Message -
 From: Ralph Bean rb...@redhat.com
 To: infrastructure@lists.fedoraproject.org
 Sent: Monday, March 9, 2015 10:59:54 AM
 Subject: Freeze Break Request:  Add more fedmsg endpoints for fedimg
 
 
 The fedimg uploader daemon that uploads cloud images to AWS is
 complaining that it doesn't have enough fedmsg endpoints.
 
 It needs an endpoint for each worker thread, one for the parent thread, and
 then twice that number so that the daemon can do work while an admin can run
 the commands by hand at the same time (otherwise, the daemon claims all the
 ports and the manual commands fail).
 
 Can I get two +1s to apply this and push it out to all our hosts?
 
 
 diff --git a/roles/fedmsg/base/templates/endpoints-fedimg.py.j2
 b/roles/fedmsg/base/templates/endpoints-fedimg.py.j2
 index b13f3a7..5a4fb9d 100644
 --- a/roles/fedmsg/base/templates/endpoints-fedimg.py.j2
 +++ b/roles/fedmsg/base/templates/endpoints-fedimg.py.j2
 @@ -4,11 +4,15 @@ suffix  = 'stg.phx2.fedoraproject.org'
  suffix = 'phx2.fedoraproject.org'
  {% endif %}
 
 +primary_threads = 4
 +atomic_threads = 2
 +NUM_FEDIMG_PORTS = 2 * ((primary_threads + atomic_threads) + 1)
 +
  config = dict(
  endpoints={
  fedimg.fedimg01: [
  tcp://fedimg01.%s:30%0.2i % (suffix, i)
 -for i in range(4)
 +for i in range(NUM_FEDIMG_PORTS)
  ],
  },
  )
 

+1 from me

Re: Freeze Break Request: Add more fedmsg endpoints for fedimg

2015-03-09 Thread Stephen John Smoogen
+1 from me
On Mar 9, 2015 12:02 PM, David Gay d...@redhat.com wrote:



 - Original Message -
  From: Ralph Bean rb...@redhat.com
  To: infrastructure@lists.fedoraproject.org
  Sent: Monday, March 9, 2015 10:59:54 AM
  Subject: Freeze Break Request:  Add more fedmsg endpoints for fedimg
 
 
  The fedimg uploader daemon that uploads cloud images to AWS is
  complaining that it doesn't have enough fedmsg endpoints.
 
  It needs an endpoint for each worker thread, one for the parent thread,
 and
  then twice that number so that the daemon can do work while an admin can
 run
  the commands by hand at the same time (otherwise, the daemon claims all
 the
  ports and the manual commands fail).
 
  Can I get two +1s to apply this and push it out to all our hosts?
 
 
  diff --git a/roles/fedmsg/base/templates/endpoints-fedimg.py.j2
  b/roles/fedmsg/base/templates/endpoints-fedimg.py.j2
  index b13f3a7..5a4fb9d 100644
  --- a/roles/fedmsg/base/templates/endpoints-fedimg.py.j2
  +++ b/roles/fedmsg/base/templates/endpoints-fedimg.py.j2
  @@ -4,11 +4,15 @@ suffix  = 'stg.phx2.fedoraproject.org'
   suffix = 'phx2.fedoraproject.org'
   {% endif %}
 
  +primary_threads = 4
  +atomic_threads = 2
  +NUM_FEDIMG_PORTS = 2 * ((primary_threads + atomic_threads) + 1)
  +
   config = dict(
   endpoints={
   fedimg.fedimg01: [
   tcp://fedimg01.%s:30%0.2i % (suffix, i)
  -for i in range(4)
  +for i in range(NUM_FEDIMG_PORTS)
   ],
   },
   )
 

 +1 from me

Re: Freeze Break Request: Add more fedmsg endpoints for fedimg

2015-03-09 Thread Kevin Fenzi
Sure, +1

kevin



Freeze Break Request: Add more fedmsg endpoints for fedimg

2015-03-09 Thread Ralph Bean

The fedimg uploader daemon that uploads cloud images to AWS is
complaining that it doesn't have enough fedmsg endpoints.

It needs an endpoint for each worker thread, one for the parent thread, and
then twice that number so that the daemon can do work while an admin can run
the commands by hand at the same time (otherwise, the daemon claims all the
ports and the manual commands fail).

Can I get two +1s to apply this and push it out to all our hosts?


diff --git a/roles/fedmsg/base/templates/endpoints-fedimg.py.j2 
b/roles/fedmsg/base/templates/endpoints-fedimg.py.j2
index b13f3a7..5a4fb9d 100644
--- a/roles/fedmsg/base/templates/endpoints-fedimg.py.j2
+++ b/roles/fedmsg/base/templates/endpoints-fedimg.py.j2
@@ -4,11 +4,15 @@ suffix  = 'stg.phx2.fedoraproject.org'
 suffix = 'phx2.fedoraproject.org'
 {% endif %}

+primary_threads = 4
+atomic_threads = 2
+NUM_FEDIMG_PORTS = 2 * ((primary_threads + atomic_threads) + 1)
+
 config = dict(
 endpoints={
 fedimg.fedimg01: [
 tcp://fedimg01.%s:30%0.2i % (suffix, i)
-for i in range(4)
+for i in range(NUM_FEDIMG_PORTS)
 ],
 },
 )
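
For the record, the arithmetic and the resulting port range work out like
this (a small worked example; the values are the ones in the diff above):

    # Mirrors the values added in the template above.
    primary_threads = 4
    atomic_threads = 2
    NUM_FEDIMG_PORTS = 2 * ((primary_threads + atomic_threads) + 1)  # = 14

    suffix = 'phx2.fedoraproject.org'
    endpoints = ["tcp://fedimg01.%s:30%0.2i" % (suffix, i)
                 for i in range(NUM_FEDIMG_PORTS)]
    print(len(endpoints))               # 14 endpoints
    print(endpoints[0], endpoints[-1])  # ports 3000 through 3013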



Freeze break request: Fix pxeboot menu labels

2015-03-09 Thread Ricky Elrod
commit 768e9bd0cc104585e112397109d3ed526678edd2
Author: Ricky Elrod codebl...@fedoraproject.org
Date:   Mon Mar 9 18:54:38 2015 +

Fedora 21 is not Fedora 22

Signed-off-by: Ricky Elrod codebl...@fedoraproject.org

diff --git
a/roles/tftp_server/files/default.noc01.phx2.fedoraproject.org
b/roles/tftp_server/files/default.noc01.phx2.fedoraproject.org
index f5b80a0..c56b115 100644
--- a/roles/tftp_server/files/default.noc01.phx2.fedoraproject.org
+++ b/roles/tftp_server/files/default.noc01.phx2.fedoraproject.org
@@ -61,17 +61,17 @@ LABEL Fed20-x86_64-novnc
  APPEND ks initrd=images/Fedora/20/x86_64/initrd.img
method=http://10.5.126.23/pub/fedora/linux/releases/20/Fedora/x86_64/os/
ip=dhcp ks=http://10.5.126.23/repo/rhel/ks/hardware-f20.cfg nomodeset

 LABEL Fed21-x86_64-buildhw
- MENU LABEL Fedora22-x86_64-buildhw
+ MENU LABEL Fedora21-x86_64-buildhw
  KERNEL images/Fedora/21/x86_64/vmlinuz
  APPEND ks initrd=images/Fedora/21/x86_64/initrd.img
method=http://10.5.126.23/pub/fedora/linux/releases/21/Server/x86_64/os/
ip=dhcp ks=http://10.5.126.23/repo/rhel/ks/buildhw-fedora-21 text
net.ifnames=0 biosdevname=0

 LABEL Fed21-ppc64
- MENU LABEL Fedora22-ppc64
+ MENU LABEL Fedora21-ppc64
  KERNEL images/Fedora/21/ppc64/vmlinuz
  APPEND ks initrd=images/Fedora/21/ppc64/initrd.img
method=http://10.5.126.23/pub/fedora-secondary/releases/21/Server/ppc64/os/
ip=dhcp net.ifnames=0 biosdevname=0 vnc

 LABEL Fed21-ppc64le
- MENU LABEL Fedora22-ppc64le
+ MENU LABEL Fedora21-ppc64le
  KERNEL images/Fedora/21/ppc64le/vmlinuz
  APPEND ks initrd=images/Fedora/21/ppc64le/initrd.img
method=http://10.5.126.23/pub/fedora-secondary/releases/21/Server/ppc64le/os/
ip=dhcp net.ifnames=0 biosdevname=0 vnc





Re: Freeze break request: Fix pxeboot menu labels

2015-03-09 Thread Kevin Fenzi
+1 here



Re: New OpenStack instance - status

2015-03-09 Thread Miroslav Suchý
On 03/07/2015 07:29 PM, Kevin Fenzi wrote:
 * I see that the tenants have the same internal 172.16.0.0 net right
   now, can we make sure we separate them from each other? i.e., I don't
   want an infrastructure instance being able to talk to a copr builder
   if we can avoid it. 

Are you sure?
From: playbooks/hosts/fed-cloud09.cloud.fedoraproject.org.yml
  # 172.16.0.1/12 -- 172.21.0.1/12 - Free to take
  # 172.23.0.1/12 - free (but used by old cloud)
  # 172.24.0.1/12 - RESERVED it is used internally for OS
  # 172.25.0.1/12 - Cloudintern
  # 172.26.0.1/12 - infrastructure
  # 172.27.0.1/12 - persistent
  # 172.28.0.1/12 - transient
  # 172.29.0.1/12 - scratch
  # 172.30.0.1/12 - copr
  # 172.31.0.1/12 - Free to take
And checking the dashboard I see infra in the .26 network and copr in .16. Hmm,
that is a different one, but copr should have .30.
The playbook seems to be correct. Strange.

 * Do we want to also revisit flavors available? Perhaps drop the
   builder one and just use m1.large for it? we should have resources to
   use more cpus/mem and should make copr builds faster/better. 

80 GB is too much, and 4 VCPUs too. I think having an extra flavor for the builder
is nice, as we can change it at any time without
affecting other instances/tenants.

 * Is there any way to see how much space is available on the equalogics
   aside from just logging into it via ssh?

Unfortunately no.
I reported it as an RFE some time ago:
  https://bugs.launchpad.net/cinder/+bug/1380555
You can only see the amount of used space using cinder list and cinder show volume-id


-- 
Miroslav Suchy, RHCE, RHCDS
Red Hat, Senior Software Engineer, #brno, #devexp, #fedora-buildsys

Re: New OpenStack instance - status

2015-03-09 Thread Miroslav Suchý
On 03/07/2015 06:59 PM, Kevin Fenzi wrote:
 All thats set and I can see console in the web dash again just fine for
 any of the instances I tried, and they are all https using only. 

Works for me too. Nice. Thanks.

  I tried to automate adding of SSH keys using this:
 I wonder if we shouldn't have something to update/upload everyones ssh
 keys. Might be handy but of course it's not a blocker/that important. 
 We could even look at just tieing into our existing fedmsg listener
 (when someone with a cloud account changes ssh key, update the cloud). 

Done. Search for the upload SSH keys for users action.
However, it works only initially. Once a user changes his password it will fail.
I ignore those cases with ignore_errors: yes, though.
I have a pending RFE for OpenStack so that an admin is able to upload SSH keys for a user.
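
Not the actual playbook task, just a rough sketch of why the upload only
works with the initial credentials: keypairs belong to a user, so the playbook
has to authenticate as each user (assuming python-novaclient of that era; the
endpoint and names below are placeholders):

    from novaclient import client as nova_client

    # Placeholder endpoint -- illustrative only.
    AUTH_URL = "https://fed-cloud09.cloud.fedoraproject.org:5000/v2.0"

    def upload_key(username, initial_password, tenant, key_name, public_key):
        # Keypairs are per-user, so we must authenticate *as* that user.
        # This is why the upload breaks once the user changes the initial password.
        nova = nova_client.Client("2", username, initial_password, tenant, AUTH_URL)
        nova.keypairs.create(key_name, public_key=public_key)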

I skipped (commented out) users:
  * twisted
  * cockpit
as I do not know which SSH keys they use. Can somebody put the right values there?

  Anyway, I am able (again) to start VMs and log in to those VMs.
 Me too. I uploaded the F22 Alpha cloud image and it worked fine.
 (aside from cloud-init taking about 35 seconds to run; it seemed to be
 timing out on some metadata?)
 
 We should look at hooking our cloud image upload service into this soon
 so we can get images as soon as they are done.

I will leave this one for somebody else.

  My plan for next week is to migrate dev instance to new OpenStack
  (before it will be re-provisioned) and see what needs to be changed.
 Sounds good!
 
 I think: 
 
 * Might be a good time to look at moving copr to f21? and builders also
   to be f21? (they should come up faster and in general be better than
   the el6 ones currently used, IMHO)

I will start by moving the builders to F21 (this really limits us) and once that
is finished I will move the backend and frontend.
I'm afraid that by that time I will move them directly to F22 :)

 * Right now ansible on lockbox01 is using euca2ools to manage cloud
   instances, perhaps we could/should just move to nova now? Or this
   could perhaps wait for us to move lockbox01 to rhel7. 

I learned (the hard way) that the nova/cinder/neutron etc. commands are deprecated.
The new preferred way is the openstack command from python-openstackclient.
However, Icehouse ships version 0.3, and you should not think about using this
command unless you have version 1.0 available (Juno or Kilo, not sure which).
It probably does not matter if you use ansible modules, but you may consider it
if you are calling commands directly.
#justsaying

-- 
Miroslav Suchy, RHCE, RHCDS
Red Hat, Senior Software Engineer, #brno, #devexp, #fedora-buildsys