Re: [Openstack] [Quantum/Neutron] VM cannot get IP address from DHCP server

2013-07-24 Thread David Kang


  If I remove the following REJECT rules, it works perfectly.
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited

 With them, it looks like the packets are dropped at the bridge before
they can be forwarded.
I ran the iptables commands recommended by RedHat.

When I ping 10.12.182.13 from a VM (192.168.3.3), 
I cannot see any packets from qr-32411859-c0,
but I can see packets being dropped at brqf56b3f53-d3.
The output of tcpdump is shown below.

$ brctl show
bridge name     bridge id               STP enabled     interfaces
brq69f480ab-06          8000.001e675ba339       no              eth2.82
                                                        tapd8bd73c9-3a
brqf56b3f53-d3          8000.001e675ba338       no              eth1.2001
                                                        tap32411859-c0
                                                        tapfa6a1d01-16
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.3.0     0.0.0.0         255.255.255.0   U     0      0        0 ns-fa6a1d01-16
192.168.3.0     0.0.0.0         255.255.255.0   U     0      0        0 qr-32411859-c0
10.12.182.0     0.0.0.0         255.255.255.0   U     0      0        0 eth2.182
10.12.82.0      0.0.0.0         255.255.255.0   U     0      0        0 qg-d8bd73c9-3a
0.0.0.0         10.12.82.1      0.0.0.0         UG    0      0        0 qg-d8bd73c9-3a


$  tcpdump -i qr-32411859-c0 -nn
   // nothing special
 
$ tcpdump -i brqf56b3f53-d3 -nn icmp
tcpdump: WARNING: brqf56b3f53-d3: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on brqf56b3f53-d3, link-type EN10MB (Ethernet), capture size 65535 bytes
13:48:46.892785 IP 192.168.3.3 > 10.12.182.13: ICMP echo request, id 46605, seq 1855, length 64
13:48:46.892825 IP 192.168.3.2 > 192.168.3.3: ICMP host 10.12.182.13 unreachable - admin prohibited, length 92



- Original Message -
 On 07/23/2013 11:41 PM, David Kang wrote:
 
   A Redhat manual suggests the following rule to enable forwarding
   packets
  among VMs and external network.
  https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack/2/pdf/Release_Notes/Red_Hat_OpenStack-2-Release_Notes-en-US.pdf
 
  iptables -t filter -I FORWARD -i qr-+ -o qg-+ -j ACCEPT
  iptables -t filter -I FORWARD -i qg-+ -o qr-+ -j ACCEPT
  iptables -t filter -I FORWARD -i qr-+ -o qr-+ -j ACCEPT
 
   But it doesn't work for me.
 
 Can you elaborate on what 'it doesn't work' means?
 
 Do any of those rules show increased packet/byte counts, indicating
 they've been
 matched?
 
 Is IP forwarding enabled?
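 (One quick way to check, for example:)

 sysctl net.ipv4.ip_forward        # should print net.ipv4.ip_forward = 1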
 
 Is there a mis-configuration in your bridge config? Use 'brctl show'
 to see
 where all the tap and other devices are attached.
 
 Deleting that one FORWARD rule causing all the trouble is going to be
 a much
 quicker solution.
 
 -Brian
 
  - Original Message -
  On 07/23/2013 12:22 PM, David Kang wrote:
 
   Hi,
 
    We are running OpenStack Folsom on CentOS 6.4.
  Quantum-linuxbridge-agent is used.
  By default, the Quantum node has the following entries in its
  /etc/sysconfig/iptables file.
 
  -A INPUT -j REJECT --reject-with icmp-host-prohibited
  -A FORWARD -j REJECT --reject-with icmp-host-prohibited
 
   With those two lines, VM cannot get IP address from the DHCP
   server
   running on the Quantum node.
  More specifically, the first line prevents a VM from getting IP
  address from DHCP server.
  The second line prevents a VM from talking to other VMs and
  external
  worlds.
  Is there a better way to make the Quantum network work well
  than just commenting them out?
 
  Since Quantum isn't adding them, and you want the system to act as
  a
  DHCP server
  and gateway, I think you have two choices:
 
  1. Delete them
  2. Craft rules to sit above them (using -I) to allow certain
  packets
 
  #2 gets tricky as you'll need to account for DHCP, metadata, etc.
  in
  the INPUT
  chain, and in the FORWARD chain you could maybe start by allowing
  all
  traffic
  from your bridge. You would need to do some more work there.
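 (As a rough sketch of option #2 on a linuxbridge setup, rules along these
 lines could be inserted above the REJECT entries; the brq+ interface
 wildcard and the 8775 metadata port are assumptions to adapt to your own
 bridges and metadata setup:)

 iptables -I INPUT -i brq+ -p udp -m udp --dport 67 -j ACCEPT   # DHCP requests from VMs to dnsmasq
 iptables -I INPUT -i brq+ -p udp -m udp --dport 53 -j ACCEPT   # DNS, if served by the same dnsmasq
 iptables -I INPUT -p tcp -m tcp --dport 8775 -j ACCEPT         # nova metadata API
 iptables -I FORWARD -i brq+ -j ACCEPT                          # traffic coming in from the quantum bridges
 iptables -I FORWARD -o brq+ -j ACCEPT                          # and going back out to them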
 
  I believe any DHCP iptables rules will be on the compute hosts, and
  will be put
  in place for anti-spoofing. Since this is the network node you
  won't
  see them here.
 
  -Brian
 

-- 
--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum/Neutron] VM cannot get IP address from DHCP server

2013-07-24 Thread David Kang

 Thanks, Brian.
My answers are inline in your email, prefixed with --.

 David

- Original Message -
 On 07/24/2013 10:42 AM, David Kang wrote:
 
If I remove the following REJECT rules, it works perfectly.
  -A INPUT -j REJECT --reject-with icmp-host-prohibited
  -A FORWARD -j REJECT --reject-with icmp-host-prohibited
 
   With them, it looks like the packets are dropped at the bridge
   before they can be forwarded.
 
 So I'll keep asking - why can't you just remove them? It gets you
 running and
 if you're just kicking the tires it's a valid workaround.
 

-- My sponsor STRONGLY wants to have the rules for security purposes.

  I ran the iptables commands recommended by RedHat.
 
  When I ping 10.12.182.13 from a VM (192.168.3.3),
  I cannot see any packets from qr-32411859-c0,
  but I can see packets being dropped at brqf56b3f53-d3.
  The output of tcpdump is shown below.
 
  $ brctl show
  bridge name bridge id STP enabled interfaces
  brq69f480ab-06 8000.001e675ba339 no eth2.82
  tapd8bd73c9-3a
  brqf56b3f53-d3 8000.001e675ba338 no eth1.2001
  tap32411859-c0
  tapfa6a1d01-16
  $ route -n
  Kernel IP routing table
  Destination Gateway Genmask Flags Metric Ref Use Iface
  192.168.3.0 0.0.0.0 255.255.255.0 U 0 0 0 ns-fa6a1d01-16
  192.168.3.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-32411859-c0
 
 Overlapping IP ranges? That could be a problem.

-- Those are generated by quantum-linuxbridge-agent.
  If a quantum network is associated with a quantum l3 router, a qr-xxx interface
is added.

 
  10.12.182.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2.182
  10.12.82.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-d8bd73c9-3a
  0.0.0.0 10.12.82.1 0.0.0.0 UG 0 0 0 qg-d8bd73c9-3a
 
 Why is your default route going out this interface and not eth2.182?

-- I didn't show it. Another default route via eth2.182 also exists below
the 10.12.82.1 entry.
  Quantum automatically made qg-d8bd73c9-3a the default route.
  It is the interface to the gateway of the external network where
  floating IPs are assigned.

 
  $ tcpdump -i qr-32411859-c0 -nn
 // nothing special
 
 What about ns-fa6a1d01-16? That overlapping IP range looks suspicious.

-- It was made by quantum-linux-bridge.

 
  $ tcpdump -i brqf56b3f53-d3 -nn icmp
  tcpdump: WARNING: brqf56b3f53-d3: no IPv4 address assigned
  tcpdump: verbose output suppressed, use -v or -vv for full protocol
  decode
  listening on brqf56b3f53-d3, link-type EN10MB (Ethernet), capture
  size 65535 bytes
  13:48:46.892785 IP 192.168.3.3 > 10.12.182.13: ICMP echo request, id 46605, seq 1855, length 64
  13:48:46.892825 IP 192.168.3.2 > 192.168.3.3: ICMP host 10.12.182.13 unreachable - admin prohibited, length 92
 
 This is the reject iptables rule firing, so those other rules are not
 matching.
 You need to look at 'iptables -L -v -n -x' to see if their packet/byte
 counts
 are increasing or not. If not, start using things like 'ip route get
 $dest' to
 figure out what interfaces the kernel is using for output, which will
 help you
 fix those rules to be correct.

-- Thanks, I will try it.
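(For reference, the checks Brian mentions could be run like this; the address
and interface below are just the ones from this thread:)

iptables -L FORWARD -v -n -x --line-numbers                     # watch the pkts/bytes counters on the qr-/qg- ACCEPT rules
ip route get 10.12.182.13 from 192.168.3.3 iif qr-32411859-c0   # shows the output interface the kernel would pick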

 
 -Brian
 
  - Original Message -
  On 07/23/2013 11:41 PM, David Kang wrote:
 
   A Redhat manual suggests the following rule to enable forwarding
   packets
  among VMs and external network.
  https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack/2/pdf/Release_Notes/Red_Hat_OpenStack-2-Release_Notes-en-US.pdf
 
  iptables -t filter -I FORWARD -i qr-+ -o qg-+ -j ACCEPT
  iptables -t filter -I FORWARD -i qg-+ -o qr-+ -j ACCEPT
  iptables -t filter -I FORWARD -i qr-+ -o qr-+ -j ACCEPT
 
   But it doesn't work for me.
 
  Can you elaborate on what 'it doesn't work' means?
 
  Do any of those rules show increased packet/byte counts, indicating
  they've been
  matched?
 
  Is IP forwarding enabled?
 
  Is there a mis-configuration in your bridge config? Use 'brctl
  show'
  to see
  where all the tap and other devices are attached.
 
  Deleting that one FORWARD rule causing all the trouble is going to
  be
  a much
  quicker solution.
 
  -Brian
 
  - Original Message -
  On 07/23/2013 12:22 PM, David Kang wrote:
 
   Hi,
 
We are running OpenStack Folsom on CentOS 6.4.
  Quantum-linuxbridge-agent is used.
  By default, the Quantum node has the following entries in its
  /etc/sysconfig/iptables file.
 
  -A INPUT -j REJECT --reject-with icmp-host-prohibited
  -A FORWARD -j REJECT --reject-with icmp-host-prohibited
 
   With those two lines, VM cannot get IP address from the DHCP
   server
   running on the Quantum node.
  More specifically, the first line prevents a VM from getting IP
  address from DHCP server.
  The second line prevents a VM from talking to other VMs and
  external
  worlds.
  Is there a better way to make the Quantum network work well
  than just commenting them out?
 
  Since Quantum isn't adding them, and you

Re: [Openstack] [Quantum/Neutron] VM cannot get IP address from DHCP server

2013-07-24 Thread David Kang

 It is strange.
The node is only for Quantum-{linuxbridge, dhcp, l3}-agent.
As far as I know, a quantum private network that is not associated with a
quantum router has only an ns-xxx interface.
Otherwise, a quantum private network has both ns-xxx and qr-xxx interfaces.

 Thanks,
 David

- Original Message -
 Just some more notes.
 
 It looks like you're running this system as both a network node and compute
 node. I think the pdf you found from Redhat assumed the system was a
 dedicated network node, i.e. it only had qr- and qg- interfaces, and not the
 ns- interfaces created by plug() when an instance is booted.
 
 Multiple routes for the same destination, going out two different
 interfaces not
 connected to the same network, are going to cause you trouble. It's
 non-deterministic where a packet will go without ip rules.
 
 I'm going to let you go and debug this some more on your own, as it looks
 like it's your iptables config causing it; you just need to get the correct
 rules in there.
 
 -Brian
 
 On 07/24/2013 11:34 AM, David Kang wrote:
 
   Thanks, Brian.
  My answers are put in your email with --.
 
   David
 
  - Original Message -
  On 07/24/2013 10:42 AM, David Kang wrote:
 
    If I remove the following REJECT rules, it works perfectly.
  -A INPUT -j REJECT --reject-with icmp-host-prohibited
  -A FORWARD -j REJECT --reject-with icmp-host-prohibited
 
   With them, it looks like that the packets are dropped at the
   bridge
   before they can be forwarded.
 
  So I'll keep asking - why can't you just remove them? It gets you
  running and
  if you're just kicking the tires it's a valid workaround.
 
 
  -- My sponsor STRONGLY wants to have the rules for security
  purpose.
 
  I ran the iptables commands recommended by RedHat.
 
  When I ping 10.12.182.13 from a VM (192.168.3.3),
  I cannot see any packets from qr-32411859-c0,
  but I can see packets are dropped at brqf56b3f53-d3.
  The outputs of tcpdump is shown below.
 
  $ brctl show
  bridge name bridge id STP enabled interfaces
  brq69f480ab-06 8000.001e675ba339 no eth2.82
                                                          tapd8bd73c9-3a
  brqf56b3f53-d3 8000.001e675ba338 no eth1.2001
                                                          tap32411859-c0
                                                          tapfa6a1d01-16
  $ route -n
  Kernel IP routing table
  Destination Gateway Genmask Flags Metric Ref Use Iface
  192.168.3.0 0.0.0.0 255.255.255.0 U 0 0 0 ns-fa6a1d01-16
  192.168.3.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-32411859-c0
 
  Overlapping IP ranges? That could be a problem.
 
  -- Those are generated by quantum-linuxbridge-agent.
    If a quantum network is associated to a quantum l3 router, qr-xxx
    interface is added.
 
 
  10.12.182.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2.182
  10.12.82.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-d8bd73c9-3a
  0.0.0.0 10.12.82.1 0.0.0.0 UG 0 0 0 qg-d8bd73c9-3a
 
  Why is your default route going out this interface and not
  eth2.182?
 
  -- I didn't show it. Another default router of eth2.182 also exist
  below 10.12.82.1.
    Quantum automatically made qg-d8bd73c9-3a as a default router.
    It is the interface to the gateway of the external network where
    floating IP is assigned.
 
 
  $ tcpdump -i qr-32411859-c0 -nn
     // nothing special
 
  What about ns-fa6a1d01-16? That overlapping IP range looks
  suspicious.
 
  -- It was made by quantum-linux-bridge.
 
 
  $ tcpdump -i brqf56b3f53-d3 -nn icmp
  tcpdump: WARNING: brqf56b3f53-d3: no IPv4 address assigned
  tcpdump: verbose output suppressed, use -v or -vv for full
  protocol
  decode
  listening on brqf56b3f53-d3, link-type EN10MB (Ethernet), capture
  size 65535 bytes
  13:48:46.892785 IP 192.168.3.3 > 10.12.182.13: ICMP echo request, id 46605, seq 1855, length 64
  13:48:46.892825 IP 192.168.3.2 > 192.168.3.3: ICMP host 10.12.182.13 unreachable - admin prohibited, length 92
 
  This is the reject iptables rule firing, so those other rules are
  not
  matching.
  You need to look at 'iptables -L -v -n -x' to see if their
  packet/byte
  counts
  are increasing or not. If not, start using things like 'ip route
  get
  $dest' to
  figure out what interfaces the kernel is using for output, which
  will
  help you
  fix those rules to be correct.
 
  -- Thanks, I will try it.
 
 
  -Brian
 
  - Original Message -
  On 07/23/2013 11:41 PM, David Kang wrote:
 
   A Redhat manual suggests the following rule to enable
   forwarding
   packets
  among VMs and external network.
  https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack/2/pdf/Release_Notes/Red_Hat_OpenStack-2-Release_Notes-en-US.pdf
 
  iptables -t filter -I FORWARD -i qr-+ -o qg-+ -j ACCEPT
  iptables -t filter -I FORWARD -i qg-+ -o qr-+ -j ACCEPT
  iptables -t filter -I FORWARD -i qr-+ -o qr-+ -j ACCEPT
 
   But it doesn't work for me.
 
  Can you elaborate on what 'it doesn't work' means?
 
  Do any of those rules show increased

[Openstack] [Quantum] routing does not work as expected after quantum-linux-bridge sets up default route

2013-07-23 Thread David Kang

 Hi,

 We are running OpenStack Folsom on CentOS 6.4.
Quantum-linuxbridge-agent is used.
Its external network is configured to use VLAN 83 and 
its address range is 10.12.83.0/24.
The IP of the Quantum node is 10.12.183.11/24.

 The problem is that after reboot, the quantum node cannot
connect to DNS servers (10.12.81.39) for a while (sometimes tens of minutes).
After boot-up but before the quantum-linuxbridge-agent starts, the default gw 
is:

default         10.12.183.1     0.0.0.0         UG    0      0        0 eth2.183

With that gateway, Quantum can get proper data from the MySQL server
and the RabbitMQ server using the DNS server.

Quantum sets up a bridge on eth2.83 and makes it the default gateway (10.12.83.1).
Now, there are two gateways:

default         10.12.83.1      0.0.0.0         UG    0      0        0 
qg-49ff6d2f-a7
default         10.12.183.1     0.0.0.0         UG    0      0        0 eth2.183

 The physical router 10.12.83.1 is configured to route to the DNS servers.
But even with the updated routing table, when I traceroute to the DNS server,
it routes through 10.12.183.1, and it cannot reach the DNS server.
When I remove the 10.12.183.1 entry from the routing table, it works fine.
But I think it should work as it is, because the 10.12.183.1 entry is listed
lower than the 10.12.83.1 entry in the routing table.
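(For illustration, one way to see which of the two default routes the kernel
actually uses, and to keep the 10.12.183.1 route at a lower priority, might be:)

ip route get 10.12.81.39                                       # shows the gateway/interface actually chosen
ip route del default via 10.12.183.1 dev eth2.183
ip route add default via 10.12.183.1 dev eth2.183 metric 100   # keep it, but prefer the other default route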

 What could be wrong?
I will appreciate any help.

 Thanks,
 David


-- 
--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Quantum/Neutron] VM cannot get IP address from DHCP server

2013-07-23 Thread David Kang

 Hi,

  We are running OpenStack Folsom on CentOS 6.4.
Quantum-linuxbridge-agent is used.
By default, the Quantum node has the following entries in its 
/etc/sysconfig/iptables file.

-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited

 With those two lines, a VM cannot get an IP address from the DHCP server running on
the Quantum node.
More specifically, the first line prevents a VM from getting an IP address from
the DHCP server.
The second line prevents a VM from talking to other VMs and the external world.
Is there a better way to make the Quantum network work well
than just commenting them out?

 I'll appreciate your help.

 David

-- 
--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum/Neutron] VM cannot get IP address from DHCP server

2013-07-23 Thread David Kang

 Thank you for your suggestion.

 We are using Quantum/Neutron, not nova-network.
So, we don't use br100.
(I believe you are using nova-network.)

 And the firewall rules that cause the problem reside on the Quantum node,
not on the nova-compute node.
I cannot find any rule for --dport 67 on my Quantum node.
I used the 'service iptables status' command to check the firewall rules.

 Thanks,
 David


- Original Message -
 Hi,
 
 Can you please look in the iptables?
 Normally, on a working openstack host, the packets coming into the filter
 table in the INPUT chain are directed to the nova-network-INPUT chain, which
 has a rule to accept dhcp packets.
 On my setup it is something like:
 -A INPUT -j nova-network-INPUT
 
 .
 .
 .
 -A nova-network-INPUT -i br100 -p udp -m udp --dport 67 -j ACCEPT
 
 
 So I think you have to look somewhere else for your issue.
 
 
 Regards,
 Gabriel
 
 
 
 
 
 
 From: David Kang dk...@isi.edu
 To: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
 openstack@lists.launchpad.net
 Sent: Tuesday, July 23, 2013 7:22 PM
 Subject: [Openstack] [Quantum/Neutron] VM cannot get IP address from
 DHCP server
 
 
 
 Hi,
 
 We are running OpenStack Folsom on CentOS 6.4.
 Quantum-linuxbridge-agent is used.
 By default, the Quantum node has the following entries in its
 /etc/sysconfig/iptables file.
 
 -A INPUT -j REJECT --reject-with icmp-host-prohibited
 -A FORWARD -j REJECT --reject-with icmp-host-prohibited
 
 With those two lines, VM cannot get IP address from the DHCP server
 running on the Quantum node.
 More specifically, the first line prevents a VM from getting IP
 address from DHCP server.
 The second line prevents a VM from talking to other VMs and external
 worlds.
 Is there a better way to make the Quantum network work well
 than just commenting them out?
 
 I'll appreciate your help.
 
 David
 
 --
 --
 Dr. Dong-In David Kang
 Computer Scientist
 USC/ISI
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp

-- 
--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum/Neutron] VM cannot get IP address from DHCP server

2013-07-23 Thread David Kang

 We use CentOS 6.4, which does not support network namespaces.
So 'ip netns ...' fails.

 Thanks,
 David

- Original Message -
 that will not show the rules for the instance. try this
 ip netns exec yourrouter-uuid iptables -nxvL
 
 
 On Jul 23, 2013, at 09:59 , David Kang dk...@isi.edu wrote:
 
 
  Thank you for your suggestion.
 
  We are using Quantum/Neutron not nova-network.
  So, we don't use br100.
  (I believe you are using nova-network.)
 
  And the firewall rules that cause problem reside on the Quantum node
  not on the nova-compute node.
  I cannot find any rule for --dport 67 on my Quantum node.
  I used service iptables status command to check the firewall
  rules.
 
  Thanks,
  David
 
 
  - Original Message -
  Hi,
 
  Please can you look up in the iptables?
  Normally on a working openstack host the packets comming in the
  filter
  table in the input chain are directed to the nova-network-INPUT
  which
  has a rule to accept dhcp packets.
  On my setup is something like:
  -A INPUT -j nova-network-INPUT
 
  .
  .
  .
  -A nova-network-INPUT -i br100 -p udp -m udp --dport 67 -j ACCEPT
 
 
  So I think you have to look somewhere else for your issue.
 
 
  Regards,
  Gabriel
 
 
 
 
 
 
  From: David Kang dk...@isi.edu
  To: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
  openstack@lists.launchpad.net
  Sent: Tuesday, July 23, 2013 7:22 PM
  Subject: [Openstack] [Quantum/Neutron] VM cannot get IP address
  from
  DHCP server
 
 
 
  Hi,
 
  We are running OpenStack Folsom on CentOS 6.4.
  Quantum-linuxbridge-agent is used.
  By default, the Quantum node has the following entries in its
  /etc/sysconfig/iptables file.
 
  -A INPUT -j REJECT --reject-with icmp-host-prohibited
  -A FORWARD -j REJECT --reject-with icmp-host-prohibited
 
  With those two lines, VM cannot get IP address from the DHCP server
  running on the Quantum node.
  More specifically, the first line prevents a VM from getting IP
  address from DHCP server.
  The second line prevents a VM from talking to other VMs and
  external
  worlds.
  Is there a better way to make the Quantum network work well
  than just commenting them out?
 
  I'll appreciate your help.
 
  David
 
  --
  --
  Dr. Dong-In David Kang
  Computer Scientist
  USC/ISI
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help : https://help.launchpad.net/ListHelp
 
  --
  --
  Dr. Dong-In David Kang
  Computer Scientist
  USC/ISI
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help : https://help.launchpad.net/ListHelp
 
 

-- 
--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum/Neutron] VM cannot get IP address from DHCP server

2013-07-23 Thread David Kang

 
 What I have observed so far is...

1. nova-compute sends dhcp request
2. dhcp-server running on the Quantum node does not receive the request
 because of the firewall setting.
 I don't understand why quantum-dhcp-agent does not set up the firewall properly.
 (Yes, all the openstack components are running on CentOS6.4 in our system.)
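(One way to confirm where the DHCP requests are being dropped is to watch the
bridge and the dhcp-agent port; the interface names below are the ones from
this deployment, as shown elsewhere in the thread:)

tcpdump -i brqf56b3f53-d3 -nn port 67 or port 68     # requests arriving on the bridge
tcpdump -i ns-fa6a1d01-16 -nn port 67 or port 68     # requests actually reaching the dhcp-agent interface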

 Thanks,
 David

- Original Message -
 Hi,
 
 
 This is very interesting..:)
 I am using openstack grizzly allinone with quantum/neutron.
 
 
 Look what I am observing.
 -before starting an instance on the server
 root@ubuntu1204:~# iptables-save -t filter
 # Generated by iptables-save v1.4.12 on Tue Jul 23 20:22:55 2013
 *filter
 :INPUT ACCEPT [62981:17142030]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [62806:17138989]
 :nova-api-FORWARD - [0:0]
 :nova-api-INPUT - [0:0]
 :nova-api-OUTPUT - [0:0]
 :nova-api-local - [0:0]
 :nova-filter-top - [0:0]
 -A INPUT -j nova-api-INPUT
 -A INPUT -p gre -j ACCEPT
 -A FORWARD -j nova-filter-top
 -A FORWARD -j nova-api-FORWARD
 -A OUTPUT -j nova-filter-top
 -A OUTPUT -j nova-api-OUTPUT
 -A nova-api-INPUT -d 10.200.10.10/32 -p tcp -m tcp --dport 8775 -j
 ACCEPT
 -A nova-filter-top -j nova-api-local
 COMMIT
 # Completed on Tue Jul 23 20:22:55 2013
 root@ubuntu1204:~#
 
 
 -after starting an instance on the host
 
 root@ubuntu1204:~# iptables-save -t filter
 # Generated by iptables-save v1.4.12 on Tue Jul 23 20:24:42 2013
 *filter
 :INPUT ACCEPT [90680:24989889]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [90482:24984752]
 :nova-api-FORWARD - [0:0]
 :nova-api-INPUT - [0:0]
 :nova-api-OUTPUT - [0:0]
 :nova-api-local - [0:0]
 :nova-compute-FORWARD - [0:0]
 :nova-compute-INPUT - [0:0]
 :nova-compute-OUTPUT - [0:0]
 :nova-compute-inst-35 - [0:0]
 :nova-compute-local - [0:0]
 :nova-compute-provider - [0:0]
 :nova-compute-sg-fallback - [0:0]
 :nova-filter-top - [0:0]
 -A INPUT -j nova-compute-INPUT
 -A INPUT -j nova-api-INPUT
 -A INPUT -p gre -j ACCEPT
 -A FORWARD -j nova-filter-top
 -A FORWARD -j nova-compute-FORWARD
 -A FORWARD -j nova-api-FORWARD
 -A OUTPUT -j nova-filter-top
 -A OUTPUT -j nova-compute-OUTPUT
 -A OUTPUT -j nova-api-OUTPUT
 -A nova-api-INPUT -d 10.200.10.10/32 -p tcp -m tcp --dport 8775 -j
 ACCEPT
 -A nova-compute-FORWARD -s 0.0.0.0/32 -d 255.255.255.255/32 -p udp -m
 udp --sport 68 --dport 67 -j ACCEPT
 -A nova-compute-INPUT -s 0.0.0.0/32 -d 255.255.255.255/32 -p udp -m
 udp --sport 68 --dport 67 -j ACCEPT
 -A nova-compute-inst-35 -m state --state INVALID -j DROP
 -A nova-compute-inst-35 -m state --state RELATED,ESTABLISHED -j ACCEPT
 -A nova-compute-inst-35 -j nova-compute-provider
 -A nova-compute-inst-35 -s 172.24.17.2/32 -p udp -m udp --sport 67
 --dport 68 -j ACCEPT
 -A nova-compute-inst-35 -s 172.24.17.0/24 -j ACCEPT
 -A nova-compute-inst-35 -p tcp -m tcp --dport 22 -j ACCEPT
 -A nova-compute-inst-35 -p icmp -j ACCEPT
 -A nova-compute-inst-35 -j nova-compute-sg-fallback
 -A nova-compute-local -d 172.24.17.1/32 -j nova-compute-inst-35
 -A nova-compute-sg-fallback -j DROP
 -A nova-filter-top -j nova-compute-local
 -A nova-filter-top -j nova-api-local
 COMMIT
 # Completed on Tue Jul 23 20:24:42 2013
 
 
 
 
 It seems that the rule that accepts dhcp packets is created once an
 instance is spawned.
 
 
 I will try the same thing on a centos64.
 
 
 Regards,
 Gabriel
 
 
 
 
 
 From: David Kang dk...@isi.edu
 To: Staicu Gabriel gabriel_sta...@yahoo.com
 Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
 openstack@lists.launchpad.net
 Sent: Tuesday, July 23, 2013 7:59 PM
 Subject: Re: [Openstack] [Quantum/Neutron] VM cannot get IP address
 from DHCP server
 
 
 
 Thank you for your suggestion.
 
 We are using Quantum/Neutron not nova-network.
 So, we don't use br100.
 (I believe you are using nova-network.)
 
 And the firewall rules that cause problem reside on the Quantum node
 not on the nova-compute node.
 I cannot find any rule for --dport 67 on my Quantum node.
 I used service iptables status command to check the firewall rules.
 
 Thanks,
 David
 
 
 - Original Message -
  Hi,
 
  Please can you look up in the iptables?
  Normally on a working openstack host the packets comming in the
  filter
  table in the input chain are directed to the nova-network-INPUT
  which
  has a rule to accept dhcp packets.
  On my setup is something like:
  -A INPUT -j nova-network-INPUT
 
  .
  .
  .
  -A nova-network-INPUT -i br100 -p udp -m udp --dport 67 -j ACCEPT
 
 
  So I think you have to look somewhere else for your issue.
 
 
  Regards,
  Gabriel
 
 
 
 
 
 
  From: David Kang  dk...@isi.edu 
  To:  openstack@lists.launchpad.net ( openstack@lists.launchpad.net
  )
   openstack@lists.launchpad.net 
  Sent: Tuesday, July 23, 2013 7:22 PM
  Subject: [Openstack] [Quantum/Neutron] VM cannot get IP address from
  DHCP server
 
 
 
  Hi,
 
  We are running OpenStack Folsom on CentOS 6.4.
  Quantum-linuxbridge-agent is used.
  By default, the Quantum node has the following entries in its
  /etc/sysconfig/iptables file

Re: [Openstack] [Quantum/Neutron] VM cannot get IP address from DHCP server

2013-07-23 Thread David Kang

 I think I found the solution.

https://bugzilla.redhat.com/show_bug.cgi?id=889868

 It was reported as a bug by RedHat.
It also suggests a work-around.

 Thank you everyone.

 David

- Original Message -
 What I have observed so far is...
 
 1. nova-compute sends dhcp request
 2. dhcp-server running on the Quantum node does not receive the
 request
 because of the firewall setting.
 I don't understand why quantum-dhcp-agent does not set up firewall
 properly.
 (Yes, all the openstack components are running on CentOS6.4 in our
 system.)
 
 Thanks,
 David
 
 - Original Message -
  Hi,
 
 
  This is very interesting..:)
  I am using openstack grizzly allinone with quantum/neutron.
 
 
  Look what I am observing.
  -before starting an instance on the server
  root@ubuntu1204:~# iptables-save -t filter
  # Generated by iptables-save v1.4.12 on Tue Jul 23 20:22:55 2013
  *filter
  :INPUT ACCEPT [62981:17142030]
  :FORWARD ACCEPT [0:0]
  :OUTPUT ACCEPT [62806:17138989]
  :nova-api-FORWARD - [0:0]
  :nova-api-INPUT - [0:0]
  :nova-api-OUTPUT - [0:0]
  :nova-api-local - [0:0]
  :nova-filter-top - [0:0]
  -A INPUT -j nova-api-INPUT
  -A INPUT -p gre -j ACCEPT
  -A FORWARD -j nova-filter-top
  -A FORWARD -j nova-api-FORWARD
  -A OUTPUT -j nova-filter-top
  -A OUTPUT -j nova-api-OUTPUT
  -A nova-api-INPUT -d 10.200.10.10/32 -p tcp -m tcp --dport 8775 -j
  ACCEPT
  -A nova-filter-top -j nova-api-local
  COMMIT
  # Completed on Tue Jul 23 20:22:55 2013
  root@ubuntu1204:~#
 
 
  -after starting an instance on the host
 
  root@ubuntu1204:~# iptables-save -t filter
  # Generated by iptables-save v1.4.12 on Tue Jul 23 20:24:42 2013
  *filter
  :INPUT ACCEPT [90680:24989889]
  :FORWARD ACCEPT [0:0]
  :OUTPUT ACCEPT [90482:24984752]
  :nova-api-FORWARD - [0:0]
  :nova-api-INPUT - [0:0]
  :nova-api-OUTPUT - [0:0]
  :nova-api-local - [0:0]
  :nova-compute-FORWARD - [0:0]
  :nova-compute-INPUT - [0:0]
  :nova-compute-OUTPUT - [0:0]
  :nova-compute-inst-35 - [0:0]
  :nova-compute-local - [0:0]
  :nova-compute-provider - [0:0]
  :nova-compute-sg-fallback - [0:0]
  :nova-filter-top - [0:0]
  -A INPUT -j nova-compute-INPUT
  -A INPUT -j nova-api-INPUT
  -A INPUT -p gre -j ACCEPT
  -A FORWARD -j nova-filter-top
  -A FORWARD -j nova-compute-FORWARD
  -A FORWARD -j nova-api-FORWARD
  -A OUTPUT -j nova-filter-top
  -A OUTPUT -j nova-compute-OUTPUT
  -A OUTPUT -j nova-api-OUTPUT
  -A nova-api-INPUT -d 10.200.10.10/32 -p tcp -m tcp --dport 8775 -j
  ACCEPT
  -A nova-compute-FORWARD -s 0.0.0.0/32 -d 255.255.255.255/32 -p udp
  -m
  udp --sport 68 --dport 67 -j ACCEPT
  -A nova-compute-INPUT -s 0.0.0.0/32 -d 255.255.255.255/32 -p udp -m
  udp --sport 68 --dport 67 -j ACCEPT
  -A nova-compute-inst-35 -m state --state INVALID -j DROP
  -A nova-compute-inst-35 -m state --state RELATED,ESTABLISHED -j
  ACCEPT
  -A nova-compute-inst-35 -j nova-compute-provider
  -A nova-compute-inst-35 -s 172.24.17.2/32 -p udp -m udp --sport 67
  --dport 68 -j ACCEPT
  -A nova-compute-inst-35 -s 172.24.17.0/24 -j ACCEPT
  -A nova-compute-inst-35 -p tcp -m tcp --dport 22 -j ACCEPT
  -A nova-compute-inst-35 -p icmp -j ACCEPT
  -A nova-compute-inst-35 -j nova-compute-sg-fallback
  -A nova-compute-local -d 172.24.17.1/32 -j nova-compute-inst-35
  -A nova-compute-sg-fallback -j DROP
  -A nova-filter-top -j nova-compute-local
  -A nova-filter-top -j nova-api-local
  COMMIT
  # Completed on Tue Jul 23 20:24:42 2013
 
 
 
 
  It seems that the rule that accepts dhcp packets is created once an
  instance is spawned.
 
 
  I will try the same thing on a centos64.
 
 
  Regards,
  Gabriel
 
 
 
 
 
  From: David Kang dk...@isi.edu
  To: Staicu Gabriel gabriel_sta...@yahoo.com
  Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
  openstack@lists.launchpad.net
  Sent: Tuesday, July 23, 2013 7:59 PM
  Subject: Re: [Openstack] [Quantum/Neutron] VM cannot get IP address
  from DHCP server
 
 
 
  Thank you for your suggestion.
 
  We are using Quantum/Neutron not nova-network.
  So, we don't use br100.
  (I believe you are using nova-network.)
 
  And the firewall rules that cause problem reside on the Quantum node
  not on the nova-compute node.
  I cannot find any rule for --dport 67 on my Quantum node.
  I used service iptables status command to check the firewall
  rules.
 
  Thanks,
  David
 
 
  - Original Message -
   Hi,
  
   Please can you look up in the iptables?
   Normally on a working openstack host the packets comming in the
   filter
   table in the input chain are directed to the nova-network-INPUT
   which
   has a rule to accept dhcp packets.
   On my setup is something like:
   -A INPUT -j nova-network-INPUT
  
   .
   .
   .
   -A nova-network-INPUT -i br100 -p udp -m udp --dport 67 -j ACCEPT
  
  
   So I think you have to look somewhere else for your issue.
  
  
   Regards,
   Gabriel
  
  
  
  
  
  
   From: David Kang  dk...@isi.edu 
   To:  openstack@lists.launchpad.net (
   openstack@lists.launchpad.net

Re: [Openstack] [Quantum/Neutron] VM cannot get IP address from DHCP server

2013-07-23 Thread David Kang

 Thank you, Brian.

 David

- Original Message -
 On 07/23/2013 12:22 PM, David Kang wrote:
 
   Hi,
 
We are running OpenStack Folsom on CentOS 6.4.
  Quantum-linuxbridge-agent is used.
  By default, the Quantum node has the following entries in its
  /etc/sysconfig/iptables file.
 
  -A INPUT -j REJECT --reject-with icmp-host-prohibited
  -A FORWARD -j REJECT --reject-with icmp-host-prohibited
 
   With those two lines, VM cannot get IP address from the DHCP server
   running on the Quantum node.
  More specifically, the first line prevents a VM from getting IP
  address from DHCP server.
  The second line prevents a VM from talking to other VMs and external
  worlds.
  Is there a better way to make the Quantum network work well
  than just commenting them out?
 
 Since Quantum isn't adding them, and you want the system to act as a
 DHCP server
 and gateway, I think you have two choices:
 
 1. Delete them
 2. Craft rules to sit above them (using -I) to allow certain packets
 
 #2 gets tricky as you'll need to account for DHCP, metadata, etc. in
 the INPUT
 chain, and in the FORWARD chain you could maybe start by allowing all
 traffic
 from your bridge. You would need to do some more work there.
 
 I believe any DHCP iptables rules will be on the compute hosts, and
 will be put
 in place for anti-spoofing. Since this is the network node you won't
 see them here.
 
 -Brian

-- 
--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum/Neutron] VM cannot get IP address from DHCP server

2013-07-23 Thread David Kang

 A Redhat manual suggests the following rules to enable forwarding packets
among VMs and the external network.
https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack/2/pdf/Release_Notes/Red_Hat_OpenStack-2-Release_Notes-en-US.pdf

iptables -t filter -I FORWARD -i qr-+ -o qg-+ -j ACCEPT
iptables -t filter -I FORWARD -i qg-+ -o qr-+ -j ACCEPT
iptables -t filter -I FORWARD -i qr-+ -o qr-+ -j ACCEPT

 But it doesn't work for me.
Any suggestion?

 Thanks,
 David

- Original Message -
 On 07/23/2013 12:22 PM, David Kang wrote:
 
   Hi,
 
We are running OpenStack Folsom on CentOS 6.4.
  Quantum-linuxbridge-agent is used.
  By default, the Quantum node has the following entries in its
  /etc/sysconfig/iptables file.
 
  -A INPUT -j REJECT --reject-with icmp-host-prohibited
  -A FORWARD -j REJECT --reject-with icmp-host-prohibited
 
   With those two lines, VM cannot get IP address from the DHCP server
   running on the Quantum node.
  More specifically, the first line prevents a VM from getting IP
  address from DHCP server.
  The second line prevents a VM from talking to other VMs and external
  worlds.
  Is there a better way to make the Quantum network work well
  than just commenting them out?
 
 Since Quantum isn't adding them, and you want the system to act as a
 DHCP server
 and gateway, I think you have two choices:
 
 1. Delete them
 2. Craft rules to sit above them (using -I) to allow certain packets
 
 #2 gets tricky as you'll need to account for DHCP, metadata, etc. in
 the INPUT
 chain, and in the FORWARD chain you could maybe start by allowing all
 traffic
 from your bridge. You would need to do some more work there.
 
 I believe any DHCP iptables rules will be on the compute hosts, and
 will be put
 in place for anti-spoofing. Since this is the network node you won't
 see them here.
 
 -Brian

-- 
--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Webinar: Virtualizing your scale out applications on OpenStack

2013-07-16 Thread David Scannell
Hi All,

Gridcentric is giving a webinar this Friday, July 19, 2013 at 1:00 PM
EDT about scaling out your applications on OpenStack. We will cover the
important issues, such as scale, performance and cost, that need to be
considered when virtualizing your applications. We will also introduce our
Virtual Memory Streaming (vms) technology and how we integrate with
OpenStack to address many of these issues.

 Please register for the webinar here: goo.gl/QQTVW

 I hope that some of you are interested and will be able to attend.

Thanks,
David

website: www.gridcentric.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] HTTP headers are incorrectly treated case sensitive by jClouds causing OpenStack x-storage-url to fail

2013-06-28 Thread David Hadas
Ali hi,

On my system I get the headers as X-Storage-Url when running under the Apache2
front end (not lowercase).

Btw, I am always interested to learn how people are using Swift with the
Apache front end, as this is a fairly recent addition (we are working now to
get it into devstack). Can you briefly describe your setup and the reason
behind choosing the Apache front end?

DH


Regards,
David Hadas,
Openstack Swift ATC, Architect, Master Inventor
IBM Research Labs, Haifa
Tel:Int+972-4-829-6104
Fax:   Int+972-4-829-6112




From:   Ali, Saqib docbook@gmail.com
To: Chmouel Boudjnah chmo...@enovance.com,
Cc: openstack@lists.launchpad.net
Date:   28/06/2013 04:30 PM
Subject:Re: [Openstack] HTTP headers are incorrectly treated case
sensitive by jClouds causing OpenStack x-storage-url to fail
Sent by:Openstack openstack-bounces
+davidh=il.ibm@lists.launchpad.net



Chmouel,

Not really a hack on the swift, just the apache web frontend[1]

1. http://docs.openstack.org/developer/swift/apache_deployment_guide.html


On Fri, Jun 28, 2013 at 6:26 AM, Chmouel Boudjnah chmo...@enovance.com
wrote:
  On Fri, Jun 28, 2013 at 2:00 AM, Ali, Saqib docbook@gmail.com
  wrote:
   Is there anything we can do to work around this, while someone from the
   jClouds community fixes this issue?


  I would believe a jclouds fix would be faster to get in than to try to
  agree on a hack to do on swift.

  Chmouel.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Re-balancing of RING does not level up my objects in old devices

2013-06-27 Thread David Hadas
You need the daemons running 24/7, not only when you change the ring :)
Specifically, the replicators must be running to get the objects to their
eventual place (which may have changed due to the ring rebalancing you
performed).

(You also need to copy the ring to all your servers.)

DH


Regards,
David Hadas,
Openstack Swift ATC, Architect, Master Inventor
IBM Research Labs, Haifa
Tel:Int+972-4-829-6104
Fax:   Int+972-4-829-6112




From:   Vengurlekar, Tushar V (USD-SWD) tushar.vengurle...@hp.com
To: openstack@lists.launchpad.net
openstack@lists.launchpad.net,
Date:   27/06/2013 05:43 PM
Subject:[Openstack] Re-balancing of RING does not level up my objects
in  old devices
Sent by:Openstack openstack-bounces
+davidh=il.ibm@lists.launchpad.net



Hello,

I have a question related to balancing swift objects across devices added
to the ring.

I have a Swift setup which was initially created with 2 devices in it (weight
100) and partition power 12.
These devices now started filling up to its capacity (~90% of its size).
So now I added 2 new devices (weight 100) to the ring and performed
rebalance.
The ring shows me all new configurations with correct partitions mapped to
each device.

But even after running ring rebalance command I do not see my objects are
balanced (level up) to the newer devices.
(i.e. older devices still consume 90% of space, while new devices are
empty) So do I need to perform any additional action (run object updater
once) to achieve this?

Thanks,
Tushar



-Original Message-
From: boun...@canonical.com [mailto:boun...@canonical.com] On Behalf Of
Tushar
Sent: Thursday, June 27, 2013 6:26 PM
To: Vengurlekar, Tushar V (USD-SWD)
Subject: [Question #231472]: Re-balancing of RING does not level up my
objects in old devices

New question #231472 on OpenStack Object Storage (swift):
https://answers.launchpad.net/swift/+question/231472
--
You received this question notification because you asked the question.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Re-balancing of RING does not level up my objects in old devices

2013-06-27 Thread David Hadas
Ok so you have a single replica indicated by the ring.

Beyond other potential issues with a single replica configuration (e.g.
intolerance to faults, and not being able to access data after ring
rebalancing as will be described below), I don't think there is much
experience with such a setup; maybe others can say something about how ring
rebalancing would work when there is a single replica defined. But even if
the new ring is well formed, you do need to activate the replicators to get
the data moving to its new location. Further, until the data is moved, you
will have no ability to access any of the partitions that were moved by the
ring (which is due to your use of a single replica, not a recommended
configuration for swift).

Rebalancing a ring does not move data. Think of the ring as a map: once
you change parts of the map (i.e. perform rebalancing), it does not mean
that any data has moved; instead it means that you can no longer reach
some of the data, because it is no longer where the map says it is... The
replicator is the one moving the data around to fit the new map. After it
is done, you will be able to reach all the data as before. Not being able to
access the data is due to the fact that you have only one replica. If you
had 3, and you could not reach one, you could still reach the other two
(and the ring makes sure not to move more than one replica at a time...)
DH


Regards,
David Hadas,
Openstack Swift ATC, Architect, Master Inventor
IBM Research Labs, Haifa
Tel:Int+972-4-829-6104
Fax:   Int+972-4-829-6112




From:   Vengurlekar, Tushar V (USD-SWD) tushar.vengurle...@hp.com
To: David Hadas/Haifa/IBM@IBMIL,
Cc: openstack@lists.launchpad.net
openstack@lists.launchpad.net, Openstack
openstack-bounces+davidh=il.ibm@lists.launchpad.net
Date:   27/06/2013 06:40 PM
Subject:RE: [Openstack] Re-balancing of RING does not level up my
objects in  old devices



David,
Thanks for the quick reply.

My configuration is a little different. The replication level is set to 1
(single copy). The ring database is stored on a common share (so I need not
copy it to all servers).
So do you mean the replicator service is responsible for object movement?
(I do not have the replicator service running in my setup.)
Does rebalancing the ring DB also move my objects across the devices, or does
it just map the partitions to the new devices?

Regards,
Tushar

-Original Message-
From: David Hadas [mailto:dav...@il.ibm.com]
Sent: Thursday, June 27, 2013 8:38 PM
To: Vengurlekar, Tushar V (USD-SWD)
Cc: openstack@lists.launchpad.net; Openstack
Subject: Re: [Openstack] Re-balancing of RING does not level up my objects
in old devices

You need the daemons running 24/7 not only when you change the ring :)
Specifically the replicators must be running to get the objects to their
eventual place (which may have changed by the ring rebalancing you had
performed).

(You also need to copy the ring to all you servers)

DH


Regards,
David Hadas,
Openstack Swift ATC, Architect, Master Inventor IBM Research Labs, Haifa
Tel:Int+972-4-829-6104
Fax:   Int+972-4-829-6112




From:Vengurlekar, Tushar V (USD-SWD) tushar.vengurle...@hp.com
To:  openstack@lists.launchpad.net
openstack@lists.launchpad.net,
Date:27/06/2013 05:43 PM
Subject: [Openstack] Re-balancing of RING does not level up my
objects
in   old devices
Sent by: Openstack openstack-bounces
+davidh=il.ibm@lists.launchpad.net



Hello,

I have a question related to balancing swift objects across devices added
to the ring.

I have Swift setup which initially created with 2 devices in it (weight
100) and partition power 12.
These devices now started filling up to its capacity (~90% of its size).
So now I added 2 new devices (weight 100) to the ring and performed
rebalance.
The ring shows me all new configurations with correct partitions mapped to
each device.

But even after running ring rebalance command I do not see my objects are
balanced (level up) to the newer devices.
(i.e. older devices still consume 90% of space, while new devices are
empty) So do I need to perform any additional action (run object updater
once) to achieve this?

Thanks,
Tushar



-Original Message-
From: boun...@canonical.com [mailto:boun...@canonical.com] On Behalf Of
Tushar
Sent: Thursday, June 27, 2013 6:26 PM
To: Vengurlekar, Tushar V (USD-SWD)
Subject: [Question #231472]: Re-balancing of RING does not level up my
objects in old devices

New question #231472 on OpenStack Object Storage (swift):
https://answers.launchpad.net/swift/+question/231472
--
You received this question notification because you asked the question.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https

Re: [Openstack] [Keystone] Splitting the Identity Backend

2013-05-21 Thread David Chadwick

Hi Adam

I would propose splitting the backend into two conceptually distinct 
types of attributes, and then each of these high level types can be 
arbitrarily split into different databases depending upon their sources of 
authority and who administers them. Your proposal would be just one 
specialisation of this more general model.


The high level distinction I would make is between (read only) identity 
attributes and (writable) authorisation attributes. The latter are the 
ones used by the OpenStack services for making access control decisions, 
whilst the former are never used or seen by the OpenStack services, but 
are used by organisations to identify and group users into different 
sets. So HR databases and LDAP servers typically store these identity 
attributes.


An attribute mapping function is needed to map between the former and 
the latter.


We can then organise the user login function as follows:

1. A user logs in and is identified and authenticated, and a set of 
identity attributes are assigned to him by the authentication function. 
This could be from a read only LDAP service, or by a federated IDP. It 
should be pluggable and installation dependent. It could even  be done 
by the user presenting an X.509 certificate and the information 
extracted from it. This part of Keystone should be highly flexible and 
adaptable to suit different deployment models.


2. The attribute mapping function maps from his identity attributes to 
his authz attributes. This can be a null mapping function if needed e.g. 
if the read only backend LDAP happens to store the users OpenStack 
projects and roles. But in most cases it will not be null. The mappings 
are set up by the Keystone administrator.


3. The users authz attributes are stored in his Keystone entry, which 
must be a writeable database owned by Keystone. Each time the user 
logins, his authz attributes will be updated to match his current 
identity attributes. So if an organisation promotes an employee, and 
changes his LDAP attributes, this could have the effect of automatically 
escalating his rights in Openstack. Conversely, if an employee is 
demoted, his rights in OpenStack could be automatically downgraded. It 
would all depend upon what the mapping rules were ie. whether they were 
fixed to a user's login ID (in which case his authz attributes would not 
change) or whether they depended upon his roles in his organisation (in 
which case they would automatically change).


4. The token is created based on his authz attributes as now, and 
everything continues as now.


So taking the current mix of identity attributes that you identify 
below, they would be split as follows


Domains, Roles, and Projects would be stored in Keystone's writeable 
database (as they are authz attributes)
Groups and User Names (and Passwords) would be stored in the read only 
identity databases.

Role assignments would be done by the attribute mapping function.

If you want to split Domains into their own separate Keystone database, 
this fine, it does not effect the overall model. So, your proposal fits 
into this high level model, but this high level model provides much more 
flexibility to implementers and will allow for future expansion


regards

David

On 20/05/2013 17:46, Adam Young wrote:

Currently, the Identity backend  has Domains, Users , Groups, Roles,
Role Assignments and Projects.  I've proposed splitting it into 3
distinct pieces.  Domain, Identity, and Projects.

Here is the rationale:

Somewhere between a third and a half of the OpenStack deployments are
using LDAP.  However, the mapping from LDAP to Identity does not work.
LDAP is almost always a read only  datasource.   While Keystone *can*
manage these, it should also be possible to treat the users and groups
piece as externally managed.

In addition, several organizations have multiple LDAP servers. Not a
huge number of servers,  but more than one is a very common scenario due
to a merger.  Each of these should map to a domain. Thus, domain
management has to be extracted out of the LDAP backend.

Identity would contain users and groups.  Projects would contain
Projects, Roles, and Role Assignments.  Domains would contain only domains.

For people happily deploying SQL, nothing should change.  A single
Database instance can still serve all three backends.  It should only
mean removing some foreign key constraints.

For people that are deploying the current LDAP code and are happy with
the layout, we will continue to support the LDAP Project backend.


Say an organization has two LDAP servers, and also maintains a public
facing cloud backed by SQL.  Each of the two LDAP servers would have
configurations that correspond to the current layout, although limited
only to the user and group subtrees.  The domain registry  would live in
the SQL backend.  It would have two entries for the LDAP servers, and
these would be immutable.  Dynamic domain allocation and deletion would
work only

Re: [Openstack] even after deleting all container , , , it shows high disk space usges

2013-05-21 Thread David Hadas
Ashish,

Your email and the problem you described puzzled me.
I think I realize what had happened, but I am not sure.

First a background question:
1. what is the replication ratio used in your cluster?
   You indicated an SAIO - so you must have 3 replicas, but I am still not
sure how all that had happened with 3 replicas (makes some more sense with
1)

Next, please find out where your swift installation stores the data:
`grep -r devices /etc/swift` will give you one or more directories - e.g.
on my system it is:
/etc/swift/container-server/3.conf:devices = /srv/3/node
/etc/swift/container-server/2.conf:devices = /srv/2/node
/etc/swift/container-server/4.conf:devices = /srv/4/node
/etc/swift/container-server/1.conf:devices = /srv/1/node
/etc/swift/object-server/3.conf:devices = /srv/3/node
/etc/swift/object-server/2.conf:devices = /srv/2/node
/etc/swift/object-server/4.conf:devices = /srv/4/node
/etc/swift/object-server/1.conf:devices = /srv/1/node
/etc/swift/account-server/3.conf:devices = /srv/3/node
/etc/swift/account-server/2.conf:devices = /srv/2/node
/etc/swift/account-server/4.conf:devices = /srv/4/node
/etc/swift/account-server/1.conf:devices = /srv/1/node

Next look into the different tmp files of the different disks
2. Do you see a 4.5 GB file in tmp? If so, please indicate what files you
found and where.
(on my system I would be looking at /srv/*/node/*/tmp)

3. What is your disk configuration? Which disk serves as the swift device
for objects? How much disk space do you have available? You need 13.5GB to
serve a 4.5GB file with replication level of three.
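
(Regarding question 2, one example way to spot large leftover temp files,
assuming the SAIO-style paths above:)

find /srv/*/node/*/tmp -type f -size +1G -exec ls -lh {} \;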

See below.



 From: Study Kamaill study.i...@yahoo.com
 To: Thierry Carrez thie...@openstack.org,
 openstack@lists.launchpad.net openstack@lists.launchpad.net,
 Date: 21/05/2013 09:44 AM
 Subject: [Openstack] even after deleting all container , , , it
 shows high disk space usges
 Sent by: Openstack openstack-bounces
+davidh=il.ibm@lists.launchpad.net

 hi ,

 Can anyone help me .

 I have four storage nodes in my experimental setup (all-in-one swift
 setup). When I tried to upload a file of size 4.5GB it was taking
 too long to upload, so I terminated that job...

This is alarming, can you check the log files to see what went on?
Could it be that your disk became full?



 I tried to check the container where I uploaded it. It was
 showing nothing.
 Later I tried to check the disk space usage... it shows the system
 is using 4.5 GB there on those nodes.

I can't get why it is 4.5...
Maybe something went bad in one replica, but then if you have three, you
should have received an OK after the other two completed...
This is where I became really puzzled.

 I tried to delete all containers owned by that user.

If the object is not listed in the container, this would have no effect.


 but still I am not able to figure out how to remove all the
 content from my nodes...

I would search it in the tmp dirs (see above) first, given that Swift never
responded with 201 Created



 regards,
 AShish


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Guest PXE Boot

2013-05-13 Thread David Hill
Hello Monty,

I haven't had the time to play with baremetal yet, but it is on my todo
list.
I know I may be doing it wrong, but when I create my Linux/Windows/etc. images,
I'm using the kickstarting solution we already have in place, and I was simply
trying to PXE boot my images from my lab as it is faster (better hardware,
more memory).

I'm wondering if this patch could make it into trunk? I enjoyed the
libvirt.xml.template before it got removed, and found this solution (quick and easy).

I will definitely look at the baremetal (after seeing what it can do at the 
summit).

Thank you very much,

Dave


-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com] 
Sent: May-11-13 4:18 PM
To: David Hill
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Guest PXE Boot

Neat!

Have you seen any of the work around nova baremetal (which is
transitioning to be called ironic?) Related to that is a set of virtual
power drivers which allow for treating virtual machines like real
machines - so that you can use nova to pxe boot a kvm or a virtualbox or
a vmware instance.

I know it's not exactly the same thing, but I don't know what you're
trying to accomplish. Perhaps what you want is similar enough to work
together?

Monty

On 05/10/2013 12:55 PM, David Hill wrote:
 Hi guys,
 
 I was trying to PXE boot a guest for quite some time now and I think
 I've found a solution that is kind of hackish but pretty simple.   I'm
 not quite sure it's good to go in trunk but felt like I'd share it since
 I've been messing around with this for a while.
 
 If anybody has a better solution, I would really like to hear/see/try it ...
 
 Here is how I did it:
 
 First, patch the libvirt/driver.py file:
 
 --- /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py.orig  2013-05-10 16:25:17.787862177 +
 +++ /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py       2013-05-10 16:26:39.442022870 +
 @@ -87,6 +87,9 @@
  LOG = logging.getLogger(__name__)
 
  libvirt_opts = [
 +    cfg.StrOpt('default_guest_boot_dev',
 +               default='hd',
 +               help='Sets the default guest boot device'),
      cfg.StrOpt('rescue_image_id',
                 default=None,
                 help='Rescue ami image'),
 @@ -1792,7 +1795,7 @@
                                   instance['name'],
                                   ramdisk)
          else:
 -            guest.os_boot_dev = "hd"
 +            guest.os_boot_dev = FLAGS.default_guest_boot_dev
 
      if FLAGS.libvirt_type != "lxc" and FLAGS.libvirt_type != "uml":
          guest.acpi = True
 
 And add to nova.conf:
 default_guest_boot_dev=network
 
 And finally add to /etc/dnsmasq.conf:
 dhcp-boot=boot\x86\pxelinux.com,host_name,host_ip
 dhcp-no-override
 
 And restart dnsmasq.
 
 In my actual setup, the guest will PXE boot, show the menu for 60 seconds
 and then boot from the hard disk after the 60-second timeout.
 
 Thank you very much,
 
 Dave
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift questions.

2013-05-12 Thread David Hadas
Mark,

Regarding your first Q: Swift evenly balances the hard drives, so in a
correctly configured system you should not expect one hard drive to be much
fuller than the others. There is a manual mechanism in Swift to balance hard
drives by moving partitions to/from a drive, but you should not need to use it
under normal conditions; if your hard drives get full, the right thing to do
would likely be to add more hard drives.

In any case, you should not care about individual partitions 'getting full',
as partitions are not allocated any specific space and can grow and shrink as
needed as long as the hard drive they are on has space.

DH

Regards,
David Hadas
Research Staff Member, Master Inventor
IBM Research Labs, Haifa
Tel: Int+972-4-829-6104
Fax: Int+972-4-829-6112

----- "Openstack" openstack-bounces+davidh=il.ibm@lists.launchpad.net wrote: -----
To: "openstack@lists.launchpad.net" openstack@lists.launchpad.net
From: Mark Brown <ntdeveloper2...@yahoo.com>
Sent by: "Openstack" <openstack-bounces+davidh=il.ibm@lists.launchpad.net>
Date: 05/12/2013 07:50PM
Subject: [Openstack] Swift questions.

Hello guys,

Been looking at Swift for some projects, and had some very basic questions.

1. How does Swift determine a certain partition is full? And when it does
detect that, what does it do? Does it return an error to the client?
2. Regarding container sync, has anyone used container sync in their
implementations? It would be great to know your experiences, because real
world use case studies are scarce :)

-- Mark

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift questions.

2013-05-12 Thread David Hadas
Mark,

It would mark the server as having insufficient storage (to avoid retrying the
same server for a while) and try to place the object on an alternative node -
same as it would do with any other error. Once the object is placed on an
alternative node, the alternative node will try to send the object back to its
original place every once in a while, so that if the problem is resolved it
will move back to the right place.

This being said, there are N replicas - and this would repeat for each of the
N replicas. So if one server is full you can still access the data using the
other N-1 replicas, and you still maintain N replicas of the object in your
system (although one of them is misplaced until the administrator resolves the
space issue).

Swift does not use df; it tries to allocate the space on the disk, and if that
fails, it fails. Failure is just a natural part of life as far as Swift is
concerned ;)

Hope this helps.
DH

Regards,
David Hadas
Research Staff Member, Master Inventor
IBM Research Labs, Haifa
Tel: Int+972-4-829-6104
Fax: Int+972-4-829-6112

----- Mark Brown ntdeveloper2...@yahoo.com wrote: -----
To: David Hadas/Haifa/IBM@IBMIL
From: Mark Brown ntdeveloper2...@yahoo.com
Date: 05/12/2013 08:27PM
Cc: "openstack@lists.launchpad.net" openstack@lists.launchpad.net
Subject: Re: [Openstack] Swift questions.

Thanks for the response David.

I do understand Swift, by its design, tries to keep things in balance among
various nodes. I was curious what it does when it encounters a full partition
(say the hard disk is full)? Let's just say it is balanced and all nodes are
nearing capacity. If I don't add any nodes, what happens when it tries to
write to a specific node (which it was directed to based on the hashing ring)
and there is not enough space to write the object?

Also, what does it use to determine a full partition? Does it use a df?

Mark

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift questions.

2013-05-12 Thread David Hadas
Handoff is chosen based on the Ring at the proxy, after the object server
responds that it failed to store the data, or after any other error while
attempting to reach the object server (e.g. a timeout).

DH

Regards,
David Hadas
Research Staff Member, Master Inventor
IBM Research Labs, Haifa
Tel: Int+972-4-829-6104
Fax: Int+972-4-829-6112

----- Mark Brown ntdeveloper2...@yahoo.com wrote: -----
To: David Hadas/Haifa/IBM@IBMIL
From: Mark Brown ntdeveloper2...@yahoo.com
Date: 05/12/2013 08:55PM
Cc: "openstack@lists.launchpad.net" openstack@lists.launchpad.net
Subject: Re: [Openstack] Swift questions.

Thanks again, David. Definitely helps.

Is the alternative node you refer to here the "handoff" node? Is the handoff
node something that is in the ring database? I am trying to piece together
where in the stack this would happen. If it is transparent, it would probably
happen in the object server somehow, but it would need to know where the
handoff node is.

-- Mark.

Re: [Openstack] New code name for networks

2013-05-11 Thread David Shrewsbury
quark

Keeps the sciency theme going, same first two letters, and represents something
fundamental to other sciency stuff (much like networking is fundamental to
OpenStack).

http://en.wikipedia.org/wiki/Quark

-Dave


On May 11, 2013, at 4:13 PM, Monty Taylor mord...@inaugust.com wrote:

 Jeremy Stanly on IRC just suggested kumquat... but to that I respond:
 
 qumkuat
 
 Same benefits as qumutna - except it's more pronounceable.
 
 On 05/11/2013 04:07 PM, Monty Taylor wrote:
 I have been arguing for:
 
 mutnuaq
 
 Granted, it takes a minute to learn how to type, but it's just a little
 snarky, and it takes up the exact same number of letters. However, it
 does screw with sorting. SO - what about:
 
 qumutna
 
 It's a little bit easier to wrap your head around, it's still clearly an
 homage, and it should be super easy to bulk cut/replace.
 
 On 05/11/2013 03:58 PM, Davanum Srinivas wrote:
 Lattice
 
 -- dims
 
 On Sat, May 11, 2013 at 3:52 PM, Mark Turner m...@amerine.net wrote:
 Tubes
 
 ;-)
 
 
 On Sat, May 11, 2013 at 12:51 PM, Jason Smith jason.sm...@rackspace.com
 wrote:
 
 Hello,
 I understand why we had to give up Quantum code name but rather than just
 refer to it as networking let's come up with a new code name!
 
 Thoughts?
 
 Thanks,
 -js
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] New code name for networks

2013-05-11 Thread David Shrewsbury
Same first THREE letters actually. Bonus points.


On May 11, 2013, at 7:07 PM, David Shrewsbury shrewsbury.d...@gmail.com wrote:

 quark
 
 Keeps the sciency theme going, same first two letters, and represents 
 something
 fundamental to other sciency stuff (much like networking is fundamental to
 OpenStack).
 
 http://en.wikipedia.org/wiki/Quark
 
 -Dave
 
 
 On May 11, 2013, at 4:13 PM, Monty Taylor mord...@inaugust.com wrote:
 
 Jeremy Stanly on IRC just suggested kumquat... but to that I respond:
 
 qumkuat
 
 Same benefits as qumutna - except it's more pronounceable.
 
 On 05/11/2013 04:07 PM, Monty Taylor wrote:
 I have been arguing for:
 
 mutnuaq
 
 Granted, it takes a minute to learn how to type, but it's just a little
 snarky, and it takes up the exact same number of letters. However, it
 does screw with sorting. SO - what about:
 
 qumutna
 
 It's a little bit easier to wrap your head around, it's still clearly an
 homage, and it should be super easy to bulk cut/replace.
 
 On 05/11/2013 03:58 PM, Davanum Srinivas wrote:
 Lattice
 
 -- dims
 
 On Sat, May 11, 2013 at 3:52 PM, Mark Turner m...@amerine.net wrote:
 Tubes
 
 ;-)
 
 
 On Sat, May 11, 2013 at 12:51 PM, Jason Smith jason.sm...@rackspace.com
 wrote:
 
 Hello,
 I understand why we had to give up Quantum code name but rather than just
 refer to it as networking let's come up with a new code name!
 
 Thoughts?
 
 Thanks,
 -js
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Guest PXE Boot

2013-05-10 Thread David Hill
Hi guys,

I was trying to PXE boot a guest for quite some time now and I think I've 
found a solution that is kind of hackish but pretty simple.   I'm not quite 
sure it's good to go in trunk but felt like I'd share it since I've been 
messing around with this for a while.
If anybody has a better solution, I would really like to hear/see/try it ...

Here is how I did it:

First, patch the libvirt/driver.py file:
--- /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py.orig  2013-05-10 16:25:17.787862177 +
+++ /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py       2013-05-10 16:26:39.442022870 +
@@ -87,6 +87,9 @@
 LOG = logging.getLogger(__name__)
 
 libvirt_opts = [
+    cfg.StrOpt('default_guest_boot_dev',
+               default='hd',
+               help='Sets the default guest boot device'),
     cfg.StrOpt('rescue_image_id',
                default=None,
                help='Rescue ami image'),
@@ -1792,7 +1795,7 @@
                                  instance['name'],
                                  ramdisk)
         else:
-            guest.os_boot_dev = "hd"
+            guest.os_boot_dev = FLAGS.default_guest_boot_dev
 
     if FLAGS.libvirt_type != "lxc" and FLAGS.libvirt_type != "uml":
         guest.acpi = True


And add to nova.conf:
default_guest_boot_dev=network

And finally add to /etc/dnsmasq.conf:
dhcp-boot=boot\x86\pxelinux.com,host_name,host_ip
dhcp-no-override

And restart dnsmasq.

In my actual setup, the guest will PXE boot, show the menu for 60 seconds and then 
boot from the hard disk after the 60-second timeout.
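
For context, what that flag ends up controlling is the boot device element
nova writes into the libvirt domain XML. With default_guest_boot_dev=network
the intended result is roughly the following (illustrative snippet only, not
output copied from my setup):

<os>
  <type>hvm</type>
  <boot dev='network'/>
</os>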


Thank you very much,

Dave
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift container's rwx permissions

2013-04-29 Thread David Dobbins
Clay-

You are correct – the hadoop-swift filesystem is just an implementation of the 
Hadoop FileSystem class that uses swift for storage.  As such, it's just 
translating filesystem calls into HTTP requests to swift and is not accessing 
the filesystem of the object servers directly.

Shashank-

I'm curious about what version of that Hadoop-Swift-integration project you're 
trying to run.  You shouldn't have been able to create containers with it in 
any of the more recent versions.  You might want to try the converged branch 
of https://github.com/hortonworks/Hadoop-and-Swift-integration.  This is the 
branch that's getting submitted back to Apache for inclusion in Hadoop.

Hope that helps,
-David

From: Vaidy Gopalakrishnan gva...@gmail.commailto:gva...@gmail.com
Date: Friday, April 26, 2013 4:06 PM
To: Clay Gerrard clay.gerr...@gmail.commailto:clay.gerr...@gmail.com, David 
Dobbins david.dobb...@rackspace.commailto:david.dobb...@rackspace.com
Cc: Shashank Sahni shredde...@gmail.commailto:shredde...@gmail.com, 
openstack@lists.launchpad.netmailto:openstack@lists.launchpad.net 
openstack@lists.launchpad.netmailto:openstack@lists.launchpad.net
Subject: Re: [Openstack] Swift container's rwx permissions


[Hi Clay]

Including David who can answer this much better.

Vaidy


On Fri, Apr 26, 2013 at 12:30 PM, Clay Gerrard 
clay.gerr...@gmail.commailto:clay.gerr...@gmail.com wrote:
Wow, so I glanced quickly at the github project to try and get some context, 
but I think I actually start getting *more* confused when I see swift in the 
same class name as file-system ;)

I'd like you (or maybe vaidy, hi vaidy!) to correct me if I'm wrong, but this 
hadoop integration will *not* access the filesystem of the object servers 
directly?  Everything will happen on a pool of processing boxes that will talk 
to swift via HTTP - same as any other client?

In that case, the error message is just a leaky abstraction showing through.  
HDFS probably has permission errors that it tries to helpfully map back to file 
system constructs which just don't apply when you're trying to simulate a 
file system on object storage.  You'll have to get down to the HTTP to 
understand what's causing the error.  Presumably a 401 from Swift, so access to 
swift logs would be helpful.

OTOH, if we're *actually* talking about filesystem permissions, then I'm 
totally confused.  But ACL's definitely won't help.  They're just a row sitting 
in a sqlite database - probably on a totally different server from where the 
one replica of this object is sitting on the filesystem. Nothing you can set in 
the api will change the filesystem permissions of the directory structure or 
files on the object servers.

Maybe do you have some more overview info on the general approach?  I don't 
really have any Hadoop experience, so maybe it'd be better if there's a hadoop 
expert out there that also has some experience with swift and can help get you 
on the right track...

-Clay





On Fri, Apr 26, 2013 at 1:11 AM, Shashank Sahni 
shredde...@gmail.commailto:shredde...@gmail.com wrote:
Hi everyone,

I've been experimenting with using Swift(1.8) as Hadoop's DFS backend.

https://github.com/DmitryMezhensky/Hadoop-and-Swift-integration

After a few glitches, I'm able to create/access/delete objects/containers using 
hadoop's cli fs tool. But whenever I'm trying to run the job it fails with the 
following error.

ERROR security.UserGroupInformation: PriviledgedActionException as:dharmesh 
cause:java.io.IOException: The ownership/permissions on the staging directory 
swift://hadooptest.rackspace/tmp/app/mapred/staging/dharmesh/.staging is not as 
expected. It is owned by and permissions are rwxrwxrwx. The directory must be 
owned by the submitter dharmesh or by dharmesh and permissions must be rwx--

Note that, the local system username is dharmesh and the openstack account 
and associated tenant is dharmesh too.

I tried setting the permissions by creating tmp container using swift post 
-r 'dharmesh:dharmesh', but unfortunately ended up with the same result. Is 
there an other way to set rwx ACLs in swift?

--
Shashank Sahni

___
Mailing list: https://launchpad.net/~openstack
Post to : 
openstack@lists.launchpad.netmailto:openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Bridging question

2013-04-26 Thread David Wittman
Daniel,

This is the expected behavior. With nova-network, floating IPs (FLIPs) are
assigned as secondary addresses on the host interface, and traffic is routed to
your instances via NAT rules. I'd recommend reading the following blog post from
Mirantis for more information:

http://www.mirantis.com/blog/configuring-floating-ip-addresses-networking-openstack-public-private-clouds/
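
Concretely (the addresses and interface below are made up for illustration,
not taken from your setup), associating a floating IP makes nova-network do
roughly the equivalent of:

ip addr add 203.0.113.10/32 dev eth0
iptables -t nat -A nova-network-PREROUTING -d 203.0.113.10 -j DNAT --to-destination 10.0.0.3
iptables -t nat -A nova-network-float-snat -s 10.0.0.3 -j SNAT --to-source 203.0.113.10

which is why the floating address shows up on the host's public interface once
an instance with a floating IP is running.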

-Dave


On Fri, Apr 26, 2013 at 4:58 PM, Daniel Ellison dan...@syrinx.net wrote:

 Hi all,

 I have Nova all set up on a single server and am able to start/stop/delete
 VM instances no problem. I have a bridge at br100 which sits on eth1 and is
 not connected to anything. eth0 is connected to the Internet. Before
 installing Openstack I was using KVM and virsh to manage my VMs. In order
 to do the Openstack install with fewer working parts, I brought down all
 KVM instances and deleted the br0 bridge they were using.

 Everything works beautifully with respect to nova-network. Since I can't
 easily port my KVM instances to Openstack, I wanted to start them up again
 under virsh. I recreated the br0 bridge as it was before. So far so good. I
 can start my legacy VMs and all works as expected. There's only one
 issue, and I don't even know if it's important.

 Before starting a Nova VM eth0 has no IP, which is expected as it's being
 covered by br0. But when I start one of the Nova VMs that has a floating
 IP, eth0 gains its IP! Everything seems to continue working, but it doesn't
 make sense to me.

 I don't know if this is expected behaviour or I simply have things
 configured wrong. Since it all still works I'm not overly concerned, but it
 does bug me. If anyone has insight into this I would be grateful.

 Thanks,
 Daniel
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Accessing object data and metadata in Swift middleware

2013-04-22 Thread David Goetz
There are examples in swift.common.middleware of doing this.  

If you want to try changing the metadata on the way out you can look at: 

https://github.com/openstack/swift/blob/master/swift/common/middleware/staticweb.py#L367-L384

it makes use of the WSGIContext class which allows you to make a call down the 
pipeline and respond to it on the way back out.

If you want to just kinda peek at the object before sending the request you can 
use make_pre_authed_request as done here for containers:

https://github.com/openstack/swift/blob/master/swift/common/middleware/staticweb.py#L198-L201

that function will take auth out of the environment so you want to be careful 
about using it. If you want to keep auth you can do something along the lines 
of:

https://github.com/openstack/swift/blob/master/swift/common/middleware/bulk.py#L250-L259

which just makes a sub request using a copy of the current environment. In your 
case, after you get that response you'd probably just want to let the request 
continue on the pipeline instead of just completely overriding it like the bulk 
middleware does.
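
For illustration only - this is a rough sketch I just typed up (the class and
helper names are made up; it is not code from staticweb or bulk) - the
keep-auth flavor looks something like:

from swift.common.swob import Request


class PeekMiddleware(object):
    """Peek at an object's metadata, then pass the original request on."""

    def __init__(self, app, conf):
        self.app = app
        self.conf = conf

    def __call__(self, env, start_response):
        if env.get('REQUEST_METHOD') == 'GET':
            # Sub-request built from a copy of the current environment, so
            # the caller's auth token is reused (unlike make_pre_authed_request).
            sub_env = dict(env)
            sub_env['REQUEST_METHOD'] = 'HEAD'
            sub_req = Request.blank(env['PATH_INFO'], environ=sub_env)
            head_resp = sub_req.get_response(self.app)
            # head_resp.headers now carries the object's metadata
            # (X-Object-Meta-* and friends) and can be inspected here.
        # Let the original request continue down the pipeline unchanged.
        return self.app(env, start_response)


def filter_factory(global_conf, **local_conf):
    conf = global_conf.copy()
    conf.update(local_conf)
    return lambda app: PeekMiddleware(app, conf)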

David





On Apr 21, 2013, at 10:11 AM, Itamar O wrote:

 Hello list,
 I am new to OpenStack development, and trying to implement a simple Swift 
 middleware.
 I was able to successfully manipulate a PUT request for an object, processing 
 the data that was uploaded by the request and storing some information in the 
 object metadata.
 But now I am struggling with handling GET requests for objects.
 I would like to access the data and metadata of the requested object before 
 it is passed down the pipeline, but I have no clue how to achieve this.
 
 In case this is not the appropriate mailing list for this question, I 
 apologize, and would appreciate if someone could refer me to the correct list.
 Otherwise, any advice will be much appreciated!
 
 Thanks,
 - Itamar.
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] json - Static Large Objects

2013-04-17 Thread David Goetz
Here's a little script I have to test it out on my SAIO:

#!/bin/bash
export PASS='AUTH_tk36accf5200b143dd8883b9841965e6a2'
export URL='http://127.0.0.1:8080/v1/AUTH_dfg'

curl -i -H "X-Auth-Token: $PASS" $URL/hat -XPUT

curl -i -H "X-Auth-Token: $PASS" $URL/hat/one -XPUT -d '1'

curl -i -H "X-Auth-Token: $PASS" $URL/hat/two -XPUT -d '2'

echo `python -c 'import simplejson; print simplejson.dumps([{"path": "/hat/one", "etag": "b0baee9d279d34fa1dfd71aadb908c3f", "size_bytes": 5}, {"path": "/hat/two", "etag": "3d2172418ce305c7d16d4b05597c6a59", "size_bytes": 5}])'` | curl -i -H "X-Auth-Token: $PASS" $URL/hat/man?multipart-manifest=put -XPUT -Hcontent-type:text/plain -T -
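
(For reference, the manifest body that the python one-liner above produces is
just a JSON list with quoted keys and values:

[{"path": "/hat/one", "etag": "b0baee9d279d34fa1dfd71aadb908c3f", "size_bytes": 5},
 {"path": "/hat/two", "etag": "3d2172418ce305c7d16d4b05597c6a59", "size_bytes": 5}]

which is the legal-JSON form of the snippet shown in the docs.)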


you'd just need to switch out the PASS and URL with whatever you're using.  It 
creates a SLO object in $URL/hat/man. Oh- you'd also need to change your 
minimum segment size in your /etc/swift/proxy-server.conf if you wanted this to 
work… something like this:

[filter:slo]
use = egg:swift#slo
min_segment_size = 1


I also added support for Static Large Objects in python-swiftclient: 
https://github.com/openstack/python-swiftclient for example:

swift upload testcontainer testfile -S 1048576 --use-slo

creates a SLO object with 1MB segments.

David


On Apr 17, 2013, at 1:22 PM, david.loy wrote:

 This is my first post to this list so this may not be the appropriate place 
 to ask this question:
 
 I am trying to  upload a Static Large Object and have not been successful. I 
 believe the problem is the json format I'm using.
 
 The document description:
 http://docs.openstack.org/developer/swift/misc.html#deleting-a-large-object
 
 shows:
 
 json:
 [{path: /cont/object,
  etag: etagoftheobjectsegment,
  size_bytes: 1048576}, ...]
 
 which is not legal json.
 
 If anyone can send me a working json example for SLO I would appreciate. If 
 XML is supported,
 that would also be useful.
 
 Any help would really be appreciated.
 
 Thanks
 David
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum] Anybody implemented DMZ?

2013-04-12 Thread David Kang

 I did some experiments with two subnets - one for DMZ, the other
for non-DMZ. But it looks like separation of network traffic between them
doesn't work with two quantum routers.

 We use the linux-bridge plugin.
Network namespaces are not supported.

 When the two subnets (e.g. 10.12.83.0/24, 10.12.84.0/24) are created,
the Quantum network node has ports on both subnets (10.12.83.1/24,
10.12.84.1/24).
Two quantum routers were created, one for each subnet.
Pinging from a VM in 10.12.83.0/24 to a VM in 10.12.84.0/24 is routed by
the Quantum network node itself.
Before the Quantum router routes the packets to the external network,
the Quantum network node routes them internally because it knows both networks.
I want the traffic to be routed to the external network through the
Quantum router, but it doesn't happen.

 Am I doing something wrong?

 Thanks,
 David


- Original Message -
 In my reply I suggested you to create two quantum routers which I
 believe should solve this for you.
 
 
 
 
 quantum net-create DMZ-net --external=True
 quantum subnet-create --name DMZ-Subnet1 DMZ-net dmz_cidr # Public
 ip pool
 
 quantum net-create non-DMZ --external=True
 quantum subnet-create --name nonDMZ-Subnet1 non-DMZ dmz_cidr #
 Public ip pool
 
 
 
 
 
 quantum router-create DMZ-router
 quantum router-create non-DMZ-router
 quantum router-interface-add DMZ-router DMZ DMZ-Subnet1
 quantum router-interface-add non-DMZ-router nonDMZ-Subnet1
 
 
 quantum router-gateway-set DMZ-router DMZ-net
 quantum router-gateway-set non-DMZ-router non-DMZ
 
 
 
 
 On Thu, Apr 4, 2013 at 10:51 AM, David Kang  dk...@isi.edu  wrote:
 
 
 
 
 Hi Aaron,
 
 Thank you for your reply.
 
 We deploy one (quantum) subnet as a DMZ network and the other
 (quantum) subnet as a non-DMZ network.
 They are routed to the network node where the quantum services (dhcp, l3,
 linuxbridge) are running.
 They can talk to each other through the network node now.
 
 However, we do not want the network node to route the traffic
 between them directly.
 Instead we want them to be routed to different (external) routers so
 that we can apply filtering/firewalls/etc. to the traffic from the DMZ network.
 
 Do you think it is possible using two l3-agents or some other way?
 Currently, I manually set up routing for those two subnets.
 
 Thanks,
 David
 
 
 
 - Original Message -
  Hi David,
 
 
  The quantum network node would route traffic between the non-DMZ-DMZ
  network if both of those subnets are uplinked to the same quantum
  router. I believe if you create another router for your dmz hosts
  then
  traffic in/out of that network should route out to your physical
  infrastructure which will go through your router to do filtering.
 
 
  Thanks,
 
 
  Aaron
 
 
 
  On Wed, Apr 3, 2013 at 8:26 AM, David Kang  dk...@isi.edu  wrote:
 
 
 
  Hi,
 
  We are trying to set up Quantum network for non-DMZ and DMZ
  networks.
  The cloud has both non-DMZ networks and a DMZ network.
  We need to route traffic from DMZ network to a specific router
  before
  it reaches
  anywhere else in non-DMZ networks.
  However, Quantum Network Node routes the traffic between DMZ network
  and
  non-DMZ network within itself by default.
  Has anybody configured Quantum for this case?
  Any help will be appreciated.
  We are using Quantum linuxbridge-agent.
 
  Thanks,
  David
 
  --
  --
  Dr. Dong-In David Kang
  Computer Scientist
  USC/ISI
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help : https://help.launchpad.net/ListHelp
 
 --
 --
 Dr. Dong-In David Kang
 Computer Scientist
 USC/ISI

-- 
--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Quantum] Anybody implemented DMZ?

2013-04-04 Thread David Kang

 
 Hi Aaron,

 Thank you for your reply.

 We deploy one (quantum) subnet as a DMZ network and the other (quantum) subnet
as a non-DMZ network.
They are routed to the network node where the quantum services (dhcp, l3,
linuxbridge) are running.
They can talk to each other through the network node now.

 However, we do not want the network node to route the traffic between them
directly.
Instead we want them to be routed to different (external) routers so that
we can apply filtering/firewalls/etc. to the traffic from the DMZ network.

 Do you think it is possible using two l3-agents or some other way?
Currently, I manually set up routing for those two subnets.

 Thanks,
 David

- Original Message -
 Hi David,
 
 
 The quantum network node would route traffic between the non-DMZ-DMZ
 network if both of those subnets are uplinked to the same quantum
 router. I believe if you create another router for your dmz hosts then
 traffic in/out of that network should route out to your physical
 infrastructure which will go through your router to do filtering.
 
 
 Thanks,
 
 
 Aaron
 
 
 
 On Wed, Apr 3, 2013 at 8:26 AM, David Kang  dk...@isi.edu  wrote:
 
 
 
 Hi,
 
 We are trying to set up Quantum network for non-DMZ and DMZ networks.
 The cloud has both non-DMZ networks and a DMZ network.
 We need to route traffic from DMZ network to a specific router before
 it reaches
 anywhere else in non-DMZ networks.
 However, Quantum Network Node routes the traffic between DMZ network
 and
 non-DMZ network within itself by default.
 Has anybody configured Quantum for this case?
 Any help will be appreciated.
 We are using Quantum linuxbridge-agent.
 
 Thanks,
 David
 
 --
 --
 Dr. Dong-In David Kang
 Computer Scientist
 USC/ISI
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp

-- 
--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Quantum] Anybody implemented DMZ?

2013-04-03 Thread David Kang

 Hi,

 We are trying to set up Quantum network for non-DMZ and DMZ networks.
The cloud has both non-DMZ networks and a DMZ network.
We need to route traffic from DMZ network to a specific router before it reaches
anywhere else in non-DMZ networks.
However, Quantum Network Node routes the traffic between DMZ network and
non-DMZ network within itself by default.
Has anybody configured Quantum for this case?
Any help will be appreciated.
We are using Quantum linuxbridge-agent.

 Thanks,
 David

-- 
--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] DHCP release

2013-03-25 Thread David Hill
Hi guys,

I've found what the problem is...


1)  nova-dhcpbridge wasn't configured (see the example nova.conf lines below)

2)  The dnsmasq version provided with Red Hat seems to be buggy, and I created a bug 
report: https://bugzilla.redhat.com/show_bug.cgi?id=927349



With the provided version, the dhcp-script is called only once, at dnsmasq startup, 
and never on IP ack/release.
I've found a more recent package (dnsmasq-2.62-1.el6.rfx.x86_64.rpm), and with it 
everything is behaving as expected.
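
For anyone hitting point 1: the relevant nova.conf settings look roughly like
the following (the paths are the usual defaults on a RHEL-style install and may
differ on your system):

dhcpbridge=/usr/bin/nova-dhcpbridge
dhcpbridge_flagfile=/etc/nova/nova.conf
force_dhcp_release=True

nova-network then starts dnsmasq with --dhcp-script pointing at
nova-dhcpbridge, which is what updates the lease state on ack/release.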

Dave




From: Nathanael Burton [mailto:nathanael.i.bur...@gmail.com]
Sent: March-23-13 4:15 PM
To: David Hill
Cc: openstack@lists.launchpad.net; Robert Collins
Subject: Re: [Openstack] DHCP release


On Mar 23, 2013 4:02 AM, David Hill 
david.h...@ubisoft.commailto:david.h...@ubisoft.com wrote:


 
 From: Robert Collins 
 [robe...@robertcollins.netmailto:robe...@robertcollins.net]
 Sent: March 23, 2013 02:21
 To: David Hill
 Cc: Kevin Stevens; 
 openstack@lists.launchpad.netmailto:openstack@lists.launchpad.net
 Subject: Re: [Openstack] DHCP release

 On 23 March 2013 14:53, David Hill 
 david.h...@ubisoft.commailto:david.h...@ubisoft.com wrote:
  Hello Kevin,
 
  Thanks for replying to my question.   I was asking that question because if 
  we go there: 
  http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-vlan-networking.html
and look at the very bottom of the page, it suggests the following:
 
  # release leases immediately on terminate
  force_dhcp_release=true? (did I miss something?)
  # one week lease time
  dhcp_lease_time=604800
  # two week disassociate timeout
  fixed_ip_disassociate_timeout=1209600
 
  I tried that and if you have at some creation/destruction of virtual 
  machines, let's say 2046 in the same week, you'll end up burning the 2046 
  IPs because they're never disassociated.  At some point, nova-network 
  complains with no more fixed IP are available.  Changing 
  fixed_ip_disassociate_timeout to something smaller solves this issue.
  Is there any reasons why fixed_ip_disassociate_timeout should be bigger 
  than dhcp_lease_time?
 
  Also, I thought that by destroying a virtual machine, it would 
  release/disassociate the IP from the UUID since it has been destroyed 
  (DELETED).  I've turned on the debugging and with 
  fixed_ip_disassociate_timeout set to 600 seconds, it disassociates stale IPs 
  after they've been deleted for at least 600 seconds.  Is it a bug in our 
  setup/nova-network or nova-network/manage relies on the periodic task that 
  disassociate stale IPs in order to regain those IPs?
 
  Finally, wouldn't it be better to simply disassociate a released IP as soon 
  as the VM is deleted?  Since we deleted the VM, why keep it in the database?

 When you reuse an IP address you run the risk of other machines that
 have the IP cached (e.g. as DNS lookup result, or because they were
 configured to use it as a service endpoint) talking to the wrong
 machine. The long timeout is to prevent the sort of confusing hard to
 debug errors that that happen when machine A is replaced by machine C
 on A's IP address.

 My 2c: just make your pool larger. Grab 10/8 and have 16M ip's to play with.

 -Rob

 I'm not the network guy here, but if I use 10/8 and we already have
 10/8 in our internal network, this could easily become a problem, am I
 wrong?

 Also, if a VM is deleted, IMHO, it's destroyed with all its networking. I
 don't know if this is old-fashioned thinking or anything,  but when I destroy a VM in
 vSphere,  I expect it to disappear leaving no trace.   This is the cloud, and
 when I delete something,  I expect it to simply be deleted.

 My 2c, but I see your point and have nothing against it.


 Dave


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : 
 openstack@lists.launchpad.netmailto:openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

David,

I believe the biggest reason for the long timeout is historical based on bugs 
in dnsmasq [1].  You can probably just use the default of 600 now if you're 
using a new enough version of dnsmasq.

[1] - https://lists.launchpad.net/openstack/msg11696.html

Thanks,

Nate
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] DHCP release

2013-03-23 Thread David Hill


From: Robert Collins [robe...@robertcollins.net]
Sent: March 23, 2013 02:21
To: David Hill
Cc: Kevin Stevens; openstack@lists.launchpad.net
Subject: Re: [Openstack] DHCP release

On 23 March 2013 14:53, David Hill david.h...@ubisoft.com wrote:
 Hello Kevin,

 Thanks for replying to my question.   I was asking that question because if 
 we go there: 
 http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-vlan-networking.html
   and look at the very bottom of the page, it suggests the following:

 # release leases immediately on terminate
 force_dhcp_release=true? (did I miss something?)
 # one week lease time
 dhcp_lease_time=604800
 # two week disassociate timeout
 fixed_ip_disassociate_timeout=1209600

 I tried that and if you have at some creation/destruction of virtual 
 machines, let's say 2046 in the same week, you'll end up burning the 2046 IPs 
 because they're never disassociated.  At some point, nova-network complains 
 with no more fixed IP are available.  Changing 
 fixed_ip_disassociate_timeout to something smaller solves this issue.
 Is there any reasons why fixed_ip_disassociate_timeout should be bigger than 
 dhcp_lease_time?

 Also, I thought that by destroying a virtual machine, it would 
 release/disassociate the IP from the UUID since it has been destroyed 
 (DELETED).  I've turned on the debugging and with 
 fixed_ip_disassociate_timeout set to 600 seconds, it disassociates stale IPs 
 after they've been deleted for at least 600 seconds.  Is it a bug in our 
 setup/nova-network or nova-network/manage relies on the periodic task that 
 disassociate stale IPs in order to regain those IPs?

 Finally, wouldn't it be better to simply disassociate a released IP as soon as 
 the VM is deleted?  Since we deleted the VM, why keep it in the database?

When you reuse an IP address you run the risk of other machines that
have the IP cached (e.g. as DNS lookup result, or because they were
configured to use it as a service endpoint) talking to the wrong
machine. The long timeout is to prevent the sort of confusing hard to
debug errors that that happen when machine A is replaced by machine C
on A's IP address.

My 2c: just make your pool larger. Grab 10/8 and have 16M ip's to play with.

-Rob

I'm not the network guy here, but if I use 10/8 and we already have 10/8 
in our internal network, this could easily become a problem, am I wrong?

Also, if a VM is deleted, IMHO, it's destroyed with all its networking. I 
don't know if this is old-fashioned thinking or anything,  but when I destroy a VM in 
vSphere,  I expect it to disappear leaving no trace.   This is the cloud, and 
when I delete something,  I expect it to simply be deleted.

My 2c, but I see your point and have nothing against it.


Dave


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] DHCP release

2013-03-22 Thread David Hill
Hi guys,

I'm experiencing some kind of weird behaviour with our 
openstack setup here.
Let me explain:
I create an instance that gets an IP: 172.0.0.3
I destroy the instance.
I recreate an instance that will get another IP: 172.0.0.4.

If I wait 600 seconds between each test, 172.0.0.3 will be assigned again 
instead of 172.0.0.4.

Would it be possible that the IP de-allocation relies on the periodic task to 
do some clean up?

I'm asking because actually this doesn't work:
force_dhcp_release=true
dhcp_lease_time=604800
fixed_ip_disassociate_timeout=1209600

If I do this and stress test my lab, I will eventually run out of IPs!

But this works:
force_dhcp_release=true
dhcp_lease_time=604800
fixed_ip_disassociate_timeout=600

I will eventually start seeing my previously assigned IP addresses instead of 
running out of IPs.

Am I reading an old document that is outdated?

Thank you very much,

Dave




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] DHCP release

2013-03-22 Thread David Hill
Hello Kevin,

Thanks for replying to my question.   I was asking that question because if we 
go there: 
http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-vlan-networking.html
  and look at the very bottom of the page, it suggests the following:

# release leases immediately on terminate
force_dhcp_release=true? (did I miss something?)
# one week lease time
dhcp_lease_time=604800
# two week disassociate timeout
fixed_ip_disassociate_timeout=1209600

I tried that, and if you have some creation/destruction of virtual machines, 
let's say 2046 in the same week, you'll end up burning those 2046 IPs because 
they're never disassociated.  At some point, nova-network complains that no 
more fixed IPs are available.  Changing fixed_ip_disassociate_timeout to 
something smaller solves this issue.
Is there any reasons why fixed_ip_disassociate_timeout should be bigger than 
dhcp_lease_time?

Also, I thought that by destroying a virtual machine, it would 
release/disassociate the IP from the UUID since it has been destroyed 
(DELETED).  I've turned on the debugging and with fixed_ip_disassociate_timeout 
set to 600 seconds, it disassociates stale IPs after they've been deleted for at 
least 600 seconds.  Is it a bug in our setup/nova-network, or does 
nova-network rely on the periodic task that disassociates stale IPs in 
order to regain those IPs?   

Finally, wouldn't it be better to simply disassociate a released IP as soon as 
the VM is deleted?  Since we deleted the VM, why keep it in the database?

Thank you very much,

Dave

From: Kevin Stevens [kevin.stev...@rackspace.com]
Sent: March 22, 2013 18:01
To: David Hill; openstack@lists.launchpad.net
Subject: Re: [Openstack] DHCP release

David,

Maybe I misunderstand your question but I would expect this behavior.  The 
force_dhcp_release flag says 'send a DHCP release to the DHCP server'.  This 
doesn't mean that the IP is immediately available for use as it is still 
associated with the instance UUID in the nova database. The 
fixed_ip_disassociate_timeout flag disassociates the IP from the relevant 
instance  in the nova.fixed_ips table after the specified time.

Useful link:
http://docs.openstack.org/folsom/openstack-compute/admin/content/list-of-compute-config-options.html

Thanks,
Kevin
Rackspace

From: David Hill david.h...@ubisoft.commailto:david.h...@ubisoft.com
Date: Friday, March 22, 2013 12:59 PM
To: openstack@lists.launchpad.netmailto:openstack@lists.launchpad.net 
openstack@lists.launchpad.netmailto:openstack@lists.launchpad.net
Subject: [Openstack] DHCP release

Hi guys,

I’m experiencing some kind of weird behaviour with our 
openstack setup here.
Let me explain:
I create an instance that gets an IP: 172.0.0.3
I destroy the instance.
I recreate an instance that will get another IP: 172.0.0.4.

If I wait 600 seconds between each test, 172.0.0.3 will be attributed again 
instead of 172.0.0.4.

Would it be possible that the IP de-allocation relies on the periodic task to 
do some clean up?

I’m asking because actually this doesn’t work:
force_dhcp_release=true
dhcp_lease_time=604800
fixed_ip_disassociate_timeout=1209600

If I do this and stress test my lab, I will eventually run out of IPs!

But this works:
force_dhcp_release=true
dhcp_lease_time=604800
fixed_ip_disassociate_timeout=600

I will eventually start seeing my previously attributed IP address instead of 
running out of IPs.

Am I reading an old document that is outdated?

Thank you very much,

Dave




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] register a new panel in overrides.py

2013-03-20 Thread Lyle, David (Cloud Services)
There are a couple of changes that you need to make...

First, edit the overrides.py file (e.g., if we wanted to add the panel to the 
admin dashboard, this uses the admin dashboard slug 'admin'):

import horizon
from path_to_module.panel import YourNewPanelClass

admin_dashboard = horizon.get_dashboard("admin")
admin_dashboard.register(YourNewPanelClass)


Next, make sure your overrides.py file is being called in your settings.py:

HORIZON_CONFIG = {
    'dashboards': ('project', 'admin', 'settings'),
    ...,
    'customization_module': 'your_base_module.overrides'
}

-Dave

-Original Message-
From: openstack-bounces+david.lyle=hp@lists.launchpad.net 
[mailto:openstack-bounces+david.lyle=hp@lists.launchpad.net] On Behalf Of 
Wyllys Ingersoll
Sent: Wednesday, March 20, 2013 9:50 AM
To: openstack@lists.launchpad.net
Subject: [Openstack] register a new panel in overrides.py


Can someone give a pointer to how one goes about adding a new panel to an 
existing panel using overrides.py ?

I know my panel is working because if I hardcode it into an existing 
dashboard.py file, it is found and displayed.  I'd prefer to put it in 
overrides.py instead and am wondering how that would be coded.

thanks,
  Wyllys


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] register a new panel in overrides.py

2013-03-20 Thread Lyle, David (Cloud Services)
But you should be registering the Panel like
settings.register(EC2ListPanel)
or settings.register(ec2list.EC2ListPanel)

not ec2list

-Dave

-Original Message-
From: Wyllys Ingersoll [mailto:wyllys.ingers...@evault.com] 
Sent: Wednesday, March 20, 2013 11:04 AM
To: Lyle, David (Cloud Services)
Cc: openstack@lists.launchpad.net
Subject: Re: register a new panel in overrides.py


Thats not working for me.

My module is installed in 
/usr/lib/python2.7/dist-packages/horizon/dashboards/settings as 'ec2list', it 
is in the python path so thats not the issue.

overrides.py looks like this:

import horizon
import logging

settings = horizon.get_dashboard('settings')

LOG = logging.getLogger(__name__)

import ec2list

try:
    settings.register(ec2list)
except Exception as exc:
    LOG.debug("Error registering ec2list panel: %s" % exc)
-

I've also tried using ec2list.__class__, but then I get the following error:
Error registering ec2list panel: Only Panel classes or subclasses may be 
registered.

However, my ec2list Panel is a valid panel, as is evident by the fact that when 
I put it directly into the settings/dashboard.py file list of panels, it works 
just fine.  Here is the panel.py file:

--
from django.utils.translation import ugettext_lazy as _

import horizon
from horizon.dashboards.settings import dashboard

class EC2ListPanel(horizon.Panel):
    name = _("EC2 List Credentials")
    slug = 'ec2list'

dashboard.Settings.register(EC2ListPanel)
-






On Mar 20, 2013, at 12:34 PM, Lyle, David (Cloud Services) 
david.l...@hp.com wrote:

 There's a couple of changes that you need to make...
 
 First, edit the overrides.py file:  (e.g., if we wanted to add the panel to 
 the admin dashboard so this uses the admin dashboard slug: 'admin')
 
 import horizon
 from path_to_module.panel import YourNewPanelClass 
 
 admin_dashboard = horizon.get_dashboard("admin")
 admin_dashboard.register(YourNewPanelClass)
 
 
 Next, make sure your overrides.py file is being called in your settings.py
 
 HORIZON_CONFIG = {
    'dashboards': ('project', 'admin', 'settings'),
   ...,
   'customization_module': 'your_base_module.overrides'
 }
 
 -Dave
 
 -Original Message-
 From: openstack-bounces+david.lyle=hp@lists.launchpad.net 
 [mailto:openstack-bounces+david.lyle=hp@lists.launchpad.net] On Behalf Of 
 Wyllys Ingersoll
 Sent: Wednesday, March 20, 2013 9:50 AM
 To: openstack@lists.launchpad.net
 Subject: [Openstack] register a new panel in overrides.py
 
 
 Can someone give a pointer to how one goes about adding a new panel to an 
 existing panel using overrides.py ?
 
 I know my panel is working because if I hardcode it into an existing 
 dashboard.py file, it is found and displayed.  I'd prefer to put it in 
 overrides.py instead and am wondering how that would be coded.
 
 thanks,
  Wyllys
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] VM guest can't access outside world.

2013-03-19 Thread David Kang

 I also have the same problem with Quantum.
I don't know how to resolve it.
But I saw the following in 
http://docs.openstack.org/trunk/openstack-network/admin/content/connectivity.html.

External network. Used to provide VMs with Internet access in some deployment 
scenarios.  The IP addresses on this network should be reachable by anyone on 
the Internet.

 It looks like Quantum assumes the Network Node should have a public IP
address (not a 10.x.x.x address).
If the Network Node has a public IP address, the routing between a private
network and the public network is done once, on the Network Node, before a
packet reaches the public network.
But if the Network Node is itself on a private network, then a packet from a
VM has to go through two private networks to reach the public network.
It looks like Quantum does not handle this multiple-private-network case.

 Does anybody have any ideas/answers/corrections?
I cannot put the Network Node on a public network.
I hope someone has a solution to this problem.

 Thanks,
 David



- Original Message -
 Hi Jeff,
 Thanks for looking into this but the masquerade is still not working. I
 have more information and hope you will be able to help.
 
 I have a single bare metal with everything installed ( Nova-compute,
 network
 node, controller, etc... )
 
 There are four NICs on that box:
 NIC em1 connects to the external network with IP 10.38.5.251
 NIC em3 connects to the internal network with no IP configured
 em2 and em4 are disabled
 
 After everything is configured (adding router, net, subnet, etc.) and
 running, I ran ifconfig and found that em1 no longer has an IP, but a bridge
 has been created
 
 brq7f248f20-a6 Link encap:Ethernet HWaddr 00:21:9B:95:99:7A
 inet addr:10.38.15.251 Bcast:10.38.255.255 Mask:255.255.0.0
 
 em1 Link encap:Ethernet HWaddr 00:21:9B:95:99:7A
 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
 
 
 I think this is how quantum/linuxbridge works.
 
 
 I also created a floating IP range (10.38.17.1-254). Then I saw that a
 virtual NIC was created with IP 10.38.17.1, which I believe is the router IP
 for the NAT
 
 qg-0503ddc6-1d Link encap:Ethernet HWaddr 8E:57:D6:DA:2B:AA
 inet addr:10.38.17.1 Bcast:10.38.17.255 Mask:255.255.255.0
 
 
 Now I ran tcpdump on the OpenStack box (i.e. 10.38.5.251) and the target
 machine (10.38.1.2), then pinged 10.38.1.2 from my VM (192.168.151.4). I saw
 the packets arrive at 10.38.1.2, but with source IP address 192.168.151.4. I
 am supposed to see 10.38.17.1, right?
 
 20:52:43.492160 IP 192.168.151.4 > 10.38.1.2: ICMP echo request, id 17665, seq 5, length 64
 20:52:43.492170 IP 10.38.1.2 > 192.168.151.4: ICMP echo reply, id 17665, seq 5, length 64
 20:52:44.492597 IP 192.168.151.4 > 10.38.1.2: ICMP echo request, id 17665, seq 6, length 64
 20:52:44.492608 IP 10.38.1.2 > 192.168.151.4: ICMP echo reply, id 17665, seq 6, length 64
 20:52:45.492894 IP 192.168.151.4 > 10.38.1.2: ICMP echo request, id 17665, seq 7, length 64
 20:52:45.492906 IP 10.38.1.2 > 192.168.151.4: ICMP echo reply, id 17665, seq 7, length 64
 20:52:46.493183 IP 192.168.151.4 > 10.38.1.2: ICMP echo request, id 17665, seq 8, length 64
 20:52:46.493193 IP 10.38.1.2 > 192.168.151.4: ICMP echo reply, id 17665, seq 8, length 64
 
 
 I also think it is the IP masquerade rule, but it didn't work. I tried all
 three interfaces (em1, brq7f248f20-a6 and qg-0503ddc6-1d) but none of them
 worked. For some reason SNAT doesn't seem to happen.
 
 
 
 Here is the iptables status
 
 
 
 
 
 service iptables status
 Table: nat
 Chain PREROUTING (policy ACCEPT)
 num target prot opt source destination
 1 nova-compute-PREROUTING all -- 0.0.0.0/0 0.0.0.0/0
 2 quantum-l3-agent-PREROUTING all -- 0.0.0.0/0 0.0.0.0/0
 3 nova-api-PREROUTING all -- 0.0.0.0/0 0.0.0.0/0
 
 Chain POSTROUTING (policy ACCEPT)
 num target prot opt source destination
 1 nova-compute-POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0
 2 quantum-l3-agent-POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0
 3 quantum-postrouting-bottom all -- 0.0.0.0/0 0.0.0.0/0
 4 nova-api-POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0
 5 nova-postrouting-bottom all -- 0.0.0.0/0 0.0.0.0/0
 6 MASQUERADE all -- 0.0.0.0/0 0.0.0.0/0
 7 MASQUERADE all -- 0.0.0.0/0 0.0.0.0/0
 8 MASQUERADE all -- 0.0.0.0/0 0.0.0.0/0
 
 Chain OUTPUT (policy ACCEPT)
 num target prot opt source destination
 1 nova-compute-OUTPUT all -- 0.0.0.0/0 0.0.0.0/0
 2 quantum-l3-agent-OUTPUT all -- 0.0.0.0/0 0.0.0.0/0
 3 nova-api-OUTPUT all -- 0.0.0.0/0 0.0.0.0/0
 
 Chain nova-api-OUTPUT (1 references)
 num target prot opt source destination
 
 Chain nova-api-POSTROUTING (1 references)
 num target prot opt source destination
 
 Chain nova-api-PREROUTING (1 references)
 num target prot opt source destination
 
 Chain nova-api-float-snat (1 references)
 num target prot opt source destination
 
 Chain nova-api-snat (1 references)
 num target prot opt source destination
 1 nova-api-float-snat all -- 0.0.0.0/0 0.0.0.0/0
 
 Chain nova-compute-OUTPUT (1 references)
 num target prot opt source destination
 
 Chain

[Openstack] hostname change on hardware nodes

2013-03-14 Thread David Stearns
Hi all,
I'm trying to recover from a mass renaming of our hardware nodes and have
been having a bit of trouble.

After changing the hostname on all of them, nova-manage service list shows
all the old hostnames (in the dead state) as well as all the new hostnames. My
first question is how do I remove old services from this list (e.g., if the
hardware dies and I just want to remove it from the list completely)?

My second question is what else do I need to change to get the hostname
change to work correctly. Right now I believe most operations on instances
are broken because the host entry in the table points to the old host
instead of the new one, so deleting an instance will hang, etc.

Thanks
-David Stearns
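
For reference, on a Folsom-era deployment this usually comes down to fixing the
host columns in the nova database by hand. A rough sketch, assuming a MySQL
backend and placeholder hostnames (back up the database first):

from sqlalchemy import create_engine, text

# placeholder connection string and hostnames - adjust to your deployment
engine = create_engine('mysql://nova:password@controller/nova')

with engine.begin() as conn:
    # point existing instances at the renamed compute node
    conn.execute(text(
        "UPDATE instances SET host = 'new-node1' WHERE host = 'old-node1'"))
    # drop the stale (dead) service rows left behind by the old hostname
    conn.execute(text(
        "DELETE FROM services WHERE host = 'old-node1'"))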
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [openstack-dev] [Swift] Design note of geo-distributed Swift cluster

2013-03-05 Thread David Hadas
This discussion of geo-distributed Swift is of great interest to us as 
well. Yet, based on our analysis, the proposed ring-of-rings approach seems 
not to meet a basic requirement that we see.


One of the basic disadvantages (and advantages) of the consistent 
hashing at the core of the Swift Ring concept is that it takes control 
over the placement of objects. As long as one considers a fairly unified 
cluster - and does not care which object is placed where in that 
cluster, consistent hashing does a great job.


However, in the case of geo-distributed Swift, many customers do care 
and need control over the placement decision - hence, using consistent 
hashing to decide where an object should be placed will not do. We 
actually believe that placement decisions can be made at the resolution 
of containers - not individual objects. Hence, container sync seems like 
a reasonable starting point.
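
(As a concrete illustration of the container sync mechanism referred to above:
pointing one container at another is just a matter of setting two headers on
the source container. A sketch using python-swiftclient, with placeholder
URLs, token and key:)

from swiftclient import client

local_url = 'http://cluster-a.example.com:8080/v1/AUTH_test'
token = 'AUTH_tk_placeholder'

client.post_container(local_url, token, 'photos', headers={
    # destination container that replicas should be pushed to
    'X-Container-Sync-To':
        'http://cluster-b.example.com:8080/v1/AUTH_test/photos',
    # shared secret that must match the destination container's sync key
    'X-Container-Sync-Key': 'secret',
})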


We plan to contribute improvements to container sync, making it a more 
attractive, scalable, and easier-to-use replication mechanism, such that 
it can serve as the basis of a placement-aware system controlling where 
replicas reside in a geo-distributed Swift. It would be great if the 
community aligned on the need to offer control over placement between 
geo-distributed sites, but if this is not the case, we need to find a way 
to accommodate the different requirements without complicating the design.


Regards,
David Hadas

--
DH



Regards,
David Hadas
IBM Research Labs, Haifa

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift][keystone]: swift-init proxy start failed

2013-03-03 Thread Tao, Dao (David, ISS-MSL-SH)
Hi Pete,

I added the signing_dir to the authtoken config file and it works. Thanks a lot 
!

signing_dir = /tmp/keystone-signing-swif


Thanks, 
-David 

-Original Message-
From: Pete Zaitcev [mailto:zait...@redhat.com] 
Sent: 2013年3月4日 12:59 AM
To: Tao, Dao (David, ISS-MSL-SH)
Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
Subject: Re: [Openstack] [Swift][keystone]: swift-init proxy start failed

On Sat, 2 Mar 2013 04:56:38 +
Tao, Dao (David, ISS-MSL-SH) dao@hp.com wrote:

 OSError: [Errno 13] Permission denied: '/root/keystone-signing'

 [filter:authtoken]
 paste.filter_factory = keystone.middleware.auth_token:filter_factory

Add signing_dir to authtoken above. Swift should own it, although in practice 
it's not going to write to it. Something like /var/cache/swift will do.

-- Pete
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Are the Python APIs public or internal?

2013-03-01 Thread David Kranz
The Tempest (QA) team certainly considers them to be public and we just 
started getting some contributions that are testing novaclient. In other 
work I am also a consumer of several of these APIs so I really hope they 
don't break.


 -David

On 3/1/2013 8:50 AM, Dolph Mathews wrote:
I believe they should certainly be treated as public APIs -- just 
like any other library. I'd also treat them as stable if they've ever 
been included in a versioned release. That said, I'm sure it would be 
easy to find examples of methods and attributes within the library that 
are not intended to be consumed externally, but perhaps either the 
naming convention or documentation doesn't sufficiently indicate that.


In keystoneclient, we're making backwards-incompatible changes in a new 
subpackage (keystoneclient.v3) while maintaining compatibility in the 
common client code. For example, you should always be able to 
initialize the client with a tenant_id / tenant_name, even though the 
client will soon be using project_id / project_name internally to 
reflect our revised lingo.
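
(For example, client code written against the released v2.0 interface, like
the sketch below with placeholder credentials, should keep working:)

from keystoneclient.v2_0 import client

keystone = client.Client(username='demo',
                         password='secret',
                         # still accepted even as the internals move to
                         # project_id / project_name
                         tenant_name='demo',
                         auth_url='http://127.0.0.1:5000/v2.0')
print(keystone.auth_token)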



-Dolph


On Thu, Feb 28, 2013 at 11:07 PM, Lorin Hochstein 
lo...@nimbisservices.com wrote:


Here's an issue that came up in the operators doc sprint this week.

Let's say I wanted to write some Python scripts using the APIs
exposed by the python-*client packages. As a concrete example,
let's say I wrote a script that uses the keystone Python API
that's exposed in the python-keystoneclient package:


https://github.com/lorin/openstack-ansible/blob/master/playbooks/keystone/files/keystone-init.py

Are these APIs public or stable  in some meaningful way?
(i.e., can I count on this script still working across minor
release upgrades)? Or should they be treated like internal APIs
that could be changed at any time in the future? Or is this not
defined at all?

Lorin


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OpenStack][Swift][keystone]: swift-init proxy start failed

2013-03-01 Thread Tao, Dao (David, ISS-MSL-SH)
 ec2_extension user_crud_extension public_service

[pipeline:admin_api]
pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body 
json_body debug stats_reporting ec2_extension s3_extension crud_extension 
admin_service

[app:public_version_service]
paste.app_factory = keystone.service:public_version_app_factory

[app:admin_version_service]
paste.app_factory = keystone.service:admin_version_app_factory

[pipeline:public_version_api]
pipeline = stats_monitoring url_normalize xml_body public_version_service

[pipeline:admin_version_api]
pipeline = stats_monitoring url_normalize xml_body admin_version_service

[composite:main]
use = egg:Paste#urlmap
/v2.0 = public_api
/ = public_version_api

[composite:admin]
use = egg:Paste#urlmap
/v2.0 = admin_api
/ = admin_version_api


Thanks  Regards,
David

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Grizzly-3 development milestone available (Keystone, Glance, Nova, Horizon, Quantum, Cinder)

2013-02-22 Thread David Kranz
Now that we are at feature freeze, is there a description of 
incompatible configuration or API changes that have happened since Folsom?
That is, a description of how deploying Grizzly differs from deploying 
Folsom.


 -David

On 2/22/2013 7:21 AM, Thierry Carrez wrote:

Martinx - ジェームズ wrote:

  What is the status of the OpenStack Grizzly-3 Ubuntu packages?

  Can we already set it up using apt-get / aptitude? With packaged Heat,
Ceilometer, etc.?

  Which version is recommended to test Grizzly-3: Precise (via the testing
UCA) or Raring?

  Is Grizzly planned to be the default OpenStack for Raring?

I suspect it will take a few days for grizzly-3 to appear in Ubuntu, as
the tarballs were cut a few hours ago. As far as I know, Grizzly is
indeed the planned default OpenStack for Raring.




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] keystone middleware

2013-02-19 Thread David Chadwick

Hi Pat

do you expect the one central user store to be replicated, say in 
Keystone, or not replicated?


The approach we have taken is to assume that the user stores (we support 
multiple distributed ones) are external to Keystone, and will be managed 
by external administrators. When a user accesses OpenStack, a transient 
entry is created in Keystone's user database for the duration of the SSO 
token, and is then automatically removed afterwards. This does not 
affect role-based access controls, but will affect ACLs that currently 
use user IDs to identify the user, since these will change between 
login sessions. The solution is for the ACL to use a persistent identity 
attribute of the user which comes from the user store, rather than the 
transient Keystone user ID.


regards

David

On 18/02/2013 16:16, pat wrote:

Hi David,

Well, it might be useful. I forgot to add that I expect one (central) user 
store.

Thanks

  Pat

On Mon, 18 Feb 2013 16:11:05 +, David Chadwick wrote

Hi Pat

sounds like you need our federation software which was designed
specifically for this use case. We currently support SAML as the SSO
protocol, and have just added Keystone to Keystone SSO. I have also
written a blueprint to show how OAuthv2 and OpenConnect can be used
by writing custom plugin modules. So if you have your own
proprietary SSO protocol you can write plugin modules for this

Kristy can let you, Pat, have an alpha version for testing if you want it.

regards

David

On 18/02/2013 15:59, pat wrote:

Hello,

Sorry to disturb, but I have some questions regarding keystone middleware.

Some introduction to the problem: I need to integrate OpenStack into our existing
infrastructure, where all systems are integrated at the REST and Web level using an
SSO-like system (a token string with specific information is generated). The required
behavior is to allow users to log in once in the existing infrastructure and then
access OpenStack components without an additional log-in.

I assume this is possible by implementing custom keystone drivers for identity
and token. Is that correct?
Should I also implement a new policy and/or catalog driver?

If this is possible, I expect the keystone token to be the token generated by my
driver(s), and such a token to be used by all other OpenStack parts. Is
that correct?
Does this affect the way OpenStack internally validates tokens? Currently, when
validating a token, the admin token has to be passed to the validation request too. I
expect not.

Is it possible to chain multiple keystone authentication drivers? E.g. first
check my custom one and, if it fails, then check the SQL one.

I've searched the internet to find an example of keystone middleware, but I
didn't succeed :-\ Is there an example or step-by-step documentation
(something for an ... :-))? I've read the Middleware Architecture documentation
and my questions are based on this.

Thanks a lot for your help.

   Pat



Freehosting PIPNI - http://www.pipni.cz/


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp




Freehosting PIPNI - http://www.pipni.cz/




Freehosting PIPNI - http://www.pipni.cz/



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] keystone middleware

2013-02-18 Thread David Chadwick

Hi Pat

sounds like you need our federation software which was designed 
specifically for this use case. We currently support SAML as the SSO 
protocol, and have just added Keystone to Keystone SSO. I have also 
written a blueprint to show how OAuthv2 and OpenConnect can be used by 
writing custom plugin modules. So if you have your own proprietary SSO 
protocol you can write plugin modules for this


Kristy can let you, Pat, have an alpha version for testing if you want it.

regards

David


On 18/02/2013 15:59, pat wrote:

Hello,

Sorry to disturb, but I have some questions regarding keystone middleware.

Some introduction to the problem: I need to integrate OpenStack into our existing
infrastructure, where all systems are integrated at the REST and Web level using an
SSO-like system (a token string with specific information is generated). The required
behavior is to allow users to log in once in the existing infrastructure and then
access OpenStack components without an additional log-in.

I assume this is possible by implementing custom keystone drivers for identity
and token. Is that correct?
Should I also implement a new policy and/or catalog driver?

If this is possible, I expect the keystone token to be the token generated by my
driver(s), and such a token to be used by all other OpenStack parts. Is
that correct?
Does this affect the way OpenStack internally validates tokens? Currently, when
validating a token, the admin token has to be passed to the validation request too. I
expect not.

Is it possible to chain multiple keystone authentication drivers? E.g. first
check my custom one and, if it fails, then check the SQL one.

I've searched the internet to find an example of keystone middleware, but I
didn't succeed :-\ Is there an example or step-by-step documentation
(something for an ... :-))? I've read the Middleware Architecture documentation
and my questions are based on this.

Thanks a lot for your help.

  Pat
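
(To make the custom driver idea above concrete: keystone picks up its identity
backend from keystone.conf, so a custom backend is just a Python class wired in
there. A minimal sketch - the base class location and the authenticate()
signature follow the Folsom/Grizzly source layout, so double-check them against
the release you are running:)

# keystone.conf would point at the class below, e.g.
#
#   [identity]
#   driver = mysso.backend.SSOIdentity
#
from keystone.identity.backends import sql


class SSOIdentity(sql.Identity):
    """Identity backend that trusts an external SSO token instead of a
    password; users, tenants and roles stay in the stock SQL store."""

    def authenticate(self, user_id=None, tenant_id=None, password=None):
        # in this scheme 'password' carries the externally issued SSO token
        if not self._validate_sso_token(user_id, password):
            raise AssertionError('External SSO token rejected')
        user_ref = self.get_user(user_id)
        tenant_ref = self.get_tenant(tenant_id) if tenant_id else None
        metadata_ref = {}  # role/grant lookup omitted in this sketch
        return (user_ref, tenant_ref, metadata_ref)

    def _validate_sso_token(self, user_id, token):
        # placeholder: call the external SSO system's validation endpoint
        return token is not None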



Freehosting PIPNI - http://www.pipni.cz/


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] keystone delegate Athentication

2013-02-06 Thread David Chadwick
This is already available in a side branch on GitHub, in the 
federation code, written to support the following blueprint:


https://blueprints.launchpad.net/keystone/+spec/federation

We have a number of people already experimenting with the above code.

We have a newer version available in our labs which also supports the 
following blueprints:


https://blueprints.launchpad.net/keystone/+spec/role-mapping-service-keystone
https://blueprints.launchpad.net/keystone/+spec/adding-idps-to-service-catalog
https://blueprints.launchpad.net/keystone/+spec/mapping-distributed-admin

Let me know if you would like an alpha copy of the above for testing

regards

David


On 06/02/2013 14:54, Mballo Cherif wrote:

Hi everybody !

I am wondering if it’s possible to delegate keystone authentication to
an external server (I have a strong authentication
server) or an Identity Provider?

If I make modifications to the keystoneclient code, might it be possible?

Any ideas? Please help me!

Thanks !

Sherif!



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift + Keystone integration problem

2013-02-01 Thread David Goetz
Sounds like swift isn't listening on that port. What is the bind_port in your 
proxy-server.conf?


On Feb 1, 2013, at 12:53 PM, Andrey V. Romanchev wrote:

 Hello!
 I've installed swift + keystone and have incomprehensible problem
 
 First of all I get auth tokens
 curl -d '{"auth": {"tenantName": "service", 
 "passwordCredentials": {"username": "swift", "password": "swiftpass"}}}' -H 
 "Content-type: application/json" http://host:5000/v2.0/tokens | python 
 -mjson.tool
 
 This command works fine, I get token id and publicURL
 
 Then
 curl -H "X-AUTH-TOKEN: cf1d44080a184e6c8f94e3fe52e89d48" 
 http://host:/v1/AUTH_b74d4d57b1f5473bb2d8ffe5110a3d5a
 
 This command just hangs and that's all. No swift logs, no response.
 If I restart proxy server, I get on client side
 curl: (56) Recv failure: Connection reset by peer
 
 I'm completely stuck here. I even turned on debug logs in swift-proxy - no 
 result either.
 Is there any possibility to understand what's wrong?
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift + Keystone integration problem

2013-02-01 Thread David Goetz
I'm sorry - I didn't read that part about the proxy restart :)  The proxy may 
not log if it gets hung up in some middleware. What middleware do you have 
running?  You can try adding some log messages into the middleware you have 
running to find out where it hangs.
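
(Something as small as the following dropped into the proxy pipeline will show
how far requests get before they hang; the class and filter names here are made
up for illustration:)

import logging

LOG = logging.getLogger(__name__)


class RequestLogger(object):
    """Tiny WSGI middleware that logs each request passing through."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        LOG.info('proxy saw: %s %s', environ.get('REQUEST_METHOD'),
                 environ.get('PATH_INFO'))
        return self.app(environ, start_response)


def filter_factory(global_conf, **local_conf):
    def request_logger_filter(app):
        return RequestLogger(app)
    return request_logger_filter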

 
On Feb 1, 2013, at 1:37 PM, David Goetz wrote:

 Sounds like swift isn't listening on that port. What is the bind_port in your 
 proxy-server.conf?
 
 
 On Feb 1, 2013, at 12:53 PM, Andrey V. Romanchev wrote:
 
 Hello!
 I've installed swift + keystone and have incomprehensible problem
 
 First of all I get auth tokens
 curl -d '{"auth": {"tenantName": "service", 
 "passwordCredentials": {"username": "swift", "password": "swiftpass"}}}' -H 
 "Content-type: application/json" http://host:5000/v2.0/tokens | python 
 -mjson.tool
 
 This command works fine, I get token id and publicURL
 
 Then
 curl -H "X-AUTH-TOKEN: cf1d44080a184e6c8f94e3fe52e89d48" 
 http://host:/v1/AUTH_b74d4d57b1f5473bb2d8ffe5110a3d5a
 
 This command just hangs and that's all. No swift logs, no response.
 If I restart proxy server, I get on client side
 curl: (56) Recv failure: Connection reset by peer
 
 I'm completely stuck here. I even turned on debug logs in swift-proxy - no 
 result either.
 Is there any possibility to understand what's wrong?
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Key injection failure on boot

2013-01-11 Thread David Kranz
Sometimes when I boot a bunch of VMs seconds apart, using the key_name 
argument, some instance will not have its key injected.
I found a bug ticket marked "won't fix" with a comment from Vish that 
key injection was for developer convenience [1]. Of course
the personality argument could also be used to inject the file. This is 
odd because key_name is a documented part of nova client, as is the files
mechanism. So what is the recommended way to do what the key_name 
argument is documented to do?


I think if key_name is not intended to work it should be removed from 
nova client.


 -David


[1] https://bugs.launchpad.net/nova/+bug/967994

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Key injection failure on boot

2013-01-11 Thread David Kranz
Thanks Vish, but I am still a little confused. I am using an Ubuntu 
Precise cloud image, and normally when I pass a key name to boot, the 
public key shows up in ~ubuntu/.ssh/authorized_keys.
Looking at the console log, I presume it is the guest cloud-init that is 
doing that. But sometimes it doesn't happen. This has to be a bug somewhere, 
even if it is not in nova. There is a lot of mechanism here that I don't 
understand. If there is documentation somewhere about exactly how to 
use metadata to install an ssh key, I can't find it. Do you have any more 
advice?


 -David

On 1/11/2013 1:32 PM, Vishvananda Ishaya wrote:

Key name is the recommended method, but injecting it into the guest is not. The 
key should be downloaded from the metadata server using a guest process like 
cloud-init.

Vish
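
(For what it's worth, what cloud-init does here boils down to fetching the key
from the EC2-style metadata service inside the guest; a minimal sketch of that
fetch, which is also handy for checking by hand whether the metadata service
is reachable from the instance:)

# run inside the guest; 169.254.169.254 is the standard metadata address
import urllib2

url = 'http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key'
pubkey = urllib2.urlopen(url, timeout=5).read().strip()
with open('/home/ubuntu/.ssh/authorized_keys', 'a') as f:
    f.write(pubkey + '\n')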

On Jan 11, 2013, at 10:20 AM, David Kranz david.kr...@qrclab.com wrote:


Sometimes when I boot a bunch of VMs seconds apart, using the key_name 
argument, some instance will not have its key injected.
I found a bug ticket marked "won't fix" with a comment from Vish that key injection was 
for developer convenience [1]. Of course
the personality argument could also be used to inject the file. This is odd 
because key_name is a documented part of nova client, as is the files
mechanism. So what is the recommended way to do what the key_name argument is 
documented to do?

I think if key_name is not intended to work it should be removed from nova 
client.

-David


[1] https://bugs.launchpad.net/nova/+bug/967994

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Installing Dashboard standalone

2012-12-20 Thread David Busby
Hi Guillermo,

Would not modifying the local_settings.py and changing the OPENSTACK_HOST
to reference a node other than 127.0.0.1 resolve the issue?

Cheers

David
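
(Concretely, in the dashboard node's local_settings.py, something along these
lines - the address is a placeholder for your controller:)

OPENSTACK_HOST = "192.168.1.10"   # the node running keystone, not 127.0.0.1
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST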





On Thu, Dec 20, 2012 at 1:49 AM, Guillermo Alvarado 
guillermoalvarad...@gmail.com wrote:

 BTW I am trying to use my own version of the openstack-dashboard/
 horizon because I made some modifications to the GUI. My version is based
 on the Essex release. Can anybody please help me with this?


 2012/12/19 Guillermo Alvarado guillermoalvarad...@gmail.com

 I installed the openstack-dashboard but I have this error in the Apache
 logs:

 ImproperlyConfigured: Error importing middleware horizon.middleware:
 cannot import name users



 2012/12/19 Guillermo Alvarado guillermoalvarad...@gmail.com

 Hi everyone,

 I want to install the openstack-dashboard/horizon standalone; I mean, I
 want to have a node for compute, a node for the controller and a node for the
 dashboard. How can I achieve this?

 Thanks in advance,
 Best Regards.




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread David Busby
Hi Andrew,

Is this for glance or nova ?

For nova change:

state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp

in your nova.conf

For glance I'm unsure, may be easier to just mount gluster right onto
/var/lib/glance (similarly could do the same for /var/lib/nova).

And just my £0.02: I've had no end of problems getting gluster to play
nice on small POC clusters (3-5 nodes; I've tried NFS, tried glusterfs,
tried 2-replica N-distribute setups, with many a random glusterfs death), and
as such I have opted for using ceph.

From the brief reading I've been doing, ceph's RADOS can also be used with
cinder.


Cheers

David





On Thu, Dec 20, 2012 at 1:53 PM, Andrew Holway a.hol...@syseleven.dewrote:

 Hi,

 If I have /nfs1mount and /nfs2mount or /nfs1mount and /glustermount can I
 control where openstack puts the disk files?

 Thanks,

 Andrew

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread David Busby
Hi Andrew,

An interesting idea, but I am unaware of nova supporting storage affinity in
any way; it does support host affinity IIRC. As a kludge you could have, say,
some nova compute nodes using your "slow" mount and reserve the "fast"
mount nodes as required, perhaps even defining separate zones for
deployment?

Cheers

David





On Thu, Dec 20, 2012 at 2:53 PM, Andrew Holway a.hol...@syseleven.dewrote:

 Hi David,

 It is for nova.

 I'm not sure I understand. I want to be able to say to openstack:
 openstack, please install this instance (A) on this mountpoint, and please
 install this instance (B) on this other mountpoint. I am planning on
 having two NFS / Gluster based stores, a fast one and a slow one.

 I probably will not want to say please every time :)

 Thanks,

 Andrew

 On Dec 20, 2012, at 3:42 PM, David Busby wrote:

  Hi Andrew,
 
  Is this for glance or nova ?
 
  For nova change:
 
  state_path = /var/lib/nova
  lock_path = /var/lib/nova/tmp
 
  in your nova.conf
 
  For glance I'm unsure, may be easier to just mount gluster right onto
 /var/lib/glance (similarly could do the same for /var/lib/nova).
 
  And just my £0.02 I've had no end of problems getting gluster to play
 nice on small POC clusters (3 - 5 nodes, I've tried nfs tried glusterfs,
 tried 2 replica N distribute setups with many a random glusterfs death), as
 such I have opted for using ceph.
 
  ceph's rados can also be used with cinder from the brief reading I've
 been doing into it.
 
 
  Cheers
 
  David
 
 
 
 
 
  On Thu, Dec 20, 2012 at 1:53 PM, Andrew Holway a.hol...@syseleven.de
 wrote:
  Hi,
 
  If I have /nfs1mount and /nfs2mount or /nfs1mount and /glustermount can
 I control where openstack puts the disk files?
 
  Thanks,
 
  Andrew
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] two or more NFS / gluster mounts

2012-12-20 Thread David Busby
I may of course be entirely wrong :) which would be cool if this
is achievable / on the roadmap.

At the very least if this is not already in discussion I'd raise it on
launchpad as a potential feature.




On Thu, Dec 20, 2012 at 3:19 PM, Andrew Holway a.hol...@syseleven.dewrote:

 Ah shame. You can specify different storage domains in oVirt.

 On Dec 20, 2012, at 4:16 PM, David Busby wrote:

  Hi Andrew,
 
  An interesting idea, but I am unaware if nova supports storage affinity
 in any way, it does support host affinity iirc, as a kludge you could have
 say some nova compute nodes using your slow mount and reserve the fast
 mount nodes as required, perhaps even defining separate zones for
 deployment?
 
  Cheers
 
  David
 
 
 
 
 
  On Thu, Dec 20, 2012 at 2:53 PM, Andrew Holway a.hol...@syseleven.de
 wrote:
  Hi David,
 
  It is for nova.
 
  Im not sure I understand. I want to be able to say to openstack;
 openstack, please install this instance (A) on this mountpoint and please
 install this instance (B) on this other mountpoint. I am planning on
 having two NFS / Gluster based stores, a fast one and a slow one.
 
  I probably will not want to say please every time :)
 
  Thanks,
 
  Andrew
 
  On Dec 20, 2012, at 3:42 PM, David Busby wrote:
 
   Hi Andrew,
  
   Is this for glance or nova ?
  
   For nova change:
  
   state_path = /var/lib/nova
   lock_path = /var/lib/nova/tmp
  
   in your nova.conf
  
   For glance I'm unsure, may be easier to just mount gluster right onto
 /var/lib/glance (similarly could do the same for /var/lib/nova).
  
   And just my £0.02 I've had no end of problems getting gluster to play
 nice on small POC clusters (3 - 5 nodes, I've tried nfs tried glusterfs,
 tried 2 replica N distribute setups with many a random glusterfs death), as
 such I have opted for using ceph.
  
   ceph's rados can also be used with cinder from the brief reading I've
 been doing into it.
  
  
   Cheers
  
   David
  
  
  
  
  
   On Thu, Dec 20, 2012 at 1:53 PM, Andrew Holway a.hol...@syseleven.de
 wrote:
   Hi,
  
   If I have /nfs1mount and /nfs2mount or /nfs1mount and /glustermount
 can I control where openstack puts the disk files?
  
   Thanks,
  
   Andrew
  
   ___
   Mailing list: https://launchpad.net/~openstack
   Post to : openstack@lists.launchpad.net
   Unsubscribe : https://launchpad.net/~openstack
   More help   : https://help.launchpad.net/ListHelp
  
 
 
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [swift] RAID Performance Issue

2012-12-19 Thread David Busby
Hi Zang,

As JuanFra points out, there's not much sense in using Swift on top of RAID,
as Swift handles replication itself; on top of this, RAID introduces a write
penalty (http://theithollow.com/2012/03/21/understanding-raid-penalty/), which
in turn leads to performance issues; refer to the link for the write penalty
per configuration.
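
A quick back-of-the-envelope illustration of that penalty, with made-up
numbers:

# rough effective write IOPS for a 10-disk array of ~150 IOPS drives
raw_iops = 10 * 150                              # 1500
write_penalty = {0: 1, 1: 2, 10: 2, 5: 4, 6: 6}  # commonly quoted penalties
print(raw_iops / write_penalty[5])               # RAID 5  -> 375
print(raw_iops / write_penalty[10])              # RAID 10 -> 750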

As I recall (though this was from way back in October 2010), the suggested
method of deploying swift is onto standalone XFS drives, leaving swift to
handle the replication and distribution.


Cheers

David






On Wed, Dec 19, 2012 at 9:12 AM, JuanFra Rodriguez Cardoso 
juanfra.rodriguez.card...@gmail.com wrote:

 Hi Zang:

 Basically, it makes no sense to use Swift on top of RAID because Swift
 already provides its own replication scheme.

 Regards,
 JuanFra.

 2012/12/19 Hua ZZ Zhang zhu...@cn.ibm.com

 Hi,

 I have read the Swift admin document and found a recommendation
 not to use RAID 5 or 6, because Swift performance degrades quickly with it.
 Can anyone explain why this happens? If the RAID is done by a hardware
 RAID controller, will the performance issue still exist?
 Can anyone share this kind of experience of using RAID with Swift?
 Any suggestions are appreciated.

 -Zhang Hua

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] Fwd: [keystone] Tokens representing authorization to projects/tenants in the Keystone V3 API

2012-11-13 Thread David Chadwick

Hi Adam

you have pointed out an important difference between an unscoped token 
and a scoped one. The former can only be used with keystone, the latter 
with a cloud service. This also implies that a scoped token can only 
have the scope of a single service, and not multiple services. The user 
must swap the unscoped token for a set of scoped tokens if he wishes to 
access a set of cloud services.


This model is clean and consistent.

Concerning your attack scenario, then the best point of attack is either 
the client (steal his token(s)) or Keystone (get access to any service)


regards

David

On 13/11/2012 14:38, Adam Young wrote:

On 11/10/2012 10:58 AM, David Chadwick wrote:

I agree with the vast majority of what Jorge says below. The idea I
would like to bounce around is that of the unscoped token.

What does it mean conceptually? What is its purpose? Why do we need
it? Why should a user be given an unscoped token to exchange at a
later time for a scoped token?

My view is as follows:
i) a user is authenticated and identified, and from this, keystone can
see that the user has access to a number of different tenants and
services. Keystone creates an unscoped token to encapsulate this. Note
that the unscoped token is scoped to the services/tenants available to
this user, and consequently it is different for each identified user.
Thus it does have some scope i.e. it cannot be swapped for access to
any service by any tenant.
ii) the user must choose which service/tenant he wishes to activate.
This is in line with the principle of least privileges.
iii) the user informs keystone which service(s) and tenant(s) he
wishes to access and Keystone swaps the unscoped token for one that is
scoped to the choice of the user.

The issue then becomes, what is the allowable scope of a scoped token?
Jorge below believes it should cover multiple
services/endpoints/tenants. So one must then ask, what is the
difference between the most widely scoped scoped-token and the
unscoped token? Surely they will have the same scope won't they? In
which case there is no need for both concepts.


let's compare with Kerberos:  In my view an unscoped token is
comparable with a ticket granting ticket:  it cannot be used with any
service other than the KDC, and it can only be used to get service
tickets. A service ticket can only be used with a specific service.  If
that service gets compromised, any tickets it has are useless for access
to other resources.


If an unscoped token can be used against a wide array of services, we
have just provided a path for an elevation of privileges attack. If I
know that a service consumes tokens which can be used on a wide number
of other services, I can target my attacks against that service in order
to get access everywhere.

If we are going to provide this functionality, it should be turned off
by default.



Comments please

regards

David

On 23/10/2012 06:25, Jorge Williams wrote:

Here's my view:

On making the default token a configuration option:  Like the idea.
  Disabling the option by default.  That's fine too.

On scoping a token to a specific endpoint:  That's fine, though I
believe that that's in the API today.  Currently, the way that we scope
tokens to endpoints is by validating against the service catalog. I'm
not sure if the default middleware checks for this yet, but the Repose
middleware does.  If you try to use a token in an endpoint that's not in
the service catalog the request fails -- well, if the check is turned
on.
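
(As a rough illustration of that catalog check, validating middleware can do
something like the following against v2.0 token data; the function name is
illustrative:)

def endpoint_in_catalog(token_data, my_endpoint_url):
    """Return True if this endpoint appears in the token's service catalog."""
    for service in token_data['access']['serviceCatalog']:
        for endpoint in service.get('endpoints', []):
            if endpoint.get('publicURL', '').startswith(my_endpoint_url):
                return True
    return False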

Obviously, I'd like the idea of scoping a single token to multiple
tenants / endpoints.

I don't like the idea of calling tokens sloppy tokens -- it's
confusing.   All you have to say is that a token has a scope -- and the
scope of the token is the set of resources that the token can provide
access to.  You can limit the scope of a token to a tenant, to a
endpoint, to a set of endpoints or tenants etc -- what limits you place
on the scope of an individual token should be up to the operator.

Keep in mind that as we start digging into delegation and fine grained
authorization (after Grizzly, I'm sure), we'll end up with tokens that
have a scope of a subset of resources in a single or multiple tenants.
  So calling them sloppy now is just confusing.  Simply stating that a
token has a scope (as I've defined above) should suffice.  This is part
of the reason why I've never liked the term unscoped token, because an
unscoped token does have a scope. It just so happens that the scope of
that token is the resource that provides a list of available tenants.

-jOrGe W.

On Oct 22, 2012, at 9:57 PM, Adam Young wrote:


Are you guys +1 ing the original Idea, my suggestion to make it
optional, the fact that I think we should call these sloppy tokens?

On 10/22/2012 03:40 PM, Jorge Williams wrote:

+1 here too.

At the end of the day, we'd like the identity API to be flexible
enough to allow the token to be scoped in a manner that the deployer
sees fit.  What

Re: [Openstack] [openstack-dev] Fwd: [keystone] Tokens representing authorization to projects/tenants in the Keystone V3 API

2012-11-13 Thread David Chadwick

Hi Guang


On 13/11/2012 16:14, Yee, Guang wrote:

An unscoped token is basically implicitly scoped to Keystone service right?
One should be able to use an unscoped token to reset his password, and ask
Keystone for information pertaining to himself, such as what are his roles,
what services/endpoints are available to him, and what are his tenants, etc.
This is helpful for administration UIs such as MC.


agreed



There's a blueprint to address the need to scope the token down to the
service or endpoint level. Basically, service and endpoint isolation.


I have read your blueprint and I have some comments/questions on it. How 
do you want these to be addressed? By email, or by edits to your blueprint?


regards

David



https://blueprints.launchpad.net/keystone/+spec/service-isolation-and-roles-delegation
http://wiki.openstack.org/Keystone/Service-Isolation-And-Roles-Delegation

It also addresses the intricacies of role delegation, which should be very
beneficial for cloud services.



Guang



-Original Message-
From: openstack-bounces+guang.yee=hp@lists.launchpad.net
[mailto:openstack-bounces+guang.yee=hp@lists.launchpad.net] On Behalf Of
David Chadwick
Sent: Tuesday, November 13, 2012 7:32 AM
To: Adam Young
Cc: OpenStack Development Mailing List; openstack@lists.launchpad.net
Subject: Re: [Openstack] [openstack-dev] Fwd: [keystone] Tokens representing
authorization to projects/tenants in the Keystone V3 API

Hi Adam

you have pointed out an important difference between an unscoped token
and a scoped one. The former can only be used with keystone, the latter
with a cloud service. This also implies that a scoped token can only
have the scope of a single service, and not multiple services. The user
must swap the unscoped token for a set of scoped tokens if he wishes to
access a set of cloud services.

This model is clean and consistent.

Concerning your attack scenario, then the best point of attack is either
the client (steal his token(s)) or Keystone (get access to any service)

regards

David

On 13/11/2012 14:38, Adam Young wrote:

On 11/10/2012 10:58 AM, David Chadwick wrote:

I agree with the vast majority of what Jorge says below. The idea I
would like to bounce around is that of the unscoped token.

What does it mean conceptually? What is its purpose? Why do we need
it? Why should a user be given an unscoped token to exchange at a
later time for a scoped token?

My view is as follows:
i) a user is authenticated and identified, and from this, keystone can
see that the user has access to a number of different tenants and
services. Keystone creates an unscoped token to encapsulate this. Note
that the unscoped token is scoped to the services/tenants available to
this user, and consequently it is different for each identified user.
Thus it does have some scope i.e. it cannot be swapped for access to
any service by any tenant.
ii) the user must choose which service/tenant he wishes to activate.
This is in line with the principle of least privileges.
iii) the user informs keystone which service(s) and tenant(s) he
wishes to access and Keystone swaps the unscoped token for one that is
scoped to the choice of the user.

The issue then becomes, what is the allowable scope of a scoped token?
Jorge below believes it should cover multiple
services/endpoints/tenants. So one must then ask, what is the
difference between the most widely scoped scoped-token and the
unscoped token? Surely they will have the same scope won't they? In
which case there is no need for both concepts.


let's compare with Kerberos:  In my view an unscoped token is
comparaable with a ticket granting ticket:  it cannot be used with any
service other than the KDC, and it can only be used to get service
tickets. A service ticket can only be used with a specific service.  If
that service gets compromised, any tickets it has are useless for access
to other resources.


If an unscoped token can be used against a wide array of services, we
have just provided a path for an elevation of privileges attack. If I
know that a service consumes tokens which can be used on a wide number
of other services, I can target my attacks against that service in order
to get access everywhere.

If we are going to provide this functionality, it should be turned off
by default.



Comments please

regards

David

On 23/10/2012 06:25, Jorge Williams wrote:

Here's my view:

On making the default token a configuration option:  Like the idea.
   Disabling the option by default.  That's fine too.

On scoping a token to a specific endpoint:  That's fine, though I
believe that that's in the API today.  Currently, the way that we scope
tokens to endpoints is by validating against the service catalog. I'm
not sure if the default middleware checks for this yet, but the Repose
middleware does.  If you try to use a token in an endpoint that's not in
the service catalog the request fails -- well, if the check is turned
on.

Obviously, I'd like the idea

Re: [Openstack] [openstack-dev] Fwd: [keystone] Tokens representing authorization to projects/tenants in the Keystone V3 API

2012-11-13 Thread David Chadwick

It seems like we need a clear design for next-generation tokens that
everyone can agree on, but also an extensible design to cater for 
outliers. In our federation design doc we show the Token Issuing Service 
and Token Validation Service as plugin modules to Keystone that can be 
replaced, so that outliers can replace the standard service with one of 
their own choosing.


regards

David


On 13/11/2012 17:35, heckj wrote:

So maintaining a token scoped to just the user, and a mechanism to
scope it to a tenant sound like all goodness. We can absolutely keep
the API such that it can provide either.

Right now, our auth_token middleware implicitly requires a tenant in
that scoping to work. If someone wanted to support a token scoped to
just a user for the services, they'd need a different middleware
there. Keystone as a service *doesn't* use the auth_token middleware,
so with the V3 API we can make it provide services appropriately
based on a token scoped only to the user.

All that in place, allowing a token to be indeterminately scoped to
multiple tenants is fraught with security flaws, and if we continue
to provide unscoped tokens, that should obviate the need for tokens
scoped to multiple tenants.

- joe


On Nov 13, 2012, at 9:17 AM, David Chadwick d.w.chadw...@kent.ac.uk
wrote:

Hi Guang

On 13/11/2012 16:14, Yee, Guang wrote:

An unscoped token is basically implicitly scoped to Keystone
service right? One should be able to use an unscoped token to
reset his password, and ask Keystone for information pertaining
to himself, such as what are his roles, what services/endpoints
are available to him, and what are his tenants, etc. This is
helpful for administration UIs such as MC.


agreed


There's a blueprint to address the need to scope the token down
to the service or endpoint level. Basically, service and endpoint
isolation.


I have read your blueprint and I have some comments/questions on
it. How do you want these to be addressed? By email, or by edits to
your blueprint?

regards

David



https://blueprints.launchpad.net/keystone/+spec/service-isolation-and-roles-delegation

http://wiki.openstack.org/Keystone/Service-Isolation-And-Roles-Delegation




It also addresses the intricacies of role delegation, which should be very
beneficial for cloud services.



Guang



-Original Message- From:
openstack-bounces+guang.yee=hp@lists.launchpad.net
[mailto:openstack-bounces+guang.yee=hp@lists.launchpad.net]
On Behalf Of David Chadwick Sent: Tuesday, November 13, 2012 7:32
AM To: Adam Young Cc: OpenStack Development Mailing List;
openstack@lists.launchpad.net Subject: Re: [Openstack]
[openstack-dev] Fwd: [keystone] Tokens representing authorization
to projects/tenants in the Keystone V3 API

Hi Adam

you have pointed out an important difference between an unscoped
token and a scoped one. The former can only be used with
keystone, the latter with a cloud service. This also implies that
a scoped token can only have the scope of a single service, and
not multiple services. The user must swap the unscoped token for
a set of scoped tokens if he wishes to access a set of cloud
services.

This model is clean and consistent.

Concerning your attack scenario, then the best point of attack is
either the client (steal his token(s)) or Keystone (get access to
any service)

regards

David

On 13/11/2012 14:38, Adam Young wrote:

On 11/10/2012 10:58 AM, David Chadwick wrote:

I agree with the vast majority of what Jorge says below. The
idea I would like to bounce around is that of the unscoped
token.

What does it mean conceptually? What is its purpose? Why do
we need it? Why should a user be given an unscoped token to
exchange at a later time for a scoped token?

My view is as follows: i) a user is authenticated and
identified, and from this, keystone can see that the user has
access to a number of different tenants and services.
Keystone creates an unscoped token to encapsulate this. Note
that the unscoped token is scoped to the services/tenants
available to this user, and consequently it is different for
each identified user. Thus it does have some scope i.e. it
cannot be swapped for access to any service by any tenant.
ii) the user must choose which service/tenant he wishes to
activate. This is in line with the principle of least
privileges. iii) the user informs keystone which service(s)
and tenant(s) he wishes to access and Keystone swaps the
unscoped token for one that is scoped to the choice of the
user.

The issue then becomes, what is the allowable scope of a
scoped token? Jorge below believes it should cover multiple
services/endpoints/tenants. So one must then ask, what is
the difference between the most widely scoped scoped-token
and the unscoped token? Surely they will have the same scope
won't they? In which case there is no need for both
concepts.


let's compare with Kerberos:  In my view an unscoped token is
comparable with a ticket granting ticket:  it cannot be used
with any service other

Re: [Openstack] [Tempest] unable to run subset of tests via nosetests

2012-11-11 Thread David Kranz
ServerActionsTestBase is not the test class. You have to use 
ServerActionsTestJSON (or XML).
Look at the bottom of 
https://github.com/openstack/tempest/blob/master/tempest/tests/compute/servers/test_server_actions.py


 -David
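
(i.e. something along the lines of

nosetests -sv tempest/tests/compute/servers/test_server_actions.py:ServerActionsTestJSON.test_reboot_server_hard

should pick up just that one test.)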

On 11/9/2012 8:04 PM, Stef T wrote:


Hey Ravi,
Cool, and how do you run (say) only test_reboot_server_hard from 
ServerActionsTestBase ?


Whenever I try, I get;

stack@DevStack:~/tempest$ nosetests -sv 
tempest.tests.compute.servers.test_server_actions.py:ServerActionsTestBase.test_reboot_server_hard

The server should be power cycled ... ERROR

==
ERROR: The server should be power cycled
--
Traceback (most recent call last):
  File /usr/lib/pymodules/python2.7/nose/case.py, line 371, in setUp
try_run(self.inst, ('setup', 'setUp'))
  File /usr/lib/pymodules/python2.7/nose/util.py, line 478, in try_run
return func()
  File 
/opt/stack/tempest/tempest/tests/compute/servers/test_server_actions.py, 
line 39, in setUp

resp, server = self.client.create_server(self.name,
AttributeError: 'ServerActionsTestBase' object has no attribute 'client'
  begin captured logging  
tempest.config: INFO: Using tempest config file 
/opt/stack/tempest/etc/tempest.conf

tempest.tests.compute: DEBUG: Entering tempest.tests.compute.setup_package
-  end captured logging  -


On 11/09/2012 07:16 PM, Venkatesan, Ravikumar wrote:


Test_server_actions.py

~/openstack_projects/tempest$ nosetests -sv 
tempest/tests/compute/servers/test_server_actions.py


The server's password should be set to the provided password ... 
SKIP: Change password not available.


Negative Test: The server reboot on non existent server should return 
... ok


The server should be power cycled ... ok

The server should be signaled to reboot gracefully ... SKIP: Until 
bug 1014647 is dealt with.


Negative test: The server rebuild for a non existing server should 
not ... ok


The server should be rebuilt using the provided image and data ... ok

The server's RAM and disk space should be modified to that of ... 
SKIP: Resize not available.


The server's RAM and disk space should return to its original ... 
SKIP: Resize not available.


The server's password should be set to the provided password ... 
SKIP: Change password not available.


Negative Test: The server reboot on non existent server should return 
... ok


The server should be power cycled ... ok

The server should be signaled to reboot gracefully ... SKIP: Until 
bug 1014647 is dealt with.


Negative test: The server rebuild for a non existing server should 
not ... ok


The server should be rebuilt using the provided image and data ... ok

The server's RAM and disk space should be modified to that of ... 
SKIP: Resize not available.


The server's RAM and disk space should return to its original ... 
SKIP: Resize not available.


--

Ran 16 tests in 127.553s

OK (SKIP=8)

Regards,

Ravi

*From:*openstack-bounces+ravikumar.venkatesan=hp@lists.launchpad.net 
[mailto:openstack-bounces+ravikumar.venkatesan=hp@lists.launchpad.net] 
*On Behalf Of *Venkatesan, Ravikumar

*Sent:* Friday, November 09, 2012 4:13 PM
*To:* Stef T; openstack@lists.launchpad.net
*Subject:* Re: [Openstack] [Tempest] unable to run subset of tests 
via nosetests


To run a single test from Tempest:

~/openstack_projects/tempest$ nosetests -sv 
tempest/tests/compute/flavors/test_flavors.py


The expected flavor details should be returned ... ok

Ensure 404 returned for non-existant flavor ID ... ok

flavor details are not returned for non existant flavors ... ok

List of all flavors should contain the expected flavor ... ok

The detailed list of flavors should be filtered by disk space ... ok

The detailed list of flavors should be filtered by RAM ... ok

Only the expected number of flavors (detailed) should be returned ... ok

The list of flavors should start from the provided marker ... ok

The list of flavors should be filtered by disk space ... ok

The list of flavors should be filtered by RAM ... ok

Only the expected number of flavors should be returned ... ok

The list of flavors should start from the provided marker ... ok

Detailed list of all flavors should contain the expected flavor ... ok

The expected flavor details should be returned ... ok

Ensure 404 returned for non-existant flavor ID ... ok

flavor details are not returned for non existant flavors ... ok

List of all flavors should contain the expected flavor ... ok

The detailed list of flavors should be filtered by disk space ... ok

The detailed list of flavors should be filtered by RAM ... ok

Only the expected number of flavors (detailed) should be returned ... ok

The list of flavors should start from the provided marker

Re: [Openstack] [openstack-dev] Fwd: [keystone] Tokens representing authorization to projects/tenants in the Keystone V3 API

2012-11-10 Thread David Chadwick
I agree with the vast majority of what Jorge says below. The idea I 
would like to bounce around is that of the unscoped token.


What does it mean conceptually? What is its purpose? Why do we need it? 
Why should a user be given an unscoped token to exchange at a later time 
for a scoped token?


My view is as follows:
i) a user is authenticated and identified, and from this, keystone can 
see that the user has access to a number of different tenants and 
services. Keystone creates an unscoped token to encapsulate this. Note 
that the unscoped token is scoped to the services/tenants available to 
this user, and consequently it is different for each identified user. 
Thus it does have some scope, i.e. it cannot be swapped for access to any 
service by any tenant.
ii) the user must choose which service/tenant he wishes to activate. 
This is in line with the principle of least privilege.
iii) the user informs keystone which service(s) and tenant(s) he wishes 
to access and Keystone swaps the unscoped token for one that is scoped 
to the choice of the user.
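
In python-keystoneclient terms, that i)-iii) exchange looks roughly like the
sketch below (my own illustration against the v2.0 API; the user name,
password and tenant here are invented):

# Rough sketch of the unscoped -> scoped exchange (placeholder credentials).
from keystoneclient.v2_0 import client as ksclient

# i) authenticate without naming a tenant -> unscoped token
unscoped = ksclient.Client(username='alice', password='secret',
                           auth_url='http://keystone:5000/v2.0')

# ii) the user picks one of the tenants visible to them
tenants = unscoped.tenants.list()

# iii) swap the unscoped token for one scoped to the chosen tenant
scoped = ksclient.Client(token=unscoped.auth_token,
                         tenant_name=tenants[0].name,
                         auth_url='http://keystone:5000/v2.0')
print(scoped.auth_token)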


The issue then becomes, what is the allowable scope of a scoped token? 
Jorge below believes it should cover multiple 
services/endpoints/tenants. So one must then ask, what is the difference 
between the most widely scoped scoped-token and the unscoped token? 
Surely they will have the same scope won't they? In which case there is 
no need for both concepts.


Comments please

regards

David

On 23/10/2012 06:25, Jorge Williams wrote:

Here's my view:

On making the default token a configuration option:  Like the idea.
  Disabling the option by default.  That's fine too.

On scoping a token to a specific endpoint:  That's fine, though I
believe that that's in the API today.  Currently, the way that we scope
tokens to endpoints is by validating against the service catalog. I'm
not sure if the default middleware checks for this yet, but the Repose
middleware does.  If you try to use a token in an endpoint that's not in
the service catalog the request fails -- well, if the check is turned on.
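
In rough terms, the check being described is just a membership test against
the token's service catalog; a minimal sketch (with an invented catalog
entry, not Repose's or the default middleware's actual code):

# Sketch only: reject requests whose endpoint is not in the service catalog.
def endpoint_allowed(service_catalog, public_url):
    for service in service_catalog:
        for endpoint in service.get('endpoints', []):
            if endpoint.get('publicURL') == public_url:
                return True
    return False

catalog = [{'type': 'compute',
            'endpoints': [{'publicURL': 'http://nova.example.com:8774/v2'}]}]
print(endpoint_allowed(catalog, 'http://nova.example.com:8774/v2'))   # True
print(endpoint_allowed(catalog, 'http://rogue.example.com:8774/v2'))  # False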

Obviously, I'd like the idea of scoping a single token to multiple
tenants / endpoints.

I don't like the idea of calling tokens sloppy tokens -- it's
confusing.   All you have to say is that a token has a scope -- and the
scope of the token is the set of resources that the token can provide
access to.  You can limit the scope of a token to a tenant, to an
endpoint, to a set of endpoints or tenants, etc. -- what limits you place
on the scope of an individual token should be up to the operator.

Keep in mind that as we start digging into delegation and fine grained
authorization (after Grizzly, I'm sure), we'll end up with tokens that
have a scope of a subset of resources in a single or multiple tenants.
  So calling them sloppy now is just confusing.  Simply stating that a
token has a scope (as I've defined above) should suffice.  This is part
of the reason why I've never liked the term unscoped token, because an
unscoped token does have a scope. It just so happens that the scope of
that token is the resource that provides a list of available tenants.

-jOrGe W.

On Oct 22, 2012, at 9:57 PM, Adam Young wrote:


Are you guys +1'ing the original idea, my suggestion to make it
optional, or the fact that I think we should call these sloppy tokens?

On 10/22/2012 03:40 PM, Jorge Williams wrote:

+1 here too.

At the end of the day, we'd like the identity API to be flexible
enough to allow the token to be scoped in a manner that the deployer
sees fit.  What the keystone implementation does by default is a
different matter -- and disabling multiple tenant  scope by default
would be fine by me.

-jOrGe W.


On Oct 21, 2012, at 11:10 AM, Joe Savak wrote:


+1. ;)

So the issue is that the v2 API contract allows a token to be scoped
to multiple tenants. For v3, I'd like to have the same flexibility.
I don't see security issues, as if a token were to be sniffed you
can change the password of the account using it and use those creds
to scope tokens to any tenant you wish.

Scope should always be kept as limited as possible. Personally, I
don't feel like limiting the tenant list makes much difference.  The
more I think about it, the real benefit comes from limiting the
endpoints.






On Oct 20, 2012, at 21:07, Adam Young ayo...@redhat.com
mailto:ayo...@redhat.com wrote:


On 10/20/2012 01:50 PM, heckj wrote:

I sent this to the openstack-dev list, and thought I'd double post
this onto the openstack list at Launchpad for additional feedback.

-joe

Begin forwarded message:

*From: *heckj he...@mac.com mailto:he...@mac.com
*Subject: **[openstack-dev] [keystone] Tokens representing
authorization to projects/tenants in the Keystone V3 API*
*Date: *October 19, 2012 1:51:16 PM PDT
*To: *OpenStack Development Mailing List
openstack-...@lists.openstack.org
mailto:openstack-...@lists.openstack.org
*Reply-To: *OpenStack Development Mailing List
openstack-...@lists.openstack.org

Re: [Openstack] how to use extra_specs??

2012-11-06 Thread David Kang


 Victor,

 You raised a very good point.
If you want to use any existing flag, what Razique said is OK.
But if you want to add new key/value pairs, I don't think the current 
nova-compute can do that.
Especially if you are using the current nova/virt/libvirt as the compute driver, 
you cannot.

 In our bare-metal provisioning effort, our compute driver nova/virt/bm defines 
a new flag
called instance_type_extra_specs in /etc/nova/nova.conf.
Any key value pairs can be specified there.
For example, for the TileEmpower board we declare the flag as follows:

instance_type_extra_specs=cpu_arch:tilepro64
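
On the flavor side, the matching key ends up in the flavor's extra_specs; a
rough sketch of setting it with python-novaclient (credentials and flavor
name are placeholders, not taken from our bare-metal patches):

# Attach the matching extra_specs key to a flavor (placeholder values).
from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin', 'http://keystone:5000/v2.0')
flavor = nova.flavors.find(name='bm.tilepro64')
flavor.set_keys({'cpu_arch': 'tilepro64'})  # stored in instance_type_extra_specs
print(flavor.get_keys())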

 Our bare-metal provisioning code is still in review.

 David

- Original Message -
 Yes sure,
 
 
 
 But in this example you use an existing key “free_ram_mb”?! Is that
 right?
 
 
 
 What I want to do is to create my own key/value pairs for compute
 nodes (how can I do that?) and for the extra_specs variable of the
 flavors
 
 
 
 Best Regards
 
 
 
 Viktor
 
 
 
 
 
 From: Razique Mahroua [mailto:razique.mahr...@gmail.com]
 Sent: Tuesday, November 06, 2012 11:49 AM
 To: Mauch, Viktor (SCC)
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] how to use extra_specs??
 
 
 
 You can also use the cli, does the job too
 
 
 
 
 
 nova boot --flavor 1 --image 1f3fbdde-4c8a-4b3b-9cf1-a3b9fd0f1d9e
 --key_name key1 --hint query='[>=, free_ram_mb, 1024]' vm1
 
 
 
 
 
 Nuage & Co - Razique Mahroua
 
 
 razique.mahr...@gmail.com
 
 
 
 
 
 
 
 
 Le 6 nov. 2012 à 11:46, Mauch, Viktor (SCC)  ma...@kit.edu  a
 écrit :
 
 
 
 
 
 
 
 Or being more specific,
 
 
 
 
 
 How can I add key/value pairs to a compute node???
 
 
 So I can check the extra_specs data with the key/value pairs of the
 compute node?
 
 
 
 
 
 Cheers Viktor
 
 
 
 
 
 
 
 From: openstack-bounces+mauch=kit@lists.launchpad.net
 [mailto:openstack- bounces+mauch=kit@lists.launchpad.net ] On
 Behalf Of Mauch, Viktor (SCC)
 Sent: Tuesday, November 06, 2012 11:37 AM
 To: openstack@lists.launchpad.net
 Subject: Re: [Openstack] how to use extra_specs??
 
 
 
 
 
 Just one more noob question.
 
 
 
 
 
 Is it normal that if I set a key/value pair, let’s say
 {“my_special_key”:”my_special_value”}, on an existing flavor, the
 scheduler fails to find a host for an instance of this flavor?
 
 
 
 
 
 (I use devstack with folsom stable code)
 
 
 
 
 
 Cheers Viktor
 
 
 
 
 
 
 
 
 From: Vinay Bannai [ mailto:vban...@gmail.com ]
 Sent: Tuesday, November 06, 2012 5:09 AM
 To: Mauch, Viktor (SCC)
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] how to use extra_specs??
 
 
 
 
 
 The simplest way would be to create key/value pairs for flavor types
 (instance types).
 
 
 
 This information would be stored in a separate table in the nova db
 (instance_type_extra_specs) and would go along with the instance type.
 
 
 
 
 
 
 
 Once it is in the database, you can use this information to customize
 all kinds of things, like the nova scheduler, or additional data that can
 be passed to the instance at creation time. This is the
 high-level overview. If you search the mailing list archives you will
 find some additional discussion about this topic.
 
 
 
 
 
 
 Vinay
 
 
 
 On Mon, Nov 5, 2012 at 5:57 PM, Mauch, Viktor (SCC)  ma...@kit.edu 
 wrote:
 
 
 
 Hi guys,
 
 
 
 
 
 can anyone tell me (with an example) how to use the extra_specs
 variable for an instance_type??
 
 
 
 
 
 Best Regards
 
 
 
 
 
 Viktor
 
 
 
 
 
 
 
 
 
 
 
 
 --
 Vinay Bannai
 Email: vban...@gmail.com
 Google Voice: 415 938 7576
 


[Openstack-qa-team] Need to change mailing list server?

2012-11-01 Thread David Kranz
There is now a full tempest run going daily and reporting failures to 
this list. But that won't work because
jenkins and gerrit cannot be launchpad members. According to the ci 
folks, others have dealt with this
by moving their mailing lists to lists.openstack.org. Perhaps we should 
do the same? We need to do something in any event.

 -David



Re: [Openstack] [keystone] Domain Name Spaces

2012-10-30 Thread David Chadwick

On 27/10/2012 00:17, Henry Nash wrote:

So to pick up on a couple of the areas of contention:

a) Roles.  I agree that role names must stay globally unique.  One way
of thinking about this is that it is not actually keystone that is
creating the role name space it is the other services (Nova etc.) by
specifying roles in their policy files.  Until those services support
domain specific segmentation, then role names stay global.


I addressed this issue in my Federation design doc (in Appendix 2). Here 
is the text to save you having to look it up (note that an attribute is 
simply a generalisation of role and is needed in the broader authz 
context. Roles are too limiting.)


Attributes may be globally defined, e.g. visa attributes, or locally 
defined e.g. member of club X. Globally defined attributes are often 
specified in international standards and may be used in several 
different domains and federations. Their syntax and semantics are fixed, 
regardless of which Attribute Authority (AA) issues them. Local 
attributes are defined by their issuing attribute authority and usually 
are only valid in the domain or federation in which the AA is a member. 
For locally identifiable attributes the attribute authority (issuer) 
must be globally identifiable (in the federation). The attribute then 
becomes globally identifiable through hierarchical naming (AA.attribute).


Whilst in a non-federated world the service provider (e.g. Swift) can 
unilaterally define the roles it wants, in a federated world the 
attributes have to be mutually agreed between the issuer (AA) and the 
consumer (e.g. Swift).


To address this issue I proposed a role mapping (attribute mapping) 
service that is run by Keystone, and it maps between the role/attribute 
required by the service, and the actual attribute issued by the AA. For 
example, say Swift requires the role of Admin to be assigned to 
administrators, whereas company X, the attribute authority, assigns the 
LDAP attribute title=OpenStack Cloud Administrator to its admin staff. 
Keystone will use its attribute mapping service to map between these values.
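
A purely illustrative sketch of such a mapping table (nothing like this
exists in Keystone today; the issuer, attribute and role names are invented):

# Map (issuer, attribute, value) triples to the role a service expects.
ATTRIBUTE_MAP = {
    ('company-x-ldap', 'title', 'OpenStack Cloud Administrator'): 'Admin',
    ('company-x-ldap', 'memberOf', 'cloud-operators'): 'Member',
}

def map_attribute(issuer, name, value):
    # Return the service-side role for an issuer-side attribute, if any.
    return ATTRIBUTE_MAP.get((issuer, name, value))

print(map_attribute('company-x-ldap', 'title', 'OpenStack Cloud Administrator'))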




b) Will multi-domains make it more complicated in terms of authorisation
- e.g. will the users have to input a Domain Name into Horizon the whole
time?  The first thing I would say is that if the cloud administrator
has created multiple domains, then the keystone API should indeed require
the domain specification.


Again, in our federated design document we have the concept of a realm, 
which is similar to that of a domain, only in the federated case it 
indicates the place where the user will be authenticated and obtain 
(some of) his authz attributes from. The user can indicate the 
realm/domain name on the command line, but if it is missing, Keystone 
replies with a list of domains that it knows about and asks the user to 
choose one from the list.


 However, that should not mean it should be

laborious for a Horizon user.  In the case where a Cloud Provider has
created domains to encapsulate each of their customers - then if they
want to let those customers use horizon as the UI, then I would think
they want to be able to give each customer a unique URL which will point
to a Horizon that knows which domain to go to.


this is certainly a possibility.

regards

David

  Maybe the url contains

the Domain Name or ID in the path, and Horizon pulls this out of its own
url (assuming that's possible) and hence the user is never given an
option to choose a domain.  A Cloud Admin would use a non domain
qualified url to get to Horizon (basically as it is now) and hence be
able to see the different domains.  Likewise, in the case of where the
Cloud Provider has not chosen to create any individual domains (and is
just running the cloud in the default domain), then the  non domain
qualified url would be used to a Horizon that only showed one, default
domain and hence no choice is required.


Henry

On 26 Oct 2012, at 17:31, heckj wrote:


Bringing conversation for domains in Keystone to the broader mailing
lists.


On Oct 26, 2012, at 5:18 AM, Dolph Mathews dolph.math...@gmail.com
mailto:dolph.math...@gmail.com wrote:

I think this discussion would be great for both mailing lists.

-Dolph


On Fri, Oct 26, 2012 at 5:18 AM, Henry Nash henry.n...@mac.com
mailto:henry.n...@mac.com wrote:

Hi

Not sure where best to have this discussion - here, as a comment
to the v3api doc, or elsewhere - appreciate some guidance and
will transfer this to the right place

At the Summit we started a discussion on whether things like user
name, tenant name etc. should be globally unique or unique within
a domain.  I'd like to widen that discussion to try and a) agree
a direction, b) agree some changes to our current spec. Here's my
view as an opening gambit:

- When a Keystone instance is first started, there is only one,
default, Domain.  The Cloud Provider does not need to create any
new domains

Re: [Openstack] [keystone] Re: Domain Name Spaces

2012-10-30 Thread David Chadwick

On 26/10/2012 17:31, heckj wrote:

Bringing conversation for domains in Keystone to the broader mailing lists.


On Oct 26, 2012, at 5:18 AM, Dolph Mathews dolph.math...@gmail.com
mailto:dolph.math...@gmail.com wrote:

I think this discussion would be great for both mailing lists.

-Dolph


On Fri, Oct 26, 2012 at 5:18 AM, Henry Nash henry.n...@mac.com
mailto:henry.n...@mac.com wrote:

Hi

Not sure where best to have this discussion - here, as a comment
to the v3api doc, or elsewhere - appreciate some guidance and will
transfer this to the right place

At the Summit we started a discussion on whether things like user
name, tenant name etc. should be globally unique or unique within
a domain.  I'd like to widen that discussion to try and a) agree a
direction, b) agree some changes to our current spec. Here's my
view as an opening gambit:

- When a Keystone instance is first started, there is only one,
default, Domain.  The Cloud Provider does not need to create any
new domains, all projects can exist in this default domain, as
will the users etc.  There is one, global, name space.  Clients
using the v2 API will work just fine.


+1


Very much what we were thinking for the initial implementation and
rollout to make it backwards compatible with the V2 (non-domain) core API


- If the Cloud Provider wants to provide their customers with
regions they can administer themselves and be self-contained, then
they create a Domain for each customer.  It should be possible for
users/roles to be scoped to a Domain so that (effectively)
administrative duties can be delegated to some users in that
Domain.  So far so good - all this can be done with the v3 API.


Not clear on if you're referring to endpoint regions, or just
describing domain isolation?


I believe you're describing the key use cases behind the domains
mechanism to begin with - user and project partitioning to allow for
administration of those to be clearly owned and managed appropriately.



- We still have work to do to make sure items in other OS projects
that reference tenants (e.g. Images) can take a Domain or Project
ID, but we'll get to that soon enough


Everything will continue to work with projects, but once middleware
starts providing a DOMAIN_ID and DOMAIN_NAME to the underlying
service, it'll be up to them to take advantage of it. Images per
domain is an excellent example use case.



- However, Cloud Providers want to start enabling enterprise
customers to run more and more of the workloads in OpenStack
clouds - over and above the smaller-sized companies that are
doing this today.  For this to work, the encapsulation of a Domain
needs, I think, to be stricter - and this is where the
name space comes into play.  I think we need to allow for a Domain
to have its own namespace (i.e. users, roles, projects etc.) as an
option.  I see this as a first step to allowing each Domain to
have its own AuthZ/N service (e.g. external ldap owned and hosted
by the customer who will be using the Domain)

Implementation:

- A simplistic version would just allow a flag to be specified on
Domain creation that said whether this is a private or shared
Domain.  Shared would use the current global name space (and
probably be the default for compatibility reasons).


I like the direction of this -- need to digest implications :)


I like the idea conceptually - but let's be clear on the implications to
the end users:

Where we're starting is preserving a global name space for project names
and user names. Allowing a mix of segregated and global name spaces
imposes a burden of additional data being needed to uniquely place
authentication and authorization.

We've been keeping to 2 key pieces of info (username, password) to get
authenticated - and then (via CLI or Horizon dashboard) you can choose
from a list of protential projects and carry on. In most practical
circumstances, any user working primarily from the CLI is already
providing 3-4 pieces of information:

* username
* password
* tenant name
* auth_url


In fact these are all name/value pairs, so they can all be regarded as 
attribute names and values (or types and values in LDAP terminology).


The attribute names/types have to be globally unique. I think you have 
implicitly mandated this in Keystone by defining the names yourself, and 
by not allowing other names to be used. I presume that currently it 
would not be meaningful to pass a value of

* age
via the CLI. But it should be, since one might have an authz policy that 
bases its decision on the age of the user.


So how about considering a more generic interface where any attribute 
name and value can be passed, and the authz service will use these to 
see if they fit the policy or not.


regards

David



to access and use the cloud.

By allowing domains to be their own namespaces, we're adding

Re: [Openstack] [keystone] Domain Name Spaces

2012-10-30 Thread David Chadwick

Hi Gabriel

there is something of an oxymoron in one of your statements below: "By 
design, authentication will fail if they don't specify a domain (since 
you won't exist in the global domain)".


If the global domain is truly global then it should encompass all public 
and private (sub)domains. Otherwise it is not global.


It is quite easy to include private name spaces in a global name space 
by using hierarchical naming. Firstly ensure that domain names match the 
naming of the global name space. Secondly append the name of the private 
domain to that of the local name to turn the latter into a global name.


If you are familiar with Eduroam, the pan-European wireless 
authentication infrastructure, this is precisely what it does.


When I log in at Kent, I use my kent user id and password. When I log in 
to Eduroam from somewhere else in Europe (or even at kent) I use my kent 
user id and prepend @kent.ac.uk, and the infrastructure automatically 
routes my request and pw to the kent authentication server for 
validation (via Radius).


We should be considering this sort of federated feature (or something 
like it) for Keystone with domains


regards

David


On 30/10/2012 08:00, Henry Nash wrote:

Gabriel,

So I think you are right to ask that this is made clear and concrete -
I'll work with the core contributors of Keystone to make it so.

To your specific point:
- Let's call the initial Domain, the Global Domain, rather than the
default domain
- If the Cloud Provider doesn't explicitly create any domains, then
everything exists in the Global Domain.  There is no need to specify a
domain in any calls, since everything will default to the Global domain.
  The v2 API will work just fine (which knows nothing about domains)
- If they do create some domains, then they indicate (on creation)
whether each of these /share/ the namespace of the Global domain, or
have their own /private/ namespace.
- If all of these new domains were specified as /shared/ then all user
and tenant names are still globally unique.  A caller still does not
technically need to specify a domain, although scoping things down to a
domain (or of course project) is likely for most operations (just like
it is today)
- If, however, some of these new domains were specified as /private/
then any users who are part of a private domain must specify the domain
in order to authenticate.  By design, authentication will fail if they
don't specify a domain (since you won't exist in the global domain).
  Once a user in a private domain is authenticated, they are scoped to
that domain. [implementation: we need to work out whether the domainID
is encoded in the token - this is my assumption since this means the
Domain Name/ID is NOT required for subsequent requests, and
validation, by Keystone, can still be achieved]
- It is perfectly possible (but of course up to the Cloud Provider) to
support a mixture of /shared/ and /private/ domains (representing
different customer types), but the point being that the Cloud Provider
will tell their customers how they should access the system (i.e.
provide them with any domain specification that may or may not be required).

Very keen to hear other concerns you may have.

Henry
On 27 Oct 2012, at 21:22, Gabriel Hurley wrote:


There are various options for how Horizon can handle the UX problems
associated with adding additional domains. Making it a part of the URL
is one which could be supported, but I’m not inclined to make that the
only method. The implementation details can be hashed out when we get
there.
I am more concerned about the experience for CLI/API users; adding
more parameters they have to pass is quite unfriendly. And I have to
say that Keystone’s track record for handling “default” options has
been quite poor (see “default tenant”). The mixed support for lookups
via ID vs. name is also a mess. There needs to be consistency around
what is unique and in what scope (which is where this thread started).
So far I haven’t heard a concrete answer on that.
For example, if tenant uniqueness is scoped to a domain, and lookups
via tenant name are possible, and there’s a default domain… well
haven’t you just painted yourself into a corner where tenant names in
the default domain must be unique while names in any other domain need
not be? It’s these kinds of issues that really need to be thought through.
-Gabriel
*From:* openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net
[mailto:openstack-bounces+gabriel.hurley=nebula@lists.launchpad.net] *On Behalf Of* Adam Young
*Sent:* Friday, October 26, 2012 4:19 PM
*To:* Henry Nash
*Cc:* OpenStack Development Mailing List; openstack@lists.launchpad.net (openstack@lists.launchpad.net)
*Subject:* Re: [Openstack] [keystone] Domain Name Spaces
On 10/26/2012 07:17 PM, Henry Nash wrote

Re: [Openstack] Nova middleware for enabling CORS?

2012-10-30 Thread David Kranz

On 10/30/2012 12:43 PM, Renier Morales wrote:

Hello,

I'm wondering if someone has already created a nova paste 
filter/middleware for enabling Cross-Origin Resource Sharing (CORS), 
allowing a web page to access the openstack api from another domain. 
Any pointers out there?


Thanks,

-Renier




This https://review.openstack.org/#/c/6909/ was an attempt to add such 
middleware to swift. It is generic CORS support but seems
to have been rejected in favor of putting CORS support in swift directly, 
which was checked in last week:

https://github.com/openstack/swift/commit/74b27d504d310c70533175759923c21df158daf9

 -David


Re: [Openstack] new mailing list for bare-metal provisioning

2012-10-28 Thread David Kang

 I agree that a subject prefix is one way to go.
There are pros and cons of either approach.
However, when I asked a few of the people who showed interest in the bare-metal 
discussion,
a new mailing list was preferred by them.
And we thought a separate mailing list makes it easier for people to participate and 
for us to manage the discussion.

 We can discuss this issue again among the people who signed up for the new mailing 
list.

 Thanks,
 David
--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

- Original Message -
 David Kang wrote:
 
   Hello all,
 
   An openstack mailing list is created for the discussion of
   bare-metal provisioning.
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-baremetal
 
  Please join it if you are interested in participating in the
  discussion/collaboration
  on bare-metal provisioning.
 
 Hmm, any particular reason why you're not having those discussions on
 the development mailing-list instead ? That sounds a totally
 appropriate
 topic for that list... and the overlap between the two groups sounds
 pretty complete (B totally included in A).
 
 I would prefer if we didn't multiply the sublists for development
 subtopics and if we didn't force developers to subscribe to multiple
 lists just to keep informed on design discussions. That will avoid the
 subgroup coming up with a design that will be rejected by the larger
 group once it is submitted there.
 
 Why not use a subject prefix instead ? Like [baremetal] ?
 
 Regards,
 
 --
 Thierry Carrez (ttx)
 Release Manager, OpenStack
 


[Openstack] new mailing list for bare-metal provisioning

2012-10-26 Thread David Kang

 Hello all,

 An openstack mailing list is created for the discussion of bare-metal 
provisioning.
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-baremetal

 Please join it if you are interested in participating in the 
discussion/collaboration
on bare-metal provisioning.

 Thanks,
 David

--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI




[Openstack] HPC session during upcoming Design Summit

2012-10-11 Thread David Kang

 Hi all,

 We will have a design summit session during next week's OpenStack Summit in 
San Diego.
It'll be at 11:00 am Tuesday, October 16.
Its title is "Scheduler for HPC with OpenStack". 
We have asked to change the title to "HPC for OpenStack".

 We will cover a wide range of HPC topics for OpenStack, including
 
* HPC extension current state 
* Accelerator support 
* Baremetal 
* Networking 
     * IB
* HPFS (e.g. Lustre) 
* Scheduler extensions 
* Community Requests/Open Discussion 

 Please suggest other interesting topics and share your thoughts in the 
etherpad. 
(http://etherpad.openstack.org/GrizzlyHPC)

 Hope to see you there.

 Thanks,
 David

--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI



Re: [Openstack] Versioning for notification messages

2012-10-09 Thread David Ripton

On 10/09/2012 01:07 PM, Day, Phil wrote:


What do people think about adding a version number to the notification
systems, so that consumers of notification messages are protected to
some extent from changes in the message contents?

For example, would it be enough to add a version number to the messages
– or should we have the version number as part of the topic itself (so
that the notification system can provide both a 1.0 and 1.1 feed), etc.?


Putting a version number in the messages is easy, and should work fine. 
 Of course it only really helps if someone writes clients that can deal 
with multiple versions, or at least give helpful error messages when 
they get an unexpected version.
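
For concreteness, a rough sketch of a versioned message (the envelope keys
mirror typical notifications; the 'message_version' field and all of the
values are invented for illustration):

# Sketch only: a notification that carries its own version number.
notification = {
    'message_id': 'placeholder-id',
    'publisher_id': 'compute.host1',
    'event_type': 'compute.instance.create.end',
    'priority': 'INFO',
    'timestamp': '2012-10-09 17:07:00',
    'message_version': '1.1',
    'payload': {'instance_id': 'placeholder', 'state': 'active'},
}

def handle(msg):
    # Consumers can branch, or fail loudly, on versions they do not understand.
    major = msg.get('message_version', '1.0').split('.')[0]
    if major != '1':
        raise ValueError('unsupported notification version %s'
                         % msg.get('message_version'))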


I think using separate topics for each version would be inefficient and 
error-prone.


Inefficient because you'd have to send out multiples of each message, 
some of which would probably never be read.  Obviously, if you're 
sending out N copies of each message then you expect only 1/N the queue 
performance.  Worse, if you're sending out N copies of each message but 
only 1 of them is being consumed, your queue server is using a lot more 
memory than it needs to, to hold onto old messages that nobody needs. 
(If you properly configure a high-water mark or timeout, then the old 
messages will eventually be thrown away.  If you don't, then your queue 
server will eventually consume way too much memory and start swapping, 
your cloud will break, and someone will get paged at 2 a.m.)


Error-prone because someone would end up reusing the notification queue 
code for less idempotent/safe uses of queues, like internal API calls. 
And then client A would pick up the message from topic_v1, and client B 
would pick up the same message from topic_v2, and they'd both perform 
the same API operation, resulting in wasted resources in the best case 
and data corruption in the worst case.


--
David Ripton   Red Hat   drip...@redhat.com



Re: [Openstack] When will the distro (specifically Ubuntu) have package for Folsom release

2012-10-03 Thread David Kranz
I am really confused about this. There are two pages that suggest the 
cloud archive is ready to use:


http://blog.canonical.com/2012/09/14/now-you-can-have-your-openstack-cake-and-eat-it/
https://wiki.ubuntu.com/ServerTeam/CloudArchive

What they tell you to put in /etc/apt/sources.list is different, but 
both give errors like this after putting the lines in and doing 'apt-get update':

Reading package lists... Done
W: GPG error: http://ubuntu-cloud.archive.canonical.com 
precise-proposed/folsom Release: The following signatures couldn't be 
verified because the public key is not available: NO_PUBKEY 5EDB1B62EC4926EA
W: GPG error: http://ubuntu-cloud.archive.canonical.com 
precise-updates/folsom Release: The following signatures couldn't be 
verified because the public key is not available: NO_PUBKEY 5EDB1B62EC4926EA


Can anyone in the know explain what the real story is about this? Or am I 
just doing something wrong?


 -David



On 10/1/2012 1:20 PM, Nathanael Burton wrote:


From the release notes: 
http://wiki.openstack.org/ReleaseNotes/Folsom#Ubuntu_12.04_.2BAC8_Ubuntu_12.10


On Oct 1, 2012 1:17 PM, Matt Joyce matt.jo...@cloudscaling.com 
mailto:matt.jo...@cloudscaling.com wrote:


I am not sure indecently was the word you were looking for
there.  But I gather you are asking if Ubuntu is packaging folsom
on their own (as in it's not part of openstack).  So yes, Ubuntu
is packaging folsom on their own.  And I assume ubuntu will let
people know when they are done packaging.  They tend to be pretty
good about that sort of thing.

-Matt

On Mon, Oct 1, 2012 at 10:02 AM, Ahmed Al-Mehdi ah...@coraid.com
mailto:ah...@coraid.com wrote:

Hello,

Does anybody know when will the distros, specifically Ubuntu,
have packages for the OpenStack Folsom release.  Is this
effort done indecently of OpenStack by Ubuntu and the release
date will be mentioned on Ubuntu's website?

Regards,
Ahmed.




[Openstack-qa-team] Tempest gate is not working

2012-10-02 Thread David Kranz
As of late yesterday, the full tempest gate is running all tempest 
tests. Not surprisingly, there are some failures in the tests that have 
just started running. Most of the problems seem to be due to some recent 
change
in the keystone client but there may be others. We are working to get it 
back up as quickly as possible.


 -David



[Openstack-qa-team] Tempest gate situation

2012-10-01 Thread David Kranz
It was recently discovered that the gating job was not running tests 
with no @attr. One of the tests that was not being run as a result of 
this is broken in at least its XML component. It would be great if one 
of the folks who worked on the XML stuff could pick this up soon: 
https://bugs.launchpad.net/tempest/+bug/1059568.
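
For anyone unfamiliar with the tagging involved, a minimal sketch of how a
test picks up an @attr tag via nose's attrib plugin (the class and tag names
here are invented):

# Minimal illustration of nose attribute tagging.
import unittest
from nose.plugins.attrib import attr


class FlavorsSmokeTest(unittest.TestCase):

    @attr(type='smoke')
    def test_list_flavors(self):
        # A run selecting "nosetests -a type=smoke" picks this test up; a
        # test with no @attr at all is easy to leave out by accident.
        self.assertTrue(True)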


 -David



Re: [Openstack-qa-team] What is going on with test_server_*_ops?

2012-09-28 Thread David Kranz
This was the problem (trivial): https://review.openstack.org/#/c/13840/. 
Someone please review.

I am not sure when the behavior changed.

 -David

On 9/25/2012 10:59 AM, Dolph Mathews wrote:
That generally pops up when you're bypassing authentication using 
--endpoint & --token (no authentication == no service catalog).


Is it using old command line options to specify auth attributes, which 
were just removed in favor of --os-username, --os-password, etc?


https://github.com/openstack/python-keystoneclient/commit/641f6123624b6ac89182c303dfcb0459b28055a2 



-Dolph


On Tue, Sep 25, 2012 at 9:35 AM, Jay Pipes jaypi...@gmail.com 
mailto:jaypi...@gmail.com wrote:


On 09/25/2012 09:38 AM, David Kranz wrote:
 I heard from some of my team members that test_server_basic_ops and
 test_server_advanced_ops were failing and I can reproduce it with
 current devstack/tempest.
 Looking at the code it seems that the keystone Client object
does not
 have a service_catalog object like the error says. So why is
this not
 failing the tempest build?
 Looking at the transcript of a recent successful build I don't
see any
 evidence that this test is running but I don't know why that
would be.

   -David


==
 ERROR: test suite for class
 'tempest.tests.compute.test_server_basic_ops.TestServerBasicOps'

--
 Traceback (most recent call last):
File /usr/lib/python2.7/dist-packages/nose/suite.py, line
208, in run
  self.setUp()
File /usr/lib/python2.7/dist-packages/nose/suite.py, line
291, in setUp
  self.setupContext(ancestor)
File /usr/lib/python2.7/dist-packages/nose/suite.py, line
314, in
 setupContext
  try_run(context, names)
File /usr/lib/python2.7/dist-packages/nose/util.py, line
478, in
 try_run
  return func()
File /opt/stack/tempest/tempest/test.py, line 39, in setUpClass
  cls.manager = cls.manager_class()
File /opt/stack/tempest/tempest/manager.py, line 96, in
__init__
  self.image_client = self._get_image_client()
File /opt/stack/tempest/tempest/manager.py, line 138, in
 _get_image_client
  endpoint =
keystone.service_catalog.url_for(service_type='image',
 AttributeError: 'Client' object has no attribute 'service_catalog'

I wouldn't be surprised if this is due to a change in
python-keystoneclient.

Dolph, was anything changed recently that might have produced this
failure?

Thanks,
-jay








Re: [Openstack-qa-team] What is going on with test_server_*_ops?

2012-09-28 Thread David Kranz
Thanks, Jay. But this now confirms that test_server_basic_ops is not 
running in the gating job. But it does run when I do 'nosetests -v 
tempest' in my local environment. How could this be?

 -David

Nothing in the gate log, but this in my local:

test_001_create_keypair 
(tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok
test_002_create_security_group 
(tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok
test_003_boot_instance 
(tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok
test_004_wait_on_active 
(tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok
test_005_pause_server 
(tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok
test_006_unpause_server 
(tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok
test_007_suspend_server 
(tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok
test_008_resume_server 
(tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok
test_099_terminate_instance 
(tempest.tests.compute.test_server_basic_ops.TestServerBasicOps) ... ok




On 9/28/2012 12:12 PM, Jay Pipes wrote:

Approved and merged.

On 09/28/2012 11:51 AM, David Kranz wrote:

This was the problem (trivial)  https://review.openstack.org/#/c/13840/.
Some one please review.
I am not sure when the behavior changed.

  -David

On 9/25/2012 10:59 AM, Dolph Mathews wrote:

That generally pops up when you're bypassing authentication using
--endpoint  --token (no authentication == no service catalog).

Is it using old command line options to specify auth attributes, which
were just removed in favor of --os-username, --os-password, etc?

https://github.com/openstack/python-keystoneclient/commit/641f6123624b6ac89182c303dfcb0459b28055a2


-Dolph


 On Tue, Sep 25, 2012 at 9:35 AM, Jay Pipes jaypi...@gmail.com wrote:

 On 09/25/2012 09:38 AM, David Kranz wrote:
   I heard from some of my team members that test_server_basic_ops and
   test_server_advanced_ops were failing and I can reproduce it with
   current devstack/tempest.
   Looking at the code it seems that the keystone Client object
 does not
   have a service_catalog object like the error says. So why is
 this not
   failing the tempest build?
   Looking at the transcript of a recent successful build I don't
 see any
   evidence that this test is running but I don't know why that
 would be.
 
 -David
 
 
 ==
  ERROR: test suite for class
   'tempest.tests.compute.test_server_basic_ops.TestServerBasicOps'
 
 --
   Traceback (most recent call last):
  File /usr/lib/python2.7/dist-packages/nose/suite.py, line
 208, in run
self.setUp()
  File /usr/lib/python2.7/dist-packages/nose/suite.py, line
 291, in setUp
self.setupContext(ancestor)
  File /usr/lib/python2.7/dist-packages/nose/suite.py, line
 314, in
   setupContext
try_run(context, names)
  File /usr/lib/python2.7/dist-packages/nose/util.py, line
 478, in
   try_run
return func()
  File /opt/stack/tempest/tempest/test.py, line 39, in setUpClass
cls.manager = cls.manager_class()
  File /opt/stack/tempest/tempest/manager.py, line 96, in
 __init__
self.image_client = self._get_image_client()
  File /opt/stack/tempest/tempest/manager.py, line 138, in
   _get_image_client
endpoint =
 keystone.service_catalog.url_for(service_type='image',
   AttributeError: 'Client' object has no attribute 'service_catalog'

 I wouldn't be surprised if this is due to a change in
 python-keystoneclient.

 Dolph, was anything changed recently that might have produced this
 failure?

 Thanks,
 -jay












Re: [Openstack] Regarding Tempest for Integration testing of Openstack Environment

2012-09-27 Thread David Kranz

On 9/26/2012 2:55 AM, Girija Sharan wrote:

Hello all,

I am using Tempest *stable/essex*, not master. And in stable/essex 
there are far fewer tests compared to master. 
Would you please suggest which one I should use?


One important thing is that in the master version there are a couple of 
tests in the network directory but there are no such tests in 
stable/essex. Please explain a little bit about the purpose of these tests.


Actually I want to test Quantum networks. Will these tests in Tempest 
master be sufficient for that?


Thanks and Regards,
Girija Sharan Singh


Tempest stable/essex is tracking the stable/essex releases of the 
projects being tested. It is basically the state of tempest as of when 
essex was released with a few updates after that. The main line of 
tempest work since then has been on master which is why there are a lot 
more tests. You should use master. A number of people are working on 
tempest/quantum testing. There was a discussion a week or two ago based 
on this http://etherpad.openstack.org/quantum-tempest. I suggest you 
coordinate with those folks so as to not duplicate effort.


 -David


[Openstack-qa-team] Proposals for the Design Summit QA track

2012-09-27 Thread David Kranz
Folks, there are already a number of proposals for sessions that can be 
seen at http://summit.openstack.org/.  I will be reviewing them early 
next week so I encourage anyone who wants to lead a session, or has a 
topic that should be discussed, to make a proposal at that page. If you 
want to propose a session topic, but are not going to be at the summit, 
please contact me. Remember that these sessions are not presentations 
with slides, but are intended to be discussions of current or future 
QA-related work or processes.


 -David



Re: [Openstack-qa-team] PyVows proof of concept

2012-09-27 Thread David Kranz
We discussed this a bit at the meeting today. Monty has proposed a 
session on the QA track about parallelizing some of the CI stuff. He 
believes tempest could share the parallelization code. See 
http://summit.openstack.org/cfp/details/69.
Parallelizing the tempest gate job is as much of a CI issue as a tempest 
issue, and working with them and their proposal could make things much 
easier for us IMO.


 -David

On 9/27/2012 8:10 PM, Daryl Walleck wrote:

I agree on the issue with the output from generated tests. That is troublesome, 
but from what I've seen in the source code, probably something that could be 
remedied. It's also very generous in its parallel execution, which is fine 
client-side, but can overwhelm a test environment since there's no 
configuration to throttle back the number of tests being executed at a time. 
Unfortunately I haven't seen a Python test runner that meets all the criteria 
that I'd like to have, thus this and other little proof of concepts I've been 
tossing around to see if any better approaches are out there.

Daryl

From: openstack-qa-team-bounces+daryl.walleck=rackspace@lists.launchpad.net 
[openstack-qa-team-bounces+daryl.walleck=rackspace@lists.launchpad.net] on 
behalf of Jaroslav Henner [jhen...@redhat.com]
Sent: Monday, September 24, 2012 7:28 AM
To: openstack-qa-team@lists.launchpad.net
Subject: [Openstack-qa-team] PyVows proof of concept

In reply to:
https://lists.launchpad.net/openstack-qa-team/msg00236.html, which
didn't come to my mailbox for some reason (attachment?)

I tried pyVows myself. I kinda liked the concept, but I didn't like the
way it is reporting to JUnit format XML when using generative testing:
http://heynemann.github.com/pyvows/#-using-generative-testing

In Jenkins, it looked like:

Test Result : Add
-
should_be_numeric   0 msPassed
should_be_numeric   0 msPassed
should_be_numeric   0 msPassed
should_be_numeric   0 msPassed
should_be_numeric   0 msPassed
should_be_numeric   0 msPassed

The parameters to the testing method are important when using generative
testing, so I think they should be included in the name of the test. But
some funny characters like
()%* (I don't remember which)
are causing problems in Jenkins. I was investigating some problems with
them months ago with some other testing framework. I don't know how to
address this problem. It may be worth considering making a Robot
Framework output plugin if generative testing is needed, or using Robot
Framework

https://wiki.jenkins-ci.org/display/JENKINS/Robot+Framework+Plugin

J.H.



Re: [Openstack-qa-team] PyVows proof of concept

2012-09-27 Thread David Kranz

Agreed. Daryl, is there a list of these issues somewhere?



On 9/27/2012 10:54 PM, Daryl Walleck wrote:

I'm certainly all for anything that makes things easier. However, I do want to 
make sure that if we migrate runners, the new 
implementation solves all the issues we're trying to address.

Daryl

From: openstack-qa-team-bounces+daryl.walleck=rackspace@lists.launchpad.net 
[openstack-qa-team-bounces+daryl.walleck=rackspace@lists.launchpad.net] on 
behalf of David Kranz [david.kr...@qrclab.com]
Sent: Thursday, September 27, 2012 8:11 PM
To: openstack-qa-team@lists.launchpad.net
Subject: Re: [Openstack-qa-team] PyVows proof of concept

We discussed this a bit at the meeting today. Monty has proposed a
session on the QA track about parallelizing some of the CI stuff. He
believes tempest could share the parallelization code. See
http://summit.openstack.org/cfp/details/69.
Parallelizing the tempest gate job is as much of a ci issue as a tempest
issue and working with them, and their proposal, could make things much
easier for us IMO.

   -David

On 9/27/2012 8:10 PM, Daryl Walleck wrote:

I agree on the issue with the output from generated tests. That is troublesome, 
but from what I've seen in the source code, probably something that could be 
remedied. It's also very generous in it's parallel execution which is fine 
client-side, but can overwhelm a test environment since there's no 
configuration to throttle back the number of tests being executed at a time. 
Unfortunately I haven't seen a Python test runner that meets all the criteria 
that I'd like to have, thus this and other little proof of concepts I've been 
tossing around to see if any better approaches are out there.

Daryl

From: openstack-qa-team-bounces+daryl.walleck=rackspace@lists.launchpad.net 
[openstack-qa-team-bounces+daryl.walleck=rackspace@lists.launchpad.net] on 
behalf of Jaroslav Henner [jhen...@redhat.com]
Sent: Monday, September 24, 2012 7:28 AM
To: openstack-qa-team@lists.launchpad.net
Subject: [Openstack-qa-team] PyVows proof of concept

In reply to:
https://lists.launchpad.net/openstack-qa-team/msg00236.html, which
didn't came to my mailbox for some reason (attachment?)

I tried pyVows myself. I kinda liked the concept, but I didn't like the
way it is reporting to JUnit format XML when using generative testing:
http://heynemann.github.com/pyvows/#-using-generative-testing

In Jenkins, it looked like:

Test Result : Add
-
should_be_numeric   0 msPassed
should_be_numeric   0 msPassed
should_be_numeric   0 msPassed
should_be_numeric   0 msPassed
should_be_numeric   0 msPassed
should_be_numeric   0 msPassed

The parameters to the testing method are important when using generative
testing, so I think they should be included in the name of the test. But
some funny characters like
()%* I don't remember which
are causing problems in Jenkins. I was investigating some problems with
them months ago with some other testing framework. I don't know how to
address this problem. It may be worthy to consider making some Robot
framework outputs plugin if generative testing is needed, or use Robot
Framework

https://wiki.jenkins-ci.org/display/JENKINS/Robot+Framework+Plugin

J.H.



Re: [Openstack] Ubuntu Cloud Archive information

2012-09-25 Thread David Kranz

On 9/24/2012 9:38 PM, Chuck Short wrote:

Hi

On 12-09-24 07:39 PM, Sam Morrison wrote:

Hi,

I've started using the Ubuntu Cloud Archive packages for Folsom in 
Precise.
Haven't been able to find out much information about them so I'm 
asking here.


I've found the packages have quite a few bugs, e.g. [1]. So I'm trying to
figure out where to submit bugs for these and also where the sources
are for these packages so I can fix them.


You are doing it in the right place; please submit any bugs that you 
find in Launchpad.



Does anyone know anything about these packages?


What do you want to know?
Chuck, we have been testing a system with 
ppa:openstack-ubuntu-testing/folsom-trunk-testing and I have two questions:


1. When will the release version of Folsom be available using the method 
described in https://wiki.ubuntu.com/ServerTeam/CloudArchive?
2. Will it be possible to upgrade a system using the test ppa to the 
final release in the CloudArchive (and, if so, how)?


Thanks,

David




[Openstack-qa-team] What is going on with test_server_*_ops?

2012-09-25 Thread David Kranz
I heard from some of my team members that test_server_basic_ops and 
test_server_advanced_ops were failing and I can reproduce it with 
current devstack/tempest.
Looking at the code it seems that the keystone Client object does not 
have a service_catalog object like the error says. So why is this not 
failing the tempest build?
Looking at the transcript of a recent successful build I don't see any 
evidence that this test is running but I don't know why that would be.


 -David

==
ERROR: test suite for class 
'tempest.tests.compute.test_server_basic_ops.TestServerBasicOps'

--
Traceback (most recent call last):
  File /usr/lib/python2.7/dist-packages/nose/suite.py, line 208, in run
self.setUp()
  File /usr/lib/python2.7/dist-packages/nose/suite.py, line 291, in setUp
self.setupContext(ancestor)
  File /usr/lib/python2.7/dist-packages/nose/suite.py, line 314, in 
setupContext

try_run(context, names)
  File /usr/lib/python2.7/dist-packages/nose/util.py, line 478, in 
try_run

return func()
  File /opt/stack/tempest/tempest/test.py, line 39, in setUpClass
cls.manager = cls.manager_class()
  File /opt/stack/tempest/tempest/manager.py, line 96, in __init__
self.image_client = self._get_image_client()
  File /opt/stack/tempest/tempest/manager.py, line 138, in 
_get_image_client

endpoint = keystone.service_catalog.url_for(service_type='image',
AttributeError: 'Client' object has no attribute 'service_catalog'




Re: [Openstack-qa-team] What is going on with test_server_*_ops?

2012-09-25 Thread David Kranz

On 9/25/2012 10:35 AM, Jay Pipes wrote:

On 09/25/2012 09:38 AM, David Kranz wrote:

I heard from some of my team members that test_server_basic_ops and
test_server_advanced_ops were failing and I can reproduce it with
current devstack/tempest.
Looking at the code it seems that the keystone Client object does not
have a service_catalog object like the error says. So why is this not
failing the tempest build?
Looking at the transcript of a recent successful build I don't see any
evidence that this test is running but I don't know why that would be.

   -David

==
ERROR: test suite for class
'tempest.tests.compute.test_server_basic_ops.TestServerBasicOps'
--
Traceback (most recent call last):
File /usr/lib/python2.7/dist-packages/nose/suite.py, line 208, in run
  self.setUp()
File /usr/lib/python2.7/dist-packages/nose/suite.py, line 291, in setUp
  self.setupContext(ancestor)
File /usr/lib/python2.7/dist-packages/nose/suite.py, line 314, in
setupContext
  try_run(context, names)
File /usr/lib/python2.7/dist-packages/nose/util.py, line 478, in
try_run
  return func()
File /opt/stack/tempest/tempest/test.py, line 39, in setUpClass
  cls.manager = cls.manager_class()
File /opt/stack/tempest/tempest/manager.py, line 96, in __init__
  self.image_client = self._get_image_client()
File /opt/stack/tempest/tempest/manager.py, line 138, in
_get_image_client
  endpoint = keystone.service_catalog.url_for(service_type='image',
AttributeError: 'Client' object has no attribute 'service_catalog'

I wouldn't be surprised if this is due to a change in python-keystoneclient.

Dolph, was anything changed recently that might have produced this failure?

Thanks,
-jay

That is probably so but even when that is verified, I am still concerned 
that I see this with a fresh checkout of devstack/tempest but this is 
not failing jenkins runs. How could that happen?

  -David



[Openstack] Logging doesn't produce anything

2012-09-11 Thread David Krider
I've followed the instructions for setting logging using the local0 and local1 
facilities. I've modified rsyslogd's config to add those logs. I get the 
startup info for each thread, but that's it. Is there a common oversight in the 
extant docs that's leaving out a piece of crucial info? I want LOTS of detail 
to try to figure out why my (3rd-party) client isn't connecting.

Thanks,
dk
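
For anyone who hits the same thing, here is a minimal sketch of what logging to the local0 facility means at the Python level (illustrative only, not the OpenStack logging setup itself; the usual reason for seeing only the startup lines is that the debug level is not enabled):

import logging
from logging.handlers import SysLogHandler

# Send records to the local syslog socket under the local0 facility; rsyslog
# can then route them, e.g. with a rule like "local0.*  /var/log/local0.log"
# (file name assumed).
handler = SysLogHandler(address='/dev/log',
                        facility=SysLogHandler.LOG_LOCAL0)
handler.setFormatter(logging.Formatter('demo: %(levelname)s %(message)s'))

log = logging.getLogger('demo')
log.addHandler(handler)

log.setLevel(logging.INFO)
log.info('startup info')          # reaches syslog at INFO
log.debug('per-request detail')   # filtered out before syslog ever sees it

log.setLevel(logging.DEBUG)
log.debug('per-request detail')   # now it shows up in the local0 log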
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-hpc] [HPC] Reminder monthly telecon Sep. 10

2012-09-10 Thread David Kang

 Hello,

 We will use a different webex host today because of a technical problem.
Sorry about that.

 David

--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

- Original Message -
 I am guessing your intent is to determine the maximum available
 bandwidth and lowest-latency (commonly implemented as least-hops) path
 between hosts. On other platforms there is the notion of Cell, Zone,
 Row, Rack, etc., where the host that you are running your workload on
 has the topology encoded in the host meta information itself.
 
 
 In instances where this is not encoded within some sort of meta of the
 host, either shortest path first or constrained shortest path first can
 be run to determine the network topology, which can then be distributed to
 the nodes. The challenge here is that it is really hard to take into
 account available bandwidth between nodes vs. hops.
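
A minimal hop-count illustration of the shortest-path-first idea above (toy adjacency map with made-up host and switch names; a bandwidth-aware CSPF would also need link weights, which is the hard part mentioned):

from collections import deque

def least_hops(links, src, dst):
    # Breadth-first search returns the least-hops path, or None if unreachable.
    seen, queue = {src}, deque([(src, [src])])
    while queue:
        node, path = queue.popleft()
        if node == dst:
            return path
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

links = {'rack1-host1': ['tor1'], 'rack1-host2': ['tor1'],
         'tor1': ['rack1-host1', 'rack1-host2', 'agg'],
         'rack2-host1': ['tor2'], 'tor2': ['rack2-host1', 'agg'],
         'agg': ['tor1', 'tor2']}

print(least_hops(links, 'rack1-host1', 'rack2-host1'))
# ['rack1-host1', 'tor1', 'agg', 'tor2', 'rack2-host1']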
 
 
 
 
 
 
 Regards,
 
 Colin
 
 If you would like to schedule a time to speak with me, please click
 here to see my calendar and pick a time that works for your schedule.
 The system will automatically send us both an outlook meeting invite.
 Colin McNamara
 (858)208-8105
 CCIE #18233,VCP
 http://www.colinmcnamara.com
 http://www.linkedin.com/in/colinmcnamara
 
 The difficult we do immediately, the impossible just takes a little
 longer
 
 
 
 
 
 
 
 On Sep 7, 2012, at 9:54 AM, Joseph Suh  j...@isi.edu  wrote:
 
 
 All,
 
 I have a blue print on proximity scheduler at
 http://wiki.openstack.org/ProximityScheduler , and would like to get
 feedback on it.
 
 Thanks,
 
 Joseph
 
 - Original Message -
 From: John Paul Walters  jwalt...@isi.edu 
 To: openstack@lists.launchpad.net
 Cc: openstack-...@lists.openstack.org
 Sent: Friday, September 7, 2012 12:12:20 PM
 Subject: [openstack-hpc] [HPC] Reminder monthly telecon Sep. 10
 
 
 Hi,
 
 
 This is a reminder that we'll hold our next monthly HPC telecon this
 coming Monday, Sep. 10, at 12:00 noon Eastern Time. We'll use webex
 (details below). The agenda is somewhat open. Our default will be to
 start the conversation about HPC features that folks are interested in
 adding to the Grizzly release. If anyone has any other specific agenda
 items, they're welcome to propose them.
 
 
 I'm unable to attend, so my colleague David Kang will be hosting this
 meeting. We look forward to talking to you!
 
 
 best,
 JP
 
 
 John Paul Walters invites you to attend this online meeting.
 
 Topic: HPC Monthly Telecon
 Date: Monday, September 10, 2012
 Time: 12:00 pm, Eastern Daylight Time (New York, GMT-04:00)
 Meeting Number: 927 246 497
 Meeting Password: hpcmonthly
 
 
 ---
 To join the online meeting (Now from mobile devices!)
 ---
 1. Go to
 https://openstack.webex.com/openstack/j.php?ED=203524102UID=1431607857PW=NYzljOTEwYThjRT=MiMxMQ%3D%3D
 2. If requested, enter your name and email address.
 3. If a password is required, enter the meeting password: hpcmonthly
 4. Click Join.
 
 To view in other time zones or languages, please click the link:
 https://openstack.webex.com/openstack/j.php?ED=203524102UID=1431607857PW=NYzljOTEwYThjORT=MiMxMQ%3D%3D
 
 
 ___
 OpenStack-HPC mailing list
 openstack-...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-hpc
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp
 
 
 ___
 OpenStack-HPC mailing list
 openstack-...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-hpc

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-hpc] [HPC] Reminder monthly telecon Sep. 10

2012-09-10 Thread David Kang

 While setting up webex, please call
1-866-528-2256; code: 3289628

for the audio conference.

 Thanks,
 David


--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

- Original Message -
 Hello,
 
 We will use different webex host today because of some technical
 problem.
 Sorry for that.
 
 David
 
 --
 Dr. Dong-In David Kang
 Computer Scientist
 USC/ISI
 
 - Original Message -
  I am guessing your intent is to determine the maximum available
  bandwidth and lowest latency (commonly implemented as least hops)
  path
  between hosts. In other platforms there is the notion of Cell, Zone,
  Row, Rack etc where the host that you are running your workload has
  the topology encoded in the host meta information itself.
 
 
  In instances where this is not encoded within some sort of meta of
  the
  host either shortest path first or constrained shortest path first
  can
  be run to determine the network, topology and either distributed to
  the nodes. The challenge here is that it is really hard to take into
  account available bandwidth between nodes vs hops.
 
 
 
 
 
 
  Regards,
 
  Colin
 
  If you would like to schedule a time to speak with me, please click
  here to see my calendar and pick a time that works for your
  schedule.
  The system will automatically send us both an outlook meeting
  invite.
  Colin McNamara
  (858)208-8105
  CCIE #18233,VCP
  http://www.colinmcnamara.com
  http://www.linkedin.com/in/colinmcnamara
 
  The difficult we do immediately, the impossible just takes a little
  longer
 
 
 
 
 
 
 
  On Sep 7, 2012, at 9:54 AM, Joseph Suh  j...@isi.edu  wrote:
 
 
  All,
 
  I have a blue print on proximity scheduler at
  http://wiki.openstack.org/ProximityScheduler , and would like to get
  feedback on it.
 
  Thanks,
 
  Joseph
 
  - Original Message -
  From: John Paul Walters  jwalt...@isi.edu 
  To: openstack@lists.launchpad.net
  Cc: openstack-...@lists.openstack.org
  Sent: Friday, September 7, 2012 12:12:20 PM
  Subject: [openstack-hpc] [HPC] Reminder monthly telecon Sep. 10
 
 
  Hi,
 
 
  This is a reminder that we'll hold our next monthly HPC telecon this
  coming Monday, Sep. 10 and 12:00 noon Eastern Time. We'll use webex
  (details below). The agenda is somewhat open. Our default will be to
  start the conversation about HPC features that folks are interested
  in
  adding to the Grizzly release. If anyone has any other specific
  agenda
  items, they're welcome to propose them.
 
 
  I'm unable to attend, so my colleague David Kang will be hosting
  this
  meeting. We look forward to talking to you!
 
 
  best,
  JP
 
 
  John Paul Walters invites you to attend this online meeting.
 
  Topic: HPC Monthly Telecon
  Date: Monday, September 10, 2012
  Time: 12:00 pm, Eastern Daylight Time (New York, GMT-04:00)
  Meeting Number: 927 246 497
  Meeting Password: hpcmonthly
 
 
  ---
  To join the online meeting (Now from mobile devices!)
  ---
  1. Go to
  https://openstack.webex.com/openstack/j.php?ED=203524102UID=1431607857PW=NYzljOTEwYThjRT=MiMxMQ%3D%3D
  2. If requested, enter your name and email address.
  3. If a password is required, enter the meeting password: hpcmonthly
  4. Click Join.
 
  To view in other time zones or languages, please click the link:
  https://openstack.webex.com/openstack/j.php?ED=203524102UID=1431607857PW=NYzljOTEwYThjORT=MiMxMQ%3D%3D
 
 
  ___
  OpenStack-HPC mailing list
  openstack-...@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-hpc
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help : https://help.launchpad.net/ListHelp
 
 
  ___
  OpenStack-HPC mailing list
  openstack-...@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-hpc
 
 ___
 OpenStack-HPC mailing list
 openstack-...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-hpc

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-hpc] [HPC] Reminder monthly telecon Sep. 10

2012-09-10 Thread David Kang

 Webex,

 https://usc-isi.webex.com/mw0306ld/mywebex/default.do?siteurl=usc-isi

 password: dodcs

 David 

--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

- Original Message -
 While setting up webex, please call
 1-866-528-2256; code: 3289628
 
 for the audio conference.
 
 Thanks,
 David
 
 
 --
 Dr. Dong-In David Kang
 Computer Scientist
 USC/ISI
 
 - Original Message -
  Hello,
 
  We will use different webex host today because of some technical
  problem.
  Sorry for that.
 
  David
 
  --
  Dr. Dong-In David Kang
  Computer Scientist
  USC/ISI
 
  - Original Message -
   I am guessing your intent is to determine the maximum available
   bandwidth and lowest latency (commonly implemented as least hops)
   path
   between hosts. In other platforms there is the notion of Cell,
   Zone,
   Row, Rack etc where the host that you are running your workload
   has
   the topology encoded in the host meta information itself.
  
  
   In instances where this is not encoded within some sort of meta of
   the
   host either shortest path first or constrained shortest path first
   can
   be run to determine the network, topology and either distributed
   to
   the nodes. The challenge here is that it is really hard to take
   into
   account available bandwidth between nodes vs hops.
  
  
  
  
  
  
   Regards,
  
   Colin
  
   If you would like to schedule a time to speak with me, please
   click
   here to see my calendar and pick a time that works for your
   schedule.
   The system will automatically send us both an outlook meeting
   invite.
   Colin McNamara
   (858)208-8105
   CCIE #18233,VCP
   http://www.colinmcnamara.com
   http://www.linkedin.com/in/colinmcnamara
  
   The difficult we do immediately, the impossible just takes a
   little
   longer
  
  
  
  
  
  
  
   On Sep 7, 2012, at 9:54 AM, Joseph Suh  j...@isi.edu  wrote:
  
  
   All,
  
   I have a blue print on proximity scheduler at
   http://wiki.openstack.org/ProximityScheduler , and would like to
   get
   feedback on it.
  
   Thanks,
  
   Joseph
  
   - Original Message -
   From: John Paul Walters  jwalt...@isi.edu 
   To: openstack@lists.launchpad.net
   Cc: openstack-...@lists.openstack.org
   Sent: Friday, September 7, 2012 12:12:20 PM
   Subject: [openstack-hpc] [HPC] Reminder monthly telecon Sep. 10
  
  
   Hi,
  
  
   This is a reminder that we'll hold our next monthly HPC telecon
   this
   coming Monday, Sep. 10 and 12:00 noon Eastern Time. We'll use
   webex
   (details below). The agenda is somewhat open. Our default will be
   to
   start the conversation about HPC features that folks are
   interested
   in
   adding to the Grizzly release. If anyone has any other specific
   agenda
   items, they're welcome to propose them.
  
  
   I'm unable to attend, so my colleague David Kang will be hosting
   this
   meeting. We look forward to talking to you!
  
  
   best,
   JP
  
  
   John Paul Walters invites you to attend this online meeting.
  
   Topic: HPC Monthly Telecon
   Date: Monday, September 10, 2012
   Time: 12:00 pm, Eastern Daylight Time (New York, GMT-04:00)
   Meeting Number: 927 246 497
   Meeting Password: hpcmonthly
  
  
   ---
   To join the online meeting (Now from mobile devices!)
   ---
   1. Go to
   https://openstack.webex.com/openstack/j.php?ED=203524102UID=1431607857PW=NYzljOTEwYThjRT=MiMxMQ%3D%3D
   2. If requested, enter your name and email address.
   3. If a password is required, enter the meeting password:
   hpcmonthly
   4. Click Join.
  
   To view in other time zones or languages, please click the link:
   https://openstack.webex.com/openstack/j.php?ED=203524102UID=1431607857PW=NYzljOTEwYThjORT=MiMxMQ%3D%3D
  
  
   ___
   OpenStack-HPC mailing list
   openstack-...@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-hpc
  
   ___
   Mailing list: https://launchpad.net/~openstack
   Post to : openstack@lists.launchpad.net
   Unsubscribe : https://launchpad.net/~openstack
   More help : https://help.launchpad.net/ListHelp
  
  
   ___
   OpenStack-HPC mailing list
   openstack-...@lists.openstack.org
   http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-hpc
 
  ___
  OpenStack-HPC mailing list
  openstack-...@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-hpc
 
 ___
 OpenStack-HPC mailing list
 openstack-...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-hpc

___
Mailing list: https://launchpad.net

[Openstack-qa-team] Need to rebase

2012-09-06 Thread David Kranz
As of a few hours ago the tempest gate is unblocked. However, it seems 
that all the pending changes need to be rebased.


 -David

--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


[Openstack] Can't change X-Storage-Url from localhost

2012-08-28 Thread David Krider
I seem to be having this exact problem, but the fix doesn't work for me:

https://answers.launchpad.net/swift/+question/157858

No matter what I set the default_swift_cluster to, or if I add a bind_ip
to the DEFAULT section, I can't get X-Storage-Url to come back as
anything other than localhost:

dkrider@workstation:~$ curl -k -v -H 'X-Storage-User: test:tester' -H
'X-Storage-Pass: testing' https://external_ip:8080/auth/v1.0
* About to connect() to external_ip port 8080 (#0)
*   Trying external_ip... connected
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using AES256-SHA
* Server certificate:
*  subject: C=AU; ST=Some-State; O=Internet Widgits Pty Ltd
*  start date: 2012-08-14 13:51:32 GMT
*  expire date: 2012-09-13 13:51:32 GMT
* SSL: unable to obtain common name from peer certificate
 GET /auth/v1.0 HTTP/1.1
 User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0
OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
 Host: external_ip:8080
 Accept: */*
 X-Storage-User: test:tester
 X-Storage-Pass: testing

 HTTP/1.1 200 OK
 X-Storage-Url:
https://127.0.0.1:8080/v1/AUTH_e6ecde05-959a-4898-907b-5bec495fa4f0
 X-Storage-Token: AUTH_tk36c97915aed242b7b9a93aa05c06ba0c
 X-Auth-Token: AUTH_tk36c97915aed242b7b9a93aa05c06ba0c
 Content-Length: 113
 Date: Tue, 28 Aug 2012 18:38:34 GMT

* Connection #0 to host external_ip left intact
* Closing connection #0
* SSLv3, TLS alert, Client hello (1):
{storage: {default: local, local:
https://127.0.0.1:8080/v1/AUTH_e6ecde05-959a-4898-907b-5bec495fa4f0}}

Have I run into a bug, or is there something simple I'm overlooking in
the config file?

/etc/swift/proxy-server.conf
-
[DEFAULT]
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key
bind_port = 8080
workers = 8
user = swift

[pipeline:main]
pipeline = healthcheck cache swauth proxy-server

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:cache]
use = egg:swift#memcache
memcache_servers = 10.1.7.10:11211,10.1.7.11:11211

[filter:swauth]
use = egg:swauth#swauth
set_log_level = DEBUG
super_admin_key = asdfqwer
default_swift_cluster =
local#https://external_ip:8080/v1#https://127.0.0.1:8080/v1

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
log_level = DEBUG
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Can't change X-Storage-Url from localhost

2012-08-28 Thread David Krider
I finally found the place to search the archives. I think this is the
answer:

https://answers.launchpad.net/swift/+question/148450

I will have a play.

On 08/28/2012 02:56 PM, David Krider wrote:
 I seem to be having this exact problem, but the fix doesn't work for me:

 https://answers.launchpad.net/swift/+question/157858

 No matter what I set the default_swift_cluster to, or if I add a
 bind_ip to the DEFAULT section, I can't get X-Storage-Url to come back
 as anything other than localhost:

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-27 Thread David Kang

 Vish,

 I don't think I fully understand your statement.
Unless we use different hostnames, (hostname, hypervisor_hostname) must be the
same for all bare-metal nodes under a bare-metal nova-compute.

 Could you elaborate on the following statement a little more?

 You would just have to use a little more than hostname. Perhaps
 (hostname, hypervisor_hostname) could be used to update the entry?
 

 Thanks,
 David



- Original Message -
 I would investigate changing the capabilities to key off of something
 other than hostname. It looks from the table structure like
 compute_nodes could have a many-to-one relationship with services.
 You would just have to use a little more than hostname. Perhaps
 (hostname, hypervisor_hostname) could be used to update the entry?
 
 Vish
 
 On Aug 24, 2012, at 11:23 AM, David Kang dk...@isi.edu wrote:
 
 
   Vish,
 
   I've tested your code and did more testing.
  There are a couple of problems.
  1. host name should be unique. If not, any repetitive updates of new
  capabilities with the same host name are simply overwritten.
  2. We cannot generate arbitrary host names on the fly.
    The scheduler (I tested filter scheduler) gets host names from db.
    So, if a host name is not in the 'services' table, it is not
    considered by the scheduler at all.
 
  So, to make your suggestions possible, nova-compute should register
  N different host names in 'services' table,
  and N corresponding entries in 'compute_nodes' table.
  Here is an example:
 
   mysql> select id, host, binary, topic, report_count, disabled, availability_zone from services;
   +----+-------------+----------------+-----------+--------------+----------+-------------------+
   | id | host        | binary         | topic     | report_count | disabled | availability_zone |
   +----+-------------+----------------+-----------+--------------+----------+-------------------+
   |  1 | bespin101   | nova-scheduler | scheduler |        17145 |        0 | nova              |
   |  2 | bespin101   | nova-network   | network   |        16819 |        0 | nova              |
   |  3 | bespin101-0 | nova-compute   | compute   |        16405 |        0 | nova              |
   |  4 | bespin101-1 | nova-compute   | compute   |            1 |        0 | nova              |
   +----+-------------+----------------+-----------+--------------+----------+-------------------+
  
   mysql> select id, service_id, hypervisor_hostname from compute_nodes;
   +----+------------+------------------------+
   | id | service_id | hypervisor_hostname    |
   +----+------------+------------------------+
   |  1 |          3 | bespin101.east.isi.edu |
   |  2 |          4 | bespin101.east.isi.edu |
   +----+------------+------------------------+
 
   Then, nova db (compute_nodes table) has entries of all bare-metal
   nodes.
  What do you think of this approach.
  Do you have any better approach?
 
   Thanks,
   David
 
 
 
  - Original Message -
  To elaborate, something the below. I'm not absolutely sure you need
  to
  be able to set service_name and host, but this gives you the option
  to
  do so if needed.
 
   diff --git a/nova/manager.py b/nova/manager.py
   index c6711aa..c0f4669 100644
   --- a/nova/manager.py
   +++ b/nova/manager.py
   @@ -217,6 +217,8 @@ class SchedulerDependentManager(Manager):
   
        def update_service_capabilities(self, capabilities):
            """Remember these capabilities to send on next periodic update."""
   +        if not isinstance(capabilities, list):
   +            capabilities = [capabilities]
            self.last_capabilities = capabilities
   
        @periodic_task
   @@ -224,5 +226,8 @@ class SchedulerDependentManager(Manager):
            """Pass data back to the scheduler at a periodic interval."""
            if self.last_capabilities:
                LOG.debug(_('Notifying Schedulers of capabilities ...'))
   -            self.scheduler_rpcapi.update_service_capabilities(context,
   -                    self.service_name, self.host, self.last_capabilities)
   +            for capability_item in self.last_capabilities:
   +                name = capability_item.get('service_name', self.service_name)
   +                host = capability_item.get('host', self.host)
   +                self.scheduler_rpcapi.update_service_capabilities(context,
   +                        name, host, capability_item)
 
  On Aug 21, 2012, at 1:28 PM, David Kang dk...@isi.edu wrote:
 
 
   Hi Vish,
 
   We are trying to change our code according to your comment.
  I want to ask a question.
 
  a) modify driver.get_host_stats to be able to return a list of
  host
  stats instead of just one. Report the whole list back to the
  scheduler. We could modify the receiving end to accept a list
  as
  well
  or just make multiple calls to
  self.update_service_capabilities(capabilities)
 
   Modifying driver.get_host_stats to return a list of host stats is
   easy.
  Calling muliple calls to
  self.update_service_capabilities(capabilities) doesn't seem to
  work,
  because 'capabilities' is overwritten each time.
 
   Modifying the receiving end to accept a list seems to be easy.
  However, 'capabilities' is assumed to be dictionary by all other
  scheduler routines,
  it looks like that we have to change all of them to handle
  'capability' as a list of dictionary.
 
   If my

Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-27 Thread David Kang

 Hi Vish,

 I think I understand your idea.
One service entry with multiple bare-metal compute_node entries is registered
at the start of the bare-metal nova-compute.
'hypervisor_hostname' must be different for each bare-metal machine, such as
'bare-metal-0001.xxx.com', 'bare-metal-0002.xxx.com', etc.
But their IP addresses must be the IP address of the bare-metal nova-compute, so
that an instance is cast
not to a bare-metal machine directly but to the bare-metal nova-compute.

 One extension we need on the scheduler side is to use (host,
hypervisor_hostname) instead of (host) alone in host_manager.py.
'HostManager.service_state' is currently { host : { service : { cap key : value }}}.
It needs to be changed to { host : { service : { hypervisor_hostname : { cap key : value }}}}.

Most functions of HostState need to be changed to use the (host, hypervisor_hostname)
pair to identify a compute node.
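
In other words, something like the following (illustrative shapes with made-up values, not code from host_manager.py):

# Today: one capabilities dict per (host, service).
service_states_now = {
    'bespin101': {
        'compute': {'vcpus': 8, 'free_ram_mb': 32768},
    },
}

# Proposed: one extra level keyed by hypervisor_hostname, so a single proxy
# host can report capabilities for many bare-metal machines.
service_states_proposed = {
    'bespin101': {
        'compute': {
            'bare-metal-0001.xxx.com': {'vcpus': 8, 'free_ram_mb': 32768},
            'bare-metal-0002.xxx.com': {'vcpus': 16, 'free_ram_mb': 65536},
        },
    },
}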

 Are we on the same page, now?

 Thanks,
 David

- Original Message -
 Hi David,
 
 I just checked out the code more extensively and I don't see why you
 need to create a new service entry for each compute_node entry. The
 code in host_manager to get all host states explicitly gets all
 compute_node entries. I don't see any reason why multiple compute_node
 entries can't share the same service. I don't see any place in the
 scheduler that is grabbing records by service instead of by compute
 node, but if there is one that I missed, it should be fairly easy to
 change it.
 
 The compute_node record is created in the compute/resource_tracker.py
 as of a recent commit, so I think the path forward would be to make
 sure that one of the records is created for each bare metal node by
 the bare metal compute, perhaps by having multiple resource_trackers.
 
 Vish
 
 On Aug 27, 2012, at 9:40 AM, David Kang dk...@isi.edu wrote:
 
 
   Vish,
 
   I think I don't understand your statement fully.
  Unless we use different hostnames, (hostname, hypervisor_hostname)
  must be the
  same for all bare-metal nodes under a bare-metal nova-compute.
 
   Could you elaborate the following statement a little bit more?
 
  You would just have to use a little more than hostname. Perhaps
  (hostname, hypervisor_hostname) could be used to update the entry?
 
 
   Thanks,
   David
 
 
 
  - Original Message -
  I would investigate changing the capabilities to key off of
  something
  other than hostname. It looks from the table structure like
  compute_nodes could be have a many-to-one relationship with
  services.
  You would just have to use a little more than hostname. Perhaps
  (hostname, hypervisor_hostname) could be used to update the entry?
 
  Vish
 
  On Aug 24, 2012, at 11:23 AM, David Kang dk...@isi.edu wrote:
 
 
   Vish,
 
   I've tested your code and did more testing.
  There are a couple of problems.
  1. host name should be unique. If not, any repetitive updates of
  new
  capabilities with the same host name are simply overwritten.
  2. We cannot generate arbitrary host names on the fly.
The scheduler (I tested filter scheduler) gets host names from
db.
So, if a host name is not in the 'services' table, it is not
considered by the scheduler at all.
 
  So, to make your suggestions possible, nova-compute should
  register
  N different host names in 'services' table,
  and N corresponding entries in 'compute_nodes' table.
  Here is an example:
 
   mysql> select id, host, binary, topic, report_count, disabled, availability_zone from services;
   +----+-------------+----------------+-----------+--------------+----------+-------------------+
   | id | host        | binary         | topic     | report_count | disabled | availability_zone |
   +----+-------------+----------------+-----------+--------------+----------+-------------------+
   |  1 | bespin101   | nova-scheduler | scheduler |        17145 |        0 | nova              |
   |  2 | bespin101   | nova-network   | network   |        16819 |        0 | nova              |
   |  3 | bespin101-0 | nova-compute   | compute   |        16405 |        0 | nova              |
   |  4 | bespin101-1 | nova-compute   | compute   |            1 |        0 | nova              |
   +----+-------------+----------------+-----------+--------------+----------+-------------------+
  
   mysql> select id, service_id, hypervisor_hostname from compute_nodes;
   +----+------------+------------------------+
   | id | service_id | hypervisor_hostname    |
   +----+------------+------------------------+
   |  1 |          3 | bespin101.east.isi.edu |
   |  2 |          4 | bespin101.east.isi.edu |
   +----+------------+------------------------+
 
   Then, nova db (compute_nodes table) has entries of all bare-metal
   nodes.
  What do you think of this approach.
  Do you have any better approach?
 
   Thanks,
   David
 
 
 
  - Original Message -
  To elaborate, something the below. I'm not absolutely sure you
  need
  to
  be able to set service_name and host, but this gives you the
  option
  to
  do so if needed.
 
   diff --git a/nova/manager.py b/nova/manager.py
  index c6711aa..c0f4669 100644
  --- a/nova/manager.py
  +++ b/nova/manager.py
  @@ -217,6

Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-27 Thread David Kang

 Michael,

 It is a little confusing without knowing the assumptions behind your suggestions.
First of all, I want to make sure that you agree on the following:
1. one entry per bare-metal machine in the 'compute_nodes' table.
2. one entry in the 'services' table for the bare-metal nova-compute that manages N
bare-metal machines.

In addition to that, I think you are suggesting augmenting the 'host' field in the
'services' table
such that the 'host' field can be used for RPC.
(I don't think the current 'host' field can be used for that purpose now.)

 David

- Original Message -
 David Kang dk...@isi.edu wrote on 08/27/2012 05:22:37 PM:
 
  From: David Kang dk...@isi.edu
  To: Michael J Fork/Rochester/IBM@IBMUS,
  Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
  openstack@lists.launchpad.net, openstack-bounces+mjfork=us ibm com
  openstack-bounces+mjfork=us.ibm@lists.launchpad.net, OpenStack
  Development Mailing List openstack-...@lists.openstack.org,
  Vishvananda Ishaya vishvana...@gmail.com
  Date: 08/27/2012 05:22 PM
  Subject: Re: [Openstack] [openstack-dev] Discussion about where to
  put database for bare-metal provisioning (review 10726)
 
 
  Michael,
 
  I think you mean compute_node hostname as 'hypervisor_hostname'
  field in the 'compute_node' table.
 
 Yes. This value would be part of the payload of the message cast to
 the proxy node so that it knows who the request was directed to.
 
  What do you mean by service hostname?
  I don't see such field in the 'service' table in the database.
  Is it in some other table?
  Or do you suggest adding 'service_hostname' field in the 'service'
  table?
 
 The host field in the services table. This value would be used as
 the target of the rpc cast so that the proxy node would receive the
 message.
 
 
  Thanks,
  David
 
  - Original Message -
   openstack-bounces+mjfork=us.ibm@lists.launchpad.net wrote on
   08/27/2012 02:58:56 PM:
  
From: David Kang dk...@isi.edu
To: Vishvananda Ishaya vishvana...@gmail.com,
Cc: OpenStack Development Mailing List openstack-
d...@lists.openstack.org, openstack@lists.launchpad.net \
(openstack@lists.launchpad.net\)
openstack@lists.launchpad.net
Date: 08/27/2012 03:06 PM
Subject: Re: [Openstack] [openstack-dev] Discussion about where
to
put database for bare-metal provisioning (review 10726)
Sent by: openstack-bounces+mjfork=us.ibm@lists.launchpad.net
   
   
Hi Vish,
   
I think I understand your idea.
One service entry with multiple bare-metal compute_node entries
are
registered at the start of bare-metal nova-compute.
'hypervisor_hostname' must be different for each bare-metal
machine,
such as 'bare-metal-0001.xxx.com', 'bare-metal-0002.xxx.com',
etc.)
But their IP addresses must be the IP address of bare-metal
nova-
compute, such that an instance is casted
not to bare-metal machine directly but to bare-metal
nova-compute.
  
   I believe the change here is to cast out the message to the
   topic.service-hostname. Existing code sends it to the
   compute_node
   hostname (see line 202 of nova/scheduler/filter_scheduler.py,
   specifically host=weighted_host.host_state.host). Changing that to
   cast to the service hostname would send the message to the
   bare-metal
   proxy node and should not have an effect on current deployments
   since
   the service hostname and the host_state.host would always be
   equal.
   This model will also let you keep the bare-metal compute node IP
   in
   the compute node table.
  
One extension we need to do at the scheduler side is using
(host,
hypervisor_hostname) instead of (host) only in host_manager.py.
'HostManager.service_state' is { host : { service  : { cap k
: v
}}}.
It needs to be changed to { host : { service : {
hypervisor_name : { cap k : v .
Most functions of HostState need to be changed to use (host,
hypervisor_name) pair to identify a compute node.
  
   Would an alternative here be to change the top level host to be
   the
   hypervisor_hostname and enforce uniqueness?
  
Are we on the same page, now?
   
Thanks,
David
   
- Original Message -
 Hi David,

 I just checked out the code more extensively and I don't see
 why
 you
 need to create a new service entry for each compute_node
 entry.
 The
 code in host_manager to get all host states explicitly gets
 all
 compute_node entries. I don't see any reason why multiple
 compute_node
 entries can't share the same service. I don't see any place in
 the
 scheduler that is grabbing records by service instead of by
 compute
 node, but if there is one that I missed, it should be fairly
 easy
 to
 change it.

 The compute_node record is created in the
 compute/resource_tracker.py
 as of a recent commit, so I think the path forward

Re: [Openstack] [Nova] Instance Type Extra Specs clarifications

2012-08-24 Thread David Kang

 Patrick,

 We are using the feature in bare-metal machine provisioning.
Some keys are generated automatically by nova-compute.
For example, the hypervisor_type, hypervisor_version, etc. fields are
automatically
put into the capabilities by nova-compute (in the case of libvirt),
so you don't need to specify those.
But if you want to add custom fields, you should put them into the nova.conf file
of
the nova-compute node.

 Since the new keys are put into 'capabilities',
each new key must be different from any other key in 'capabilities'.
If that uniqueness is enforced, a key can be any string, I believe.
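
As a rough illustration of how such an extra_specs entry is matched against the advertised capabilities (a simplified sketch, not the actual compute_capabilities_filter code; only the 's==' operator from the example above is handled):

def spec_matches(capability_value, spec):
    # "s== x86_64" means: string-compare the capability against "x86_64".
    words = spec.split()
    if words and words[0] == 's==':
        return str(capability_value) == words[1]
    # Bare value: fall back to a plain string comparison.
    return str(capability_value) == spec

capabilities = {'cpu_arch': 'x86_64', 'hypervisor_type': 'QEMU'}
extra_specs = {'cpu_arch': 's== x86_64'}

ok = all(spec_matches(capabilities.get(k), v) for k, v in extra_specs.items())
print(ok)   # True -> the host passes the extra-specs check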

 Thanks,
 David

--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

- Original Message -
 Hi,
 
 
 Could someone give a practical overview of how to configure and use
 the instance type extra specs extension capability introduced in
 Folsom?
 
 
 How to extend an instance type is relatively clear.
 
 
 Eg.: #nova-manage instance_type set_key --name=my.instancetype --key
 cpu_arch --value 's== x86_64'
 
 
 The principles of capability advertising are less clear. Is it
 assumed that the key/value pairs are always declared statically as
 flags in nova.conf on the compute node, or can they be generated
 dynamically, and if so, by what? Also, are the keys
 completely free-form strings or strings that are known (reserved) by
 Nova?
 
 
 Thanks in advance for clarifying this.
 
 
 Patrick
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-24 Thread David Kang

 Vish,

 I've tested your code and done some more testing.
There are a couple of problems.
1. Host names must be unique. If not, repeated updates of new
capabilities under the same host name simply overwrite each other.
2. We cannot generate arbitrary host names on the fly.
  The scheduler (I tested the filter scheduler) gets host names from the db,
  so if a host name is not in the 'services' table, it is not considered by
the scheduler at all.

So, to make your suggestion possible, nova-compute should register N different
host names in the 'services' table,
and N corresponding entries in the 'compute_nodes' table.
Here is an example:

mysql> select id, host, binary, topic, report_count, disabled, availability_zone from services;
+----+-------------+----------------+-----------+--------------+----------+-------------------+
| id | host        | binary         | topic     | report_count | disabled | availability_zone |
+----+-------------+----------------+-----------+--------------+----------+-------------------+
|  1 | bespin101   | nova-scheduler | scheduler |        17145 |        0 | nova              |
|  2 | bespin101   | nova-network   | network   |        16819 |        0 | nova              |
|  3 | bespin101-0 | nova-compute   | compute   |        16405 |        0 | nova              |
|  4 | bespin101-1 | nova-compute   | compute   |            1 |        0 | nova              |
+----+-------------+----------------+-----------+--------------+----------+-------------------+

mysql> select id, service_id, hypervisor_hostname from compute_nodes;
+----+------------+------------------------+
| id | service_id | hypervisor_hostname    |
+----+------------+------------------------+
|  1 |          3 | bespin101.east.isi.edu |
|  2 |          4 | bespin101.east.isi.edu |
+----+------------+------------------------+

 Then, the nova db (compute_nodes table) has entries for all bare-metal nodes.
What do you think of this approach?
Do you have a better approach?

 Thanks,
 David



- Original Message -
 To elaborate, something like the below. I'm not absolutely sure you need to
 be able to set service_name and host, but this gives you the option to
 do so if needed.
 
 diff --git a/nova/manager.py b/nova/manager.py
 index c6711aa..c0f4669 100644
 --- a/nova/manager.py
 +++ b/nova/manager.py
 @@ -217,6 +217,8 @@ class SchedulerDependentManager(Manager):
  
      def update_service_capabilities(self, capabilities):
          """Remember these capabilities to send on next periodic update."""
 +        if not isinstance(capabilities, list):
 +            capabilities = [capabilities]
          self.last_capabilities = capabilities
  
      @periodic_task
 @@ -224,5 +226,8 @@ class SchedulerDependentManager(Manager):
          """Pass data back to the scheduler at a periodic interval."""
          if self.last_capabilities:
              LOG.debug(_('Notifying Schedulers of capabilities ...'))
 -            self.scheduler_rpcapi.update_service_capabilities(context,
 -                    self.service_name, self.host, self.last_capabilities)
 +            for capability_item in self.last_capabilities:
 +                name = capability_item.get('service_name', self.service_name)
 +                host = capability_item.get('host', self.host)
 +                self.scheduler_rpcapi.update_service_capabilities(context,
 +                        name, host, capability_item)
 
 On Aug 21, 2012, at 1:28 PM, David Kang dk...@isi.edu wrote:
 
 
   Hi Vish,
 
   We are trying to change our code according to your comment.
  I want to ask a question.
 
  a) modify driver.get_host_stats to be able to return a list of
  host
  stats instead of just one. Report the whole list back to the
  scheduler. We could modify the receiving end to accept a list as
  well
  or just make multiple calls to
  self.update_service_capabilities(capabilities)
 
   Modifying driver.get_host_stats to return a list of host stats is
   easy.
  Calling muliple calls to
  self.update_service_capabilities(capabilities) doesn't seem to work,
  because 'capabilities' is overwritten each time.
 
   Modifying the receiving end to accept a list seems to be easy.
  However, 'capabilities' is assumed to be dictionary by all other
  scheduler routines,
  it looks like that we have to change all of them to handle
  'capability' as a list of dictionary.
 
   If my understanding is correct, it would affect many parts of the
   scheduler.
  Is it what you recommended?
 
   Thanks,
   David
 
 
  - Original Message -
  This was an immediate goal, the bare-metal nova-compute node could
  keep an internal database, but report capabilities through nova in
  the
  common way with the changes below. Then the scheduler wouldn't need
  access to the bare metal database at all.
 
  On Aug 15, 2012, at 4:23 PM, David Kang dk...@isi.edu wrote:
 
 
  Hi Vish,
 
  Is this discussion for long-term goal or for this Folsom release?
 
  We still believe that bare-metal database is needed
  because there is not an automated way how bare-metal nodes report
  their capabilities
  to their bare-metal nova-compute node.
 
  Thanks,
  David
 
 
  I am interested in finding a solution

Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-21 Thread David Kang

 Hi Vish,

 We are trying to change our code according to your comment.
I want to ask a question.

  a) modify driver.get_host_stats to be able to return a list of host
  stats instead of just one. Report the whole list back to the
  scheduler. We could modify the receiving end to accept a list as
  well
  or just make multiple calls to
  self.update_service_capabilities(capabilities)

 Modifying driver.get_host_stats to return a list of host stats is easy.
Making multiple calls to self.update_service_capabilities(capabilities) doesn't
seem to work,
because 'capabilities' is overwritten each time.
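
That is, with the current code each call simply replaces the previous one (a toy sketch with made-up values):

class FakeManager(object):
    def update_service_capabilities(self, capabilities):
        self.last_capabilities = capabilities   # plain assignment, no merging

mgr = FakeManager()
mgr.update_service_capabilities({'host': 'bare-metal-0001', 'free_ram_mb': 32768})
mgr.update_service_capabilities({'host': 'bare-metal-0002', 'free_ram_mb': 65536})

print(mgr.last_capabilities)   # only the bare-metal-0002 dict survives, hence
                               # the list-based change in the diff below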

 Modifying the receiving end to accept a list seems to be easy.
However, since 'capabilities' is assumed to be a dictionary by all the other scheduler
routines,
it looks like we would have to change all of them to handle 'capabilities' as a
list of dictionaries.

 If my understanding is correct, it would affect many parts of the scheduler.
Is that what you recommended?

 Thanks,
 David
 

- Original Message -
 This was an immediate goal: the bare-metal nova-compute node could
 keep an internal database, but report capabilities through nova in the
 common way with the changes below. Then the scheduler wouldn't need
 access to the bare-metal database at all.
 
 On Aug 15, 2012, at 4:23 PM, David Kang dk...@isi.edu wrote:
 
 
  Hi Vish,
 
  Is this discussion for long-term goal or for this Folsom release?
 
  We still believe that bare-metal database is needed
  because there is not an automated way how bare-metal nodes report
  their capabilities
  to their bare-metal nova-compute node.
 
  Thanks,
  David
 
 
  I am interested in finding a solution that enables bare-metal and
  virtualized requests to be serviced through the same scheduler
  where
  the compute_nodes table has a full view of schedulable resources.
  This
  would seem to simplify the end-to-end flow while opening up some
  additional use cases (e.g. dynamic allocation of a node from
  bare-metal to hypervisor and back).
 
  One approach would be to have a proxy running a single nova-compute
  daemon fronting the bare-metal nodes . That nova-compute daemon
  would
  report up many HostState objects (1 per bare-metal node) to become
  entries in the compute_nodes table and accessible through the
  scheduler HostManager object.
 
 
 
 
  The HostState object would set cpu_info, vcpus, member_mb and
  local_gb
  values to be used for scheduling with the hypervisor_host field
  holding the bare-metal machine address (e.g. for IPMI based
  commands)
  and hypervisor_type = NONE. The bare-metal Flavors are created with
  an
  extra_spec of hypervisor_type= NONE and the corresponding
  compute_capabilities_filter would reduce the available hosts to
  those
  bare_metal nodes. The scheduler would need to understand that
  hypervisor_type = NONE means you need an exact fit (or best-fit)
  host
  vs weighting them (perhaps through the multi-scheduler). The
  scheduler
  would cast out the message to the topic.service-hostname (code
  today uses the HostState hostname), with the compute driver having
  to
  understand if it must be serviced elsewhere (but does not break any
  existing implementations since it is 1 to 1).
 
 
 
 
 
  Does this solution seem workable? Anything I missed?
 
  The bare metal driver already is proxying for the other nodes so it
  sounds like we need a couple of things to make this happen:
 
 
  a) modify driver.get_host_stats to be able to return a list of host
  stats instead of just one. Report the whole list back to the
  scheduler. We could modify the receiving end to accept a list as
  well
  or just make multiple calls to
  self.update_service_capabilities(capabilities)
 
 
  b) make a few minor changes to the scheduler to make sure filtering
  still works. Note the changes here may be very helpful:
 
 
  https://review.openstack.org/10327
 
 
  c) we have to make sure that instances launched on those nodes take
  up
  the entire host state somehow. We could probably do this by making
  sure that the instance_type ram, mb, gb etc. matches what the node
  has, but we may want a new boolean field used if those aren't
  sufficient.
 
 
  I This approach seems pretty good. We could potentially get rid of
  the
  shared bare_metal_node table. I guess the only other concern is how
  you populate the capabilities that the bare metal nodes are
  reporting.
  I guess an api extension that rpcs to a baremetal node to add the
  node. Maybe someday this could be autogenerated by the bare metal
  host
  looking in its arp table for dhcp requests! :)
 
 
  Vish
 
  ___
  OpenStack-dev mailing list
  openstack-...@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  openstack-...@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [Openstack] A Step-by-Step Guide to Deploying OpenStack on CentOS Using the KVM Hypervisor and GlusterFS Distributed File System

2012-08-17 Thread David Busby
Hi Anton,

Thanks for this; having a quick read through, it looks great.

I'd be interested to know what sort of performance you see with gluster
providing a replicated file system. Have you been able to do some high-I/O
burn-in tests on guests?

Thanks

David



On Fri, Aug 17, 2012 at 9:16 AM, Anton Beloglazov 
anton.belogla...@gmail.com wrote:

 Hi All,

 I and other people from the CLOUDS lab (http://www.cloudbus.org/) have
 just completed writing a step-by-step guide to deploying OpenStack on
 multiple nodes with CentOS 6.3 using KVM and GlusterFS based on our
 experience. Each step is implemented as a separate shell script, which
 allows going slowly to understand every installation step. I thought it
 might be useful for some people; therefore, I'm announcing it in this
 mailing list.

 The guide is available as a PDF:
 https://github.com/beloglazov/openstack-centos-kvm-glusterfs/raw/master/doc/openstack-centos-kvm-glusterfs-guide.pdf

 All the shell scripts are on github:
 https://github.com/beloglazov/openstack-centos-kvm-glusterfs

 Best regards,
 Anton Beloglazov

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] A Step-by-Step Guide to Deploying OpenStack on CentOS Using the KVM Hypervisor and GlusterFS Distributed File System

2012-08-17 Thread David Busby
Hi Anton,

For a straight gluster vs. native comparison, sysbench (
http://sysbench.sourceforge.net/docs/#fileio_mode , also available from EPEL
for EL6: http://koji.fedoraproject.org/koji/buildinfo?buildID=262308 ) may
be able to give some insight into guest I/O performance over an extended
period of time.

Potentially you could then look at concurrency by running sysbench on
multiple guests to gauge degradation of performance due to concurrent I/O
across nodes (if any exists); here I'd be particularly curious whether high I/O
on one compute node, due to replication, causes a performance hit on
another node.

Regards

David



On Fri, Aug 17, 2012 at 9:45 AM, Anton Beloglazov 
anton.belogla...@gmail.com wrote:

 Hi David,

 I haven't had a chance to run any performance tests yet. What kind of
 tests would you suggest?

 Thanks,
 Anton


 On Fri, Aug 17, 2012 at 6:40 PM, David Busby d.bu...@saiweb.co.uk wrote:

 Hi Anton,

 Thanks for this, having a quick read through it looks great.

 I'd be interested to know what sort of performance you see with gluster
 providing a replicated file system, have you been able to do some high I/O
 burn in tests on guests?

 Thanks

 David



 On Fri, Aug 17, 2012 at 9:16 AM, Anton Beloglazov 
 anton.belogla...@gmail.com wrote:

 Hi All,

 I and other people from the CLOUDS lab (http://www.cloudbus.org/) have
 just completed writing a step-by-step guide to deploying OpenStack on
 multiple nodes with CentOS 6.3 using KVM and GlusterFS based on our
 experience. Each step is implemented as a separate shell script, which
 allows going slowly to understand every installation step. I thought it
 might be useful for some people; therefore, I'm announcing it in this
 mailing list.

 The guide is available as a PDF:
 https://github.com/beloglazov/openstack-centos-kvm-glusterfs/raw/master/doc/openstack-centos-kvm-glusterfs-guide.pdf

 All the shell scripts are on github:
 https://github.com/beloglazov/openstack-centos-kvm-glusterfs

 Best regards,
 Anton Beloglazov

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-15 Thread David Kang

 
 The bare-metal database includes five tables:

1. bm_nodes   // This is similar to compute_node table
2. bm_deployments   // The status of deployment of bare-metal nodes
3. bm_pxe_ips   // PXE information for bare-metal nodes
4. bm_interfaces   // network information of bare-metal nodes
5. migrate_version  // for database migration

 The information about the bare-metal nodes and their status
is sent to the scheduler as an aggregate capability set of the bare-metal
machines.
Our current approach is to have a new BaremetalHostManager
(nova/nova/scheduler/baremetal_host_manager.py)
that caches the information.
BaremetalHostManager gets the information by accessing the bare-metal db directly
for now (proposed patch 4).
That works only when a single shared bare-metal db exists,
but it looks like a non-shared bare-metal db is preferred.
We will change BaremetalHostManager to use RPC (instead of db access) to get
that information from multiple bare-metal nova-compute nodes.

 Thanks,
 David




--
Dr. Dong-In David Kang
Computer Scientist
USC/ISI

- Original Message -
 Can you elaborate on what the purpose of this database is?
 If we compare it to KVM support, the 'primary' location of VMs'
 metadata is the libvirt internal store (outside of Nova), and it is then
 cached in the Nova DB for Nova purposes.
 A similar approach might work for bare-metal machines too -- keep the
 'primary' metadata store outside of Nova, with a cache in the Nova DB.
 
 Regards,
 Alex
 
 
 
 
 From: David Kang dk...@isi.edu
 To: OpenStack Development Mailing List
 openstack-...@lists.openstack.org, openstack@lists.launchpad.net
 (openstack@lists.launchpad.net) openstack@lists.launchpad.net,
 Date: 15/08/2012 06:32 PM
 Subject: [Openstack] Discussion about where to put database for
 bare-metal provisioning (review 10726)
 Sent by: openstack-bounces+glikson=il.ibm@lists.launchpad.net
 
 
 
 
 
 
 Hi,
 
 This is call for discussion about the code review 10726.
 https://review.openstack.org/#/c/10726/
 Mark asked why we implemented a separata database for bare-metal
 provisioning.
 Here we describe our thought.
 We are open to discussion and to the changes that the community
 recommends.
 Please give us your thoughts.
 
 NTT Docomo and USC/ISI have developed bare-metal provisioning.
 We created a separate database to describe bare-metal nodes, which
 currently consists of 5 tables.
 Our initial implementation assumes the database is not part of the nova
 database.
 In addition to the reasons described in the comments on the code
 review,
 here is another reason we decided on a separate database for bare-metal
 provisioning.
 
 The bare-metal database is mainly used by the bare-metal nova-compute.
 Since the bare-metal nova-compute manages multiple bare-metal machines,
 it needs to keep and update information about those machines.
 If the bare-metal database were part of the main nova db, remote access
 to the nova db by
 the bare-metal nova-compute would be inevitable.
 Vish once told us that shared db access from nova-compute is not
 desirable.
 
 It is possible to make the scheduler do the job of the bare-metal
 nova-compute.
 However, it would need big changes in how the scheduler and a
 nova-compute
 communicate. For example, currently the scheduler casts an instance
 to a
 nova-compute, but for a bare-metal node the scheduler would have to cast an
 instance
 to a bare-metal machine through the bare-metal nova-compute.
 The bare-metal nova-compute has to boot the machine, transfer the kernel, fs,
 etc.
 So the bare-metal nova-compute has to know the id of the bare-metal node and
 other information
 needed for booting (PXE ip address, ...) and more.
 That information would have to be sent to the bare-metal nova-compute by the
 scheduler.
 
 If frequent access to the bare-metal tables in the nova db from the bare-metal
 nova-compute
 is OK, we are OK with putting the bare-metal tables into the nova db.
 
 Please let us know your opinions.
 
 Thanks,
 David, Mikyung @ USC/ISI
 
 --
 Dr. Dong-In David Kang
 Computer Scientist
 USC/ISI
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] Discussion about where to put database for bare-metal provisioning (review 10726)

2012-08-15 Thread David Kang

 Hi Vish,

 Is this discussion about a long-term goal or about this Folsom release?

 We still believe that the bare-metal database is needed,
because there is no automated way for bare-metal nodes to report their
capabilities
to their bare-metal nova-compute node.

 Thanks,
 David
 
 
 I am interested in finding a solution that enables bare-metal and
 virtualized requests to be serviced through the same scheduler where
 the compute_nodes table has a full view of schedulable resources. This
 would seem to simplify the end-to-end flow while opening up some
 additional use cases (e.g. dynamic allocation of a node from
 bare-metal to hypervisor and back).
 
 One approach would be to have a proxy running a single nova-compute
 daemon fronting the bare-metal nodes . That nova-compute daemon would
 report up many HostState objects (1 per bare-metal node) to become
 entries in the compute_nodes table and accessible through the
 scheduler HostManager object.
 
 
 
 
 The HostState object would set cpu_info, vcpus, memory_mb and local_gb
 values to be used for scheduling with the hypervisor_host field
 holding the bare-metal machine address (e.g. for IPMI based commands)
 and hypervisor_type = NONE. The bare-metal Flavors are created with an
 extra_spec of hypervisor_type= NONE and the corresponding
 compute_capabilities_filter would reduce the available hosts to those
 bare_metal nodes. The scheduler would need to understand that
 hypervisor_type = NONE means you need an exact fit (or best-fit) host
 vs weighting them (perhaps through the multi-scheduler). The scheduler
 would cast out the message to the topic.service-hostname (code
 today uses the HostState hostname), with the compute driver having to
 understand if it must be serviced elsewhere (but does not break any
 existing implementations since it is 1 to 1).
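
Roughly, the proposed cast looks like this (a conceptual sketch with made-up values, not nova code; the real rpc call and message format depend on the release):

service_host = 'bespin101'                # host field of the proxy nova-compute's service row
node = 'bare-metal-0001.xxx.com'          # hypervisor_hostname of the target machine

topic = 'compute.%s' % service_host       # routing key: the message goes to the proxy
message = {'method': 'run_instance',
           'args': {'instance_uuid': 'aaaa-bbbb-cccc',
                    'hypervisor_hostname': node}}   # payload says who it is really for

print(topic, message)
# The proxy's compute driver then uses hypervisor_hostname to decide which
# bare-metal machine to power on and deploy to.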
 
 
 
 
 
 Does this solution seem workable? Anything I missed?
 
 The bare metal driver already is proxying for the other nodes so it
 sounds like we need a couple of things to make this happen:
 
 
 a) modify driver.get_host_stats to be able to return a list of host
 stats instead of just one. Report the whole list back to the
 scheduler. We could modify the receiving end to accept a list as well
 or just make multiple calls to
 self.update_service_capabilities(capabilities)
 
 
 b) make a few minor changes to the scheduler to make sure filtering
 still works. Note the changes here may be very helpful:
 
 
 https://review.openstack.org/10327
 
 
 c) we have to make sure that instances launched on those nodes take up
 the entire host state somehow. We could probably do this by making
 sure that the instance_type ram, mb, gb etc. matches what the node
 has, but we may want a new boolean field used if those aren't
 sufficient.
 
 
 This approach seems pretty good. We could potentially get rid of the
 shared bare_metal_node table. I guess the only other concern is how
 you populate the capabilities that the bare metal nodes are reporting.
 I guess an api extension that rpcs to a baremetal node to add the
 node. Maybe someday this could be autogenerated by the bare metal host
 looking in its arp table for dhcp requests! :)
 
 
 Vish
 
 ___
 OpenStack-dev mailing list
 openstack-...@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp

