Re: [openstack-dev] Who is using nova-docker? (Re: [nova-docker] Status update)

2015-05-14 Thread Abel Lopez
Again, a conversation that should include the ops list.

On Wed, May 13, 2015 at 6:28 AM, Adrian Otto adrian.o...@rackspace.com
wrote:

  Solum uses it in our Vagrant setup. It makes the dev environment perform
 very nicely, and is compatible with the Docker containers Solum generates.



  Sent from my Verizon Wireless 4G LTE smartphone


 -------- Original message --------
 From: John Griffith john.griffi...@gmail.com
 Date: 05/12/2015 9:42 PM (GMT-08:00)
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] Who is using nova-docker? (Re: [nova-docker]
 Status update)



 On Tue, May 12, 2015 at 12:09 PM, Fawad Khaliq fa...@plumgrid.com wrote:


 On Mon, May 11, 2015 at 7:14 PM, Davanum Srinivas dava...@gmail.com
 wrote:

 Good points, Dan and John.

 At this point it may be useful to see who is actually using
 nova-docker. Could folks who are using any version of nova-docker
 please speak up with a short description of their use case?

  I am using Kilo in multi-hypervisor mode, with some applications running
 in Docker containers and some backend work provisioned as VMs.
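
 As a rough sketch of how a multi-hypervisor setup like this is commonly
 wired up (illustrative only, not the exact configuration): the Docker and
 KVM drivers run on separate compute nodes, and the scheduler routes
 instances by image property.

 # nova.conf on the Docker compute nodes (nova-docker's driver class)
 [DEFAULT]
 compute_driver = novadocker.virt.docker.DockerDriver

 # nova.conf on the KVM compute nodes
 [DEFAULT]
 compute_driver = libvirt.LibvirtDriver

 # Docker-backed Glance images are tagged so ImagePropertiesFilter can
 # steer them to the Docker hosts, e.g.:
 #   glance image-update <image-id> --property hypervisor_type=docker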


 Thanks,
 dims

 On Mon, May 11, 2015 at 10:06 AM, Dan Smith d...@danplanet.com wrote:
  +1 Agreed, nested containers are a thing. It's a great reason to keep
  our LXC driver.
 
  I don't think that's a reason we should keep our LXC driver, because you
  can still run containers in containers with other things. If anything,
  using a nova VM-like container to run application-like containers inside
  it is going to create the need to tweak more detailed things on the
  VM-like container to avoid restricting the application one, I think.
 
  IMHO, the reason to keep the seldom-used, not-that-useful LXC driver in
  nova is that it's nearly free. It is the libvirt driver with a few
  conditionals to handle things differently when necessary for LXC. The
  docker driver is a whole other nova driver to maintain, with even less
  applicability as a system container (IMHO).
 
  I am keen that we set the right expectations here. If you want to treat
  docker containers like VMs, that's OK.
 
  I guess a remaining concern is the driver falling into disrepair
  if most folks end up using Magnum when they want to use Docker.
 
  I think this is likely the case, and I'd like to avoid getting into this
  situation again. IMHO, this is not our target audience, and it's very much
  not free to just put it into the tree because, meh, some people might
  like it instead of the libvirt-lxc driver.
 
  --Dan
 
 



 --
 Davanum Srinivas :: https://twitter.com/dims






  I'm using nova-docker. It started out as just learning the pieces and
 ended up being useful for some internal test and dev work on my side,
 including building and shipping apps to customers that I used to send out as
 qcows. I could certainly do this without nova-docker and go straight to
 Docker, but this is more fun and fits with my existing workflow and usage of
 OpenStack for various things.

  Also, FWIW, Magnum is currently way overkill for what I'm doing, just like
 some other projects. I do plan to check it out before long, but for now
 it's more than I need.






Re: [openstack-dev] [Cinder] how to delete a volume which is in error_deleting state

2015-01-05 Thread Abel Lopez
Also important to note: you should check the 'provider_location' column for
the volumes in error_deleting, otherwise the space may still be allocated on
the backend while Cinder thinks the volume is deleted.
I also like to update 'deleted_at' to NOW().
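
For reference, a rough sketch of that cleanup against the stuck volume from
the listing further down, assuming the stock cinder schema (a boolean
'deleted' flag plus 'deleted_at'); back up the database before running
anything like this:

-- Confirm whether the backend still holds the space before touching the row.
SELECT id, status, provider_location
  FROM volumes
 WHERE id = '7039c683-2341-4dd7-a947-e35941245ec4';

-- Once the backend is confirmed clean, soft-delete the stuck row.
UPDATE volumes
   SET status = 'deleted', deleted = 1, deleted_at = NOW()
 WHERE id = '7039c683-2341-4dd7-a947-e35941245ec4'
   AND status = 'error_deleting';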

On Mon, Jan 5, 2015 at 10:41 AM, Erlon Cruz sombra...@gmail.com wrote:

 This is usually related to a misconfiguration of the backend driver. For
 example, if you create a volume and then shut down the driver to change some
 configuration, the backend driver can get confused while trying to delete
 the volume, or may not even be able to locate the volume in the storage array.

 On Mon, Jan 5, 2015 at 3:35 AM, Eli Qiao ta...@linux.vnet.ibm.com wrote:


 On 2015-01-05 13:02, Punith S wrote:

 Hi Eli,

  You have to log in to the MySQL cinder database and try deleting the
 required volume from the volumes table using its id.
 If that fails due to foreign key constraints from the volume metadata table,
 delete the corresponding volume metadata rows first and then delete the
 volume row.
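
 As a minimal sketch (assuming the stock cinder schema, where
 volume_metadata.volume_id references volumes.id; other tables such as
 volume_glance_metadata may hold references too), using the stuck volume ID
 from the listing below, and after taking a database backup:

 -- Remove dependent metadata rows first to satisfy the foreign keys.
 DELETE FROM volume_metadata
  WHERE volume_id = '7039c683-2341-4dd7-a947-e35941245ec4';

 -- Then remove the stuck volume row itself.
 DELETE FROM volumes
  WHERE id = '7039c683-2341-4dd7-a947-e35941245ec4'
    AND status = 'error_deleting';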

   Hi Punith, I did as you suggested and it works. But is it reasonable that
  a volume in error_deleting cannot be deleted, even when it has been stuck in
  that state for quite a long time?
  Thanks.

  thanks

 On Mon, Jan 5, 2015 at 7:22 AM, Eli Qiao ta...@linux.vnet.ibm.com
 wrote:


 Hi all,
 How do I delete a Cinder volume that is in error_deleting status?
 I don't see a force-delete option in 'cinder delete', so how do we fix
 it when we run into such a situation?
 [tagett@stack-01 devstack]$ cinder list
 +--------------------------------------+----------------+-------------+------+-------------+----------+--------------------------------------+
 |                  ID                  |     Status     |     Name    | Size | Volume Type | Bootable |             Attached to              |
 +--------------------------------------+----------------+-------------+------+-------------+----------+--------------------------------------+
 | 3e0acd0a-f28f-4fe3-b6e9-e65d5c40740b |     in-use     | with_cirros |  4   | lvmdriver-1 |   true   | 428f0235-be54-462f-8916-f32965d42e63 |
 | 7039c683-2341-4dd7-a947-e35941245ec4 | error_deleting |     None    |  4   | lvmdriver-1 |  false   |                                      |
 | d576773f-6865-4959-ba26-13602ed32e89 | error_deleting |     None    |  4   | lvmdriver-1 |  false   |                                      |
 +--------------------------------------+----------------+-------------+------+-------------+----------+--------------------------------------+
 [tagett@stack-01 devstack]$ cinder delete 7039c683-2341-4dd7-a947-e35941245ec4
 Delete for volume 7039c683-2341-4dd7-a947-e35941245ec4 failed: Bad Request (HTTP 400) (Request-ID: req-e4d8cdd9-6ed5-4a7f-81de-7f38f2163d33)
 ERROR: Unable to delete any of specified volumes.

 --
 Thanks,
 Eli (Li Yong) Qiao






  --
  regards,

  punith s
 cloudbyte.com




 --
 Thanks,
 Eli (Li Yong) Qiao









[openstack-dev] database hoarding

2014-10-30 Thread Abel Lopez
It seems that every release there is more and more emphasis on upgradability.
This is a good thing; I'd love to see production users easily go from old to
new.

As an operator, I've seen firsthand the results of neglecting the databases
that OpenStack creates. If we intend to have users go year-over-year with
upgrades, we're going to expect them to carry huge databases around.

Just in my lab, I have over 10 deleted instances in the last two months.
Frankly, I'm not convinced that simply archiving deleted rows is the best idea.
Sure, it gets your production databases and tables down to a manageable size,
but you're simply hoarding old data.

As an operator, I'd also prefer time-based criteria over a number of rows.
I envision something like `nova-manage db purge [days]`, where we leave it
up to the administrator to decide how much of their old data (if any) they'd be
OK losing.
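
To make that concrete, here is a rough sketch of what such a purge could do
under the hood, against Nova's shadow_* tables (where archived deleted rows
end up); the table name and the 90-day window are illustrative only, and this
is not an implemented command:

-- Hypothetical purge: drop archived rows older than the operator-chosen window.
DELETE FROM shadow_instances
 WHERE deleted_at < NOW() - INTERVAL 90 DAY;
-- Repeat for the other shadow_* tables.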

Think about data destruction guidelines too: some companies require old data to
be destroyed when it is no longer needed, while others require retaining it.
We can easily support both here.

I've drafted a simple blueprint:
https://blueprints.launchpad.net/nova/+spec/database-purge

I'd love some input from the community.




Re: [openstack-dev] anyone using RabbitMQ with active/active mirrored queues?

2014-09-11 Thread Abel Lopez
Yes, and I'm not sure why the HA guide says that.
The only problems I've run into were around cluster upgrades. If you're running
3.2+ you'll likely have a better experience.

Set ha_queues in all your configs and list all your rabbit hosts (I don't use
a VIP, as heartbeats weren't working when I did this).
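
For reference, a minimal sketch of what that looks like with the Juno-era
option names (the hostnames are placeholders, and this goes into each
service's config file, e.g. nova.conf, cinder.conf, neutron.conf):

# Point oslo.messaging at every broker in the cluster rather than a VIP,
# and declare the queues as HA/mirrored.
[DEFAULT]
rabbit_hosts = rabbit1:5672,rabbit2:5672,rabbit3:5672
rabbit_ha_queues = True

On the RabbitMQ 3.x side, mirroring itself is enabled with a policy, e.g.
rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode":"all"}'.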

On Wednesday, September 10, 2014, Chris Friesen chris.frie...@windriver.com
wrote:

 Hi,

 I see that the OpenStack high availability guide is still recommending the
 active/standby method of configuring RabbitMQ.

 Has anyone tried using active/active with mirrored queues as recommended
 by the RabbitMQ developers?  If so, what problems did you run into?

 Thanks,
 Chris

