Re: [Openstack] [openstack] openstack setups at Universities

2018-10-15 Thread Chris Dent

On Wed, 10 Oct 2018, Jay See wrote:


Hi everyone,

Maybe a different question, not completely related to issues associated
with OpenStack.

Does anyone know of any university or universities using OpenStack for
cloud deployment and resource sharing?


Jetstream is OpenStack-based and put together by a consortium of
universities: https://jetstream-cloud.org/


--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Use of CUSTOM Traits

2017-10-10 Thread Chris Dent

On Tue, 10 Oct 2017, Ramu, MohanX wrote:


Please let me know: what is the main purpose of custom traits? Can we
use them for assigning some value to traits? If yes, how?


You can't assign a value to a trait. The trait is the value. You're
assigning the trait to a resource provider. If it is a custom trait,
you may need to create it first.

You can think of a trait like a tag: it describes a qualitative
aspect of the resource provider it is associated with. So while a
disk can have 2000 GB, it either is or is not an SSD.

If you haven't had a chance to read
http://specs.openstack.org/openstack/nova-specs/specs/pike/implemented/resource-provider-traits.html
that may be of some help.

Also: https://developer.openstack.org/api-ref/placement/#traits

So, with custom traits, the idea is that there is some qualitative
"trait" that is specific ("custom") to your environment. One of the
ways this has been discussed as being useful is associating a NIC
with a physical network: CUSTOM_PHYSNET1
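
To make that concrete, here's a rough sketch (untested; the endpoint,
token, and provider uuid are placeholders) of creating a custom trait
and associating it with a resource provider via the placement REST
API. Note that traits need placement microversion 1.6 or later:

    import requests

    PLACEMENT = 'http://controller:8778'  # placeholder endpoint
    RP_UUID = 'RESOURCE_PROVIDER_UUID'    # placeholder provider uuid
    HEADERS = {
        'x-auth-token': 'ADMIN_TOKEN',    # placeholder token
        'OpenStack-API-Version': 'placement 1.6',
    }

    # Create the custom trait. PUT is idempotent: 201 on first
    # creation, 204 if the trait already exists.
    requests.put(PLACEMENT + '/traits/CUSTOM_PHYSNET1', headers=HEADERS)

    # Fetch the provider's current traits and generation; the
    # generation guards against concurrent updates.
    url = PLACEMENT + '/resource_providers/%s/traits' % RP_UUID
    current = requests.get(url, headers=HEADERS).json()

    # Associate the trait (plus any existing ones) with the provider.
    requests.put(url, headers=HEADERS, json={
        'resource_provider_generation':
            current['resource_provider_generation'],
        'traits': current['traits'] + ['CUSTOM_PHYSNET1'],
    })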

For more on that see this spec that is under consideration:

https://review.openstack.org/#/c/510244/

It plans to make extensive use of custom traits.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [openstack-dev] questions about Openstack Ocata replacement API

2017-03-19 Thread Chris Dent

On Sun, 19 Mar 2017, zhihao wang wrote:


I am trying the new version of OpenStack, Ocata, on Ubuntu 16.04.

But I have some problems with the nova placement API, and there is
nothing about it in the OpenStack Ubuntu installation docs.


The docs for that are being updated to include the necessary
information. If you look at https://review.openstack.org/#/c/438328/
you'll see the new information there. A rendered version will be at
http://docs-draft.openstack.org/28/438328/12/check/gate-openstack-manuals-tox-doc-publish-checkbuild/846ac33//publish-docs/draft/install-guide-ubuntu/nova.html


I already have the endpoint, but it always says there is no placement
API endpoint; please see below.


As one of the responses on ask.openstack says, depending on which
version of the packages you have, you may need to add to your apache2
config something like:

    <Directory /usr/bin>
        Require all granted
    </Directory>

I believe this has been resolved in newer versions of the packaging.


--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Ceilometer][Architecture] Transformers in Kilo vs Liberty (and Mitaka)

2016-04-12 Thread Chris Dent


This discussion needs to be happening on openstack-dev too, so
cc'ing that list in as well. The top of the thread is at
http://lists.openstack.org/pipermail/openstack/2016-April/015864.html

On Tue, 12 Apr 2016, Chris Dent wrote:


On Tue, 12 Apr 2016, Nadya Shakhat wrote:


   I'd like to discuss one question with you. Perhaps you remember that
in Liberty we decided to get rid of transformers on polling agents [1]. I'd
like to describe several issues we are facing now because of this decision.
1. pipeline.yaml inconsistency.


The original idea behind centralizing the transformers in just the
notification agents was to enable a few different things, only one of
which has happened:

* Make the pollster code less complex with fewer dependencies,
 easing deployment options (this has happened), maintenance,
 and custom pollsters.

 With the transformers in the pollsters they must maintain a
 considerable amount of state that makes effective use of eventlet
 (or whatever the chosen concurrency solution is) more difficult.

 The ideal pollster is just something that spits out a dumb piece
 of identified data every interval. And nothing else.

* Make it far easier to use and conceptualize the use of pollsters
 outside of the ceilometer environment as simple data collectors.
 In that context transformation would occur only close to the data
 consumption not at the data production.

 This, following the good practice of services doing one thing
 well.

* Migrate away from the pipeline.yaml that conflated sources and
 sinks to a model that is good both for computers and humans (a
 sketch of the current conflated format is just below):

 * sources over here
 * sinks over here
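
For reference, a trimmed sketch of that conflated format, based on
the stock cpu entry in pipeline.yaml (details vary by release):

    sources:
        - name: cpu_source
          interval: 600
          meters:
              - "cpu"
          sinks:
              - cpu_sink
    sinks:
        - name: cpu_sink
          transformers:
              - name: "rate_of_change"
                parameters:
                    target:
                        name: "cpu_util"
                        unit: "%"
                        type: "gauge"
                        scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"
          publishers:
              - notifier://

Note how interval lives on the source even though it only matters
for polling; that's part of the inconsistency being discussed.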

That these other things haven't happened means we're in an awkward
situation.

Are the options the following?

* Do what you suggest and pull transformers back into the pollsters.
 Basically revert the change. I think this is the wrong long term
 solution but might be the best option if there's nobody to do the
 other options.

* Implement a pollster.yaml for use by the pollsters and consider
 pipeline.yaml as the canonical file for the notification agents, as
 that's where the actual _pipelines_ are. Somewhere in there, kill
 interval as a concept on the pipeline side.

 This of course doesn't address the messaging complexity. I admit
 that I don't understand all the issues there but it often feels
 like we are doing that aspect of things completely wrong, so I
 would hope that before we change things there we consider all the
 options.

What else?

One probably crazy idea: what about figuring out the desired end-meters
of common transformations and making them into dedicated pollsters?
That is, encapsulating the transformation not at the level of the
polling manager but in the individual pollster.




--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] What is the best approach to implement a state-aware monitoring for OpenStack?

2016-03-10 Thread Chris Dent

On Thu, 10 Mar 2016, Jorge Cardoso (Cloud Operations and Analytics, IT R&D 
Division) wrote:


Would it make sense to change nova/api/openstack/wsgi.py (and other
APIs) to track REST calls by sending notification messages to
RabbitMQ when a REST request is received?


If sending a notification message is what you want, then rather than
changing project code it would be better to add WSGI middleware (via
paste.ini or other configuration), or to do it in a load balancer or
proxy that fronts the services.
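
As a rough sketch of the middleware idea, assuming oslo.messaging is
used for the notifications (the class, event type, and publisher id
here are illustrative, not anything nova provides):

    import time

    from oslo_config import cfg
    import oslo_messaging


    class RequestNotifier(object):
        """WSGI middleware that emits a notification per REST request."""

        def __init__(self, app):
            self.app = app
            transport = oslo_messaging.get_notification_transport(cfg.CONF)
            self.notifier = oslo_messaging.Notifier(
                transport, publisher_id='api-monitor', driver='messagingv2')

        def __call__(self, environ, start_response):
            self.notifier.info(
                {},              # empty request context
                'http.request',  # illustrative event type
                {'method': environ['REQUEST_METHOD'],
                 'path': environ.get('PATH_INFO', ''),
                 'timestamp': time.time()})
            return self.app(environ, start_response)


    def filter_factory(global_conf, **local_conf):
        """Paste filter factory, so the middleware can be added to a
        pipeline in a paste.ini without touching project code."""
        def _filter(app):
            return RequestNotifier(app)
        return _filter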

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Adding New Ceilometer Meters

2015-06-08 Thread Chris Dent

On Sun, 7 Jun 2015, Girija Sharan wrote:


This explains the entire design of how it should be, and it requires a
lot of work.
But why can't ceilometer itself have pollsters for collecting physical
compute node details, just like it does for instances?


One option for polling information from the physical compute nodes
is to use snmp. I was a bit confused about how this worked myself, so
I wrote something up a while ago:

https://tank.peermore.com/tanks/cdent-rhat/SnmpWithCeilometer

Does this help you at all?


1). I need to develop a monitoring tool for my infrastructure, which
will show me the live CPU usage, memory usage, disk usage, and network
traffic of instances launched on compute nodes as well as of the
physical compute nodes.


The snmp polling ought to get you the physical nodes, and the
compute-agent polling (which uses libvirt or another virt inspector)
will get you the instance information.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Ceilometer] v2 pecan api with uwsgi

2015-06-04 Thread Chris Dent

On Mon, 4 May 2015, Sam Morrison wrote:


We have upgraded our ceilometer setup to Juno and we can no longer get
it working under uwsgi.

The reason it doesn't work is that it looks like the v2 pecan API
doesn't support auth and so there is no keystone middleware.

Has anyone got ceilometer working behind uwsgi with keystone support?


As promised, I've finally gotten around to testing this and doing
some poking around. The short answer is that I think you might have
a bad /etc/ceilometer/api_paste.ini, related to this change:

   https://review.openstack.org/#/c/102353/

Another option is that your api_paste.ini isn't getting read for
some reason.

The longer answer is "works for me", but I'm using master. Here's how
I tested it:

Made a devstack with ceilometer-api running in the usual fashion and
tested it as follows (checkceilo.yaml is attached, gabbi is here[1]):

$ . openrc admin admin
$ TOKEN=$(openstack token issue -c id -f value)
$ # confirm auth behaves as expected
$ cat /tmp/checkceilo.yaml |sed -e "s/\$TOKEN/${TOKEN}/g" | \
gabbi-run localhost:8777
$ # kill ceilometer-api in whatever fashion \
(depends on how you started it)
$ # start uwsgi standalone of the ceilo wsgi app
$ uwsgi --http-socket :8777 --plugin python \
--wsgi-file /var/www/ceilometer/app
$ # confirm again
$ cat /tmp/checkceilo.yaml |sed -e "s/\$TOKEN/${TOKEN}/g" | \
gabbi-run localhost:8777

For me that all works as expected. If your uwsgi setup is changing
users or working dirs (if, for example, you are using emperor), I
wouldn't be surprised if it is losing track of where the ceilometer
config is. However, that ought to cause an error, so an out-of-date
paste file seems more likely.

If none of that gets it, then let me know more about your setup so I
can duplicate it more correctly, because the configuration you're
using ought to work (and is definitely the best choice if you have
the option, in my experience), and if it doesn't we should fix it.

[1] http://gabbi.readthedocs.org/en/latest/
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent
tests:

- name: no auth
  url: /v2/meters
  status: 401

- name: bad auth
  url: /v2/meters
  request_headers:
      x-auth-token: 0123456789abcde0123456789abcde01
  status: 401

- name: good auth
  url: /v2/meters
  request_headers:
      x-auth-token: $TOKEN
  response_headers:
      content-type: /application/json/
  status: 200
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] 答复: 答复: Ceilometer high availability in active-active

2015-03-18 Thread Chris Dent

On Wed, 18 Mar 2015, Pan, Fengyun wrote:


2015-03-18 18:48:05.948 16236 TRACE ceilometer.coordination 
ToozConnectionError: Error 113 connecting to 193.168.196.246:6379. EHOSTUNREACH.


This suggests that there is either no route between your controller
and compute node, or there is a firewall (probably on the compute
node) that doesn't allow access to port 6379 from remote hosts.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] 答复: 答复: Ceilometer high availability in active-active

2015-03-12 Thread Chris Dent

On Thu, 12 Mar 2015, Pan, Fengyun wrote:


When I use "redis://localhost:6379", find other problem as follow:



2015-03-12 18:19:19.515 20863 TRACE ceilometer.openstack.common.threadgroup 
ImportError: No module named concurrent


That ImportError suggests concurrent.futures is not available; on
Python 2 it is provided by the "futures" package, so that is likely
what's missing. Make sure you have installed the latest versions of
tooz, the redis python client, and the other relevant python packages.

If it still doesn't work after that, you'll need to share more
information about how you are doing your install and your setup.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] 答复: Ceilometer high availability in active-active

2015-03-11 Thread Chris Dent

On Wed, 11 Mar 2015, Pan, Fengyun wrote:


We know that the option is defined as:

    cfg.StrOpt('backend_url',
               default=None,
               help='The backend URL to use for distributed coordination. If '
                    'left empty, per-deployment central agent and per-host '
                    'compute agent won\'t do workload '
                    'partitioning and will only function correctly if a '
                    'single instance of that service is running.'),

But how do we set 'backend_url'?


This appears to be an oversight in the documentation. The main
starting point is here:

   
http://docs.openstack.org/admin-guide-cloud/content/section_telemetry-cetral-compute-agent-ha.html

but neither that page nor anything it links to actually says what
should go in the value of the setting. It's entirely dependent on the
backend being used and how that backend is being configured. Each of
the tooz drivers has some information on some of the options, but
again, it is not fully documented yet.

For reference, what I use in my own testing is redis as follows:

   redis://localhost:6379
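
In ceilometer.conf that goes in the coordination section, i.e.
something like:

    [coordination]
    backend_url = redis://localhost:6379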

This uses a single redis server, so introduces another single point of
failure. It's possible to use sentinel to improve upon this situation:

   http://docs.openstack.org/developer/tooz/developers.html#redis

The other drivers work in similar ways with their own unique
arguments.

I'm sorry I'm not able to point to more complete information but I can
say that it is in the process of being improved.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Ceilometer high availability in active-active

2015-03-10 Thread Chris Dent

On Tue, 10 Mar 2015, Vijaya Bhaskar wrote:


Do we need detailed configuration, like the workload partitioning
(allocation of polling resources to individual agents)? Or do we just
set the backend_url with a tooz setup, and the rest of the things are
taken care of automatically?


The partitioning is automatic. Setting backend_url in configuration
will "turn on" the feature. The backend will need to be set up, but
for many of them it's just a matter of turning the service (redis,
memcached, whatever) on and making it reachable.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Ceilometer high availability in active-active

2015-03-09 Thread Chris Dent

On Sat, 7 Mar 2015, Vijaya Bhaskar wrote:


I know it is possible to have all the ceilometer services in
active-active; however, the ceilometer-agent-central service runs
only in active-passive as far as I have researched. What are the
consequences of running multiple ceilometer-agent-central services
(that is, in active-active)? If there are serious consequences, is
there any way to run it in active-active mode?


I'm not sure if I've quite grasped the gist of your inquiry, but:

Since Juno, multiple instances of the central polling agent can be
run, each polling a partition of the resources that get polled. Each
agent does discovery of resources on each polling cycle and then
polls only some of them, based on the partitioning. The partitioning
is coordinated via tooz[1] through group membership. Depending on the
driver used, tooz itself can be highly available.

The upshot of this is that you can distribute N central agents
across a bunch of machines and they will coordinate so that each
does a subset of the resources. Each agent sends a heartbeat; if an
agent fails to heartbeat, group membership is adjusted and the
remaining agents pick up the slack. When the failed agent rejoins
the group, the grouping is adjusted again.
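
If you want to see the mechanics tooz provides underneath all that,
here's a minimal sketch (the group and member ids are made up, and
error handling is elided):

    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'redis://localhost:6379', b'agent-1')
    coordinator.start()

    # Ensure the group exists, then join it. tooz calls return
    # async results which are resolved with get().
    try:
        coordinator.create_group(b'central-agents').get()
    except coordination.GroupAlreadyExist:
        pass
    coordinator.join_group(b'central-agents').get()

    # Members heartbeat periodically; one that stops heartbeating
    # drops out of the group and the others re-partition the work.
    coordinator.heartbeat()

    members = coordinator.get_members(b'central-agents').get()
    print(sorted(members))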

I've played with this a fair bit and it is quite cool.

The compute-agent and ipmi-agent can do this too, although it makes
a bit less sense there.

In Kilo all three agent types have been merged under an umbrella
agent which supports namespaces: ipmi, central, compute. Each
running agent can support one, some, or all namespaces, each one
using coordinated partitioning.
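
If I'm remembering the Kilo spelling correctly, that umbrella agent
is started with something like:

    $ ceilometer-polling --polling-namespaces central compute

with partitioning enabled via the tooz coordination configuration.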

I hope that's useful.

[1] http://docs.openstack.org/developer/tooz/
--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack