[openstack-dev] [monasca] Anomaly & Prediction Engine

2016-01-22 Thread Osanai, Hisashi

Monasca folks,

We discussed the Anomaly & Prediction Engine in this week's
IRC meeting and decided to exchange information on this list.
I'm really interested in having this functionality, but it is
currently at the prototype stage.

We know there is a lot of related technology in this area, and it
has been evolving rapidly.

Let's start discussing how to approach this. What do you think?

Best Regards,
Hisashi Osanai

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] info in paste will be removed?

2015-08-28 Thread Osanai, Hisashi

Folks,

I would like to know whether content posted to http://paste.openstack.org
is ever removed. If it is, I would also like to know under what conditions.

Thanks in advance,
Hisashi Osanai




Re: [openstack-dev] info in paste will be removed?

2015-08-28 Thread Osanai, Hisashi

On Friday, August 28, 2015 8:49 PM, Jeremy Stanley wrote:

 We (the project infrastructure root sysadmins) don't expire/purge
 the content on paste.openstack.org, though have deleted individual
 pastes on request if someone reports material which is abusive or
 potentially illegal in many jurisdictions.

Thanks for the quick response. This behavior is what I wanted to have :-)

Thanks again!
Hisashi Osanai



Re: [openstack-dev] [cross-project] RBAC Policy Basics

2015-06-23 Thread Osanai, Hisashi

On Tuesday, June 23, 2015 10:30 PM, Adam Young wrote:

 OK, I think I get it;  you want to make a check specific to the roles
 on the service token.  The term Service roles confused me.
 
 You can do this check with oslo.policy today.  Don't use the role
 check, just a generic check.
 It looks for an element in a collection, and returns true if it is
 in there;  see
 
 
 http://git.openstack.org/cgit/openstack/oslo.policy/commit/?id=a08bc
 79f5c117696c43feb2e44c4dc5fd0013deb

Cool! This is what I wanted to have. :-)

Thanks!
Hisashi Osanai



Re: [openstack-dev] [cross-project] RBAC Policy Basics

2015-06-23 Thread Osanai, Hisashi

On Tuesday, June 23, 2015 12:14 AM, Adam Young wrote:

 It is not an issue if you keep each of the policy files completely
 separate, but it means that each service has its own meaning for the
 same name, and that confuses operators;  'owner' in Nova means a user
 that has a role on this project, whereas 'owner' in Keystone means
 objects associated with a specific user.

I understand your thinking comes from usability.

However, it might increase development complexity; I don't think each
component wants to qualify targets with its own component name in
policy.json, since within that file the component is already implied.
Hmm... please disregard that (it may be too implementation-centric a
view). :-)

I want to focus on the following topic:

  My concern now is:
  * Service Tokens was implemented in Juno [1] but now we are not able
to implement it with Oslo policy without extensions so far.
  * I think to implement spec[2] needs more time.
 
  [1] 
  https://github.com/openstack/keystone-specs/blob/master/specs/keystonemiddleware/implemented/service-tokens.rst
  [2] https://review.openstack.org/#/c/133855/
 
  Is there any way to support spec[1] in Oslo policy? Or
  Should I wait for spec[2]?
 
 I'm sorry, I am not sure what you are asking.

I'm sorry; let me explain this again.

(1) Keystone has supported service tokens [1] since the Juno release.
(2) oslo.policy graduated in the Kilo release.
(3) oslo.policy does not yet have the ability to deal with service tokens.
I'm not 100% sure, but in order to support service tokens, oslo.policy
needs to handle 'service_roles' in addition to the 'roles' stored in a
credential.
Current logic:
if a rule starts with 'role:', RoleCheck matches it against 'roles' in
the credential.
code:
https://github.com/openstack/oslo.policy/blob/master/oslo_policy/_checks.py#L249

My current solution is to create a ServiceRoleCheck class that handles
'service_roles' in the credential. This check is used when a rule starts
with 'srole:'.

https://review.openstack.org/#/c/149930/15/swift/common/middleware/keystoneauth.py
L759-L767
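To illustrate, here is a minimal sketch of what such a check might look like, modeled on oslo.policy's RoleCheck. The class shape and credential keys below are my assumptions for illustration; the actual implementation is in the Swift review linked above.

```python
class ServiceRoleCheck:
    """Hypothetical check for rules of the form 'srole:<name>'.

    It matches against the roles carried by the service token
    (X-Service-Roles), i.e. creds['service_roles'], instead of the
    user token's creds['roles'] that RoleCheck uses.
    """

    def __init__(self, kind, match):
        self.kind = kind    # the rule prefix, e.g. 'srole'
        self.match = match  # the role name to look for, e.g. 'service'

    def __call__(self, target, creds, enforcer=None):
        # Case-insensitive membership test, mirroring RoleCheck.
        service_roles = creds.get('service_roles', [])
        return self.match.lower() in (r.lower() for r in service_roles)
```

A rule such as "srole:service" would then pass only when the request carries a service token whose roles include 'service'.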

I think this is better handled in oslo.policy, since it is a common
issue, so I would like to know whether there is a plan to address it
there.

Thanks in advance,
Hisashi Osanai




Re: [openstack-dev] [cross-project] RBAC Policy Basics

2015-06-21 Thread Osanai, Hisashi

On Saturday, June 20, 2015 11:16 AM, Adam Young wrote: 
  What situations require a shared policy file?
  For example, there are policy files for Nova and Cinder and they have
  same targets such as
  context_is_admin, admin_or_owner and default.
 
 A lot of these internal rules most likely should be removed.  They do
 conflict, with different interpretations between the projects.  They are
 also confusing two different things:  scope and role.  I think we
 should make it a point to keep them separate.

I don't understand why you see these as conflicts. They use the same
target names, such as context_is_admin, admin_or_owner, and default, but
they use them in different processes. I may be misunderstanding something,
but to me there is no conflict.

  http://lists.openstack.org/pipermail/openstack-dev/2015-May/063915.html
  - HTTP_X_SERVICE_ROLES handling in _checks.py
 
 I've missed that there was another push for service-specific roles out
 there.  We've been trying to make the concept slightly more general by
 saying that we were going to namespace roles, and that a service would
 be one potential namespacing.  Henry Nash had proposed domain-specific
 roles, in case you were wondering what else would need to be namespaced.
 
 https://review.openstack.org/#/c/133855/

I like your idea of making the concept slightly more general, and it
would be a solution for my issue.

My concerns now are:
* Service tokens were implemented in Juno [1], but so far we are not able
  to use them with oslo.policy without extensions.
* I think implementing spec [2] will need more time.

[1] 
https://github.com/openstack/keystone-specs/blob/master/specs/keystonemiddleware/implemented/service-tokens.rst
[2] https://review.openstack.org/#/c/133855/

Is there any way to support spec [1] in oslo.policy, or should I wait
for spec [2]?

Thanks in advance,
Hisashi Osanai



Re: [openstack-dev] [cross-project] RBAC Policy Basics

2015-06-18 Thread Osanai, Hisashi

Adam,

Thank you for the information RBAC Policy Basics.

Thursday, June 18, 2015 1:47 AM, Adam Young wrote:
 However, we have found a need to have a global override.  This is a way a 
 cloud admin can go into any API anywhere and fix things.
 This means that Glance, Neutron, Nova, and Keystone should be able to share a 
 policy file.

What situations require a shared policy file?

For example, Nova and Cinder each have a policy file, and they define the
same targets, such as context_is_admin, admin_or_owner, and default.

(1) Both policy.json files are loaded in one server process, so the
targets defined first are overridden by the second policy.json loaded;
a cloud admin then changes only the second policy.json.
(2) A cloud admin changes the targets in the different policy.json files
at the same time.

Were you referring to case (2)?

Nova:   https://github.com/openstack/nova/blob/master/etc/nova/policy.json
Cinder: https://github.com/openstack/cinder/blob/master/etc/cinder/policy.json

"context_is_admin": "role:admin",
"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
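To make the semantics of those targets concrete, here is a small pure-Python illustration (this is not oslo.policy code, just a sketch of what the admin_or_owner rule above expresses) evaluated against a credential dict:

```python
def admin_or_owner(target, creds):
    """Illustrate "is_admin:True or project_id:%(project_id)s".

    Passes if the caller is an admin, or if the caller's project_id
    matches the project_id of the target object.
    """
    if creds.get('is_admin') is True:
        return True
    return creds.get('project_id') == target.get('project_id')
```

So a non-admin caller from another project is rejected, while an admin passes regardless of project.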

BTW, I sent the following email to this list. I think I have found the
right person to answer my question. :-)

http://lists.openstack.org/pipermail/openstack-dev/2015-May/063915.html
- HTTP_X_SERVICE_ROLES handling in _checks.py

Thanks in advance,
Hisashi Osanai



[openstack-dev] [oslo.policy] service_roles checks in oslo.policy

2015-05-12 Thread Osanai, Hisashi

Oslo.policy folks,

I have been developing Swift's RBAC using oslo.policy [1]. In this patch
it is necessary to check for service_roles (HTTP_X_SERVICE_ROLES) [2].
The current implementation checks, when a rule string starts with 'role',
whether the string is in 'roles' of the credential.
https://github.com/openstack/oslo.policy/blob/master/oslo_policy/_checks.py#L244

I think service_roles should be carried in the credential the same way as
roles, so I need a new Check class for service_roles.
I was wondering whether you have a plan to extend oslo.policy for
service_roles.

So far I have implemented a ServiceRoleCheck class (keystoneauth.py#L757
in [1]), but it would be better for it to live in oslo.policy.

[1] https://review.openstack.org/#/c/149930/
[2] 
https://github.com/openstack/keystone-specs/blob/master/specs/keystonemiddleware/implemented/service-tokens.rst

Thanks in advance,
Hisashi Osanai



Re: [openstack-dev] [oslo.policy] guraduation status

2015-03-04 Thread Osanai, Hisashi

Doug,

Thank you for the response, and sorry for the late reply.
Recently I have not been receiving some e-mails from this list, and your
e-mail was one of them. I don't know the reason, but I found your response
in the archive.

On Mon, 02 Mar 2015 12:28:06 -0800, Doug Hellmann wrote:
 We're making good progress and expect to have a public release with a
 stable API fairly soon.

Good information! I'm looking forward to using it.

Thanks again!
Hisashi Osanai



Re: [openstack-dev] [swift] a way of checking replicate completion on swift cluster

2014-12-04 Thread Osanai, Hisashi

Thank you for the response.
I updated the following patch with the idea.

https://review.openstack.org/#/c/138342/

On Friday, December 05, 2014 5:50 AM, Clay Gerrard wrote:
 more fidelity in the recon's seems fine, statsd emissions are 
 also a popular target for telemetry radiation.

Thanks again!
Hisashi Osanai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] a way of checking replicate completion on swift cluster

2014-11-27 Thread Osanai, Hisashi

Hi,

I think it is a good idea to have the object-replicator's failure info
in recon, like the other replicators.

I think the following info can be added for the object-replicator, in
addition to object_replication_last and object_replication_time.

If there is no technical reason not to add them, I can implement this.
What do you think?

{
    "replication_last": 1416334368.60865,
    "replication_stats": {
        "attempted": 13346,
        "empty": 0,
        "failure": 870,
        "failure_nodes": {"192.168.0.1": 3,
                          "192.168.0.2": 860,
                          "192.168.0.3": 7},
        "hashmatch": 0,
        "remove": 0,
        "start": 1416354240.9761429,
        "success": 1908,
        "ts_repl": 0
    },
    "replication_time": 2316.5563162644703,
    "object_replication_last": 1416334368.60865,
    "object_replication_time": 2316.5563162644703
}
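With per-node failure counts exposed like this, an operator could spot unhealthy nodes directly from the recon output. A sketch (the field names are taken from the JSON above; the helper function itself is hypothetical):

```python
import json

# Trimmed sample of the proposed recon output shown above.
RECON = """{
    "replication_stats": {
        "attempted": 13346,
        "failure": 870,
        "failure_nodes": {"192.168.0.1": 3,
                          "192.168.0.2": 860,
                          "192.168.0.3": 7},
        "success": 1908
    }
}"""


def noisy_failure_nodes(recon_json, threshold=100):
    """Return nodes whose replication failure count exceeds threshold."""
    stats = json.loads(recon_json)["replication_stats"]
    return sorted(node
                  for node, failures in stats["failure_nodes"].items()
                  if failures > threshold)
```

With the sample data above, only 192.168.0.2 (860 failures) crosses the default threshold.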

Cheers,
Hisashi Osanai

On Tuesday, November 25, 2014 4:37 PM, Matsuda, Kenichiro 
[mailto:matsuda_keni...@jp.fujitsu.com] wrote:
 I understood that the logs are necessary to judge whether there were no
 failures in the object-replicator.
 I also thought that recon info for the object-replicator including failure
 (just like the recon info of the account-replicator and container-replicator)
 would be useful.
 Is there any reason failure is not included in recon?

On Tuesday, November 25, 2014 5:53 AM, Clay Gerrard 
[mailto:clay.gerr...@gmail.com] wrote:
  replication logs

On Friday, November 21, 2014 4:22 AM, Clay Gerrard 
[mailto:clay.gerr...@gmail.com] wrote:
 You might check if the swift-recon tool has the data you're looking for.  It 
 can report 
 the last completed replication pass time across nodes in the ring.




Re: [openstack-dev] [swift] Allow hostname for nodes in Ring

2014-10-15 Thread Osanai, Hisashi

Thanks for your advice.

On Thursday, October 16, 2014 2:25 AM, Pete Zaitcev wrote:
 I don't know if the bug report is all that necessary or useful.
 The scope of the problem is well defined without, IMHO.

I would really like clear rules for this, but your approach seems like a
good balance to me, so I will follow it.

Thanks again!
Hisashi Osanai




Re: [openstack-dev] [swift] Allow hostname for nodes in Ring

2014-10-14 Thread Osanai, Hisashi

Swift folks,

Could you please advise me about the following email?

Thanks in advance,
Hisashi Osanai

 -Original Message-
 From: Osanai, Hisashi [mailto:osanai.hisa...@jp.fujitsu.com]
 Sent: Friday, October 10, 2014 1:57 PM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [swift] Allow hostname for nodes in Ring
 
 
 Hi Swift folks,
 
 Today the following patch was abandoned and I contacted the author,
 so I would like to take it over if nobody else is eager to take it.
 Is it OK?
 
 https://review.openstack.org/#/c/80421/
 
 If it is OK, I will proceed with the following procedure.
 (1) Open a new bug report (there is no bug report for this).
 I'm not sure whether I should write a BP instead of a bug report.
 (2) Make a patch based on the current patch on Gerrit.
 
 Cheers,
 Hisashi Osanai
 
 



Re: [openstack-dev] [swift] Allow hostname for nodes in Ring

2014-10-14 Thread Osanai, Hisashi

Hi Matthew,

Thanks for the quick response.

On Wednesday, October 15, 2014 2:31 PM, Matthew Oliver wrote:
  - Continue where this one left off, in which case pull down the change
 from gerrit and start working on it. But if you do this, make sure you add
 a 'Co-Authored-By: name n...@example.com' line to attribute the work that
 was already done by the original author*.
 And start working on it in a new change.

I discussed this with the original author, and we chose the option above.

Thanks again!
Hisashi Osanai



[openstack-dev] [swift] Allow hostname for nodes in Ring

2014-10-09 Thread Osanai, Hisashi

Hi Swift folks,

Today the following patch was abandoned and I contacted the author,
so I would like to take it over if nobody else is eager to take it.
Is it OK?

https://review.openstack.org/#/c/80421/

If it is OK, I will proceed with the following procedure.
(1) Open a new bug report (there is no bug report for this).
I'm not sure whether I should write a BP instead of a bug report.
(2) Make a patch based on the current patch on Gerrit.

Cheers,
Hisashi Osanai




[openstack-dev] [doc][swift] improvement for selinux related procedure

2014-10-06 Thread Osanai, Hisashi

Hi, 

I think that the document OpenStack Installation Guide for Red Hat
Enterprise Linux, ... is written assuming SELinux is enabled, because the
openstack-selinux package is installed as part of the following
procedure.

http://docs.openstack.org/icehouse/install-guide/install/yum/content/basics-packages.html
  + OpenStack packages

However, in the following document the mount procedure for /srv/node/sdb1
does not specify the context information
(context=system_u:object_r:swift_data_t:s0).
http://docs.openstack.org/icehouse/install-guide/install/yum/content/installing-and-configuring-storage-nodes.html

I think it would be better to add the context information to the document.
What do you think?
If you need a bug report for this, please let me know.
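For example, the storage partition's /etc/fstab entry could carry the context option. This is a sketch only: the device name and the other mount options here are assumptions based on the install guide, not the guide's actual text.

```
/dev/sdb1  /srv/node/sdb1  xfs  noatime,nodiratime,logbufs=8,context=system_u:object_r:swift_data_t:s0  0  0
```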

Best Regards,
Hisashi Osanai




[openstack-dev] [all] minimum python support version for juno

2014-10-01 Thread Osanai, Hisashi

Hi,

I would like to know the minimum supported Python version for Juno.
I checked the following memo. My understanding is that Python 2.6 will be
supported in Juno and dropped before Kilo, so support will be dropped in
one of the Juno stable releases. Is this understanding correct?

https://etherpad.openstack.org/p/juno-cross-project-future-of-python

- Want to drop 2.6 ASAP; currently blocked on SLES confirmation that 2.6
  is no longer needed
- Declare intent that it will definitely go away by K (for services)
- Make sure that every *python module* (dependencies, and not only core
  projects) that we maintain declares non-support of 2.6 if it stops
  supporting it

Cheers,
Hisashi Osanai



Re: [openstack-dev] [all] minimum python support version for juno

2014-10-01 Thread Osanai, Hisashi

Thank you for the quick responses.

 On 10/1/2014 4:24 AM, Ihar Hrachyshka wrote:
  All stable Juno releases will support Python 2.6. All Kilo releases
  are expected to drop Python 2.6 support.

On Wednesday, October 01, 2014 11:28 PM, Matt Riedemann wrote:
 Right, and backports could be interesting...but we have to move on at
 some point.

Yeah, my concern is what happens if we run into a Python 2.6 bug while we
are still using it.

But I understand our direction.

Thanks again!
Hisashi Osanai




[openstack-dev] [ceilometer] step ahead regarding swift middleware related topic

2014-09-25 Thread Osanai, Hisashi

Hi Ceilometer Folks,

I would like to step ahead regarding the following two topic.

(1) Backporting an important fix to Icehouse
I think that this fix is really important and works correctly.
Could you please review and approve it?
https://review.openstack.org/#/c/112806/

(2) Repackaging the ceilometer and ceilometerclient packages
I wrote this BP and I'm ready to start on it. Could you please
review it?
https://review.openstack.org/#/c/117745/

I registered this BP under specs/juno, but it should be changed
to kilo.

Thanks in advance,
Hisashi Osanai



Re: [openstack-dev] [ceilometer] repackage ceilometer and ceilometerclient

2014-08-22 Thread Osanai, Hisashi

On Friday, August 22, 2014 2:55 PM, Dean Troyer wrote:
 As one data point, the keystone middleware (auth_token) was just recently 
 moved out of keystoneclient 
 and into its own repo, partially because it had dependencies that otherwise 
 were not required for 
 pure client installations. 

Thank you for this info. I understand that pure client installations are
required for future deployments, so I need to take that into account in
the spec.
(https://github.com/openstack/keystonemiddleware)

 I don't know what your middleware dependencies are, but I think it would be 
 good to consider the 
 effect that move would have on client-only installations.

We are talking about the swift middleware (swift_middleware), which is
used only by the Swift proxy, so it would be better for it to have its own
repo.

Cheers,
Hisashi Osanai


Re: [openstack-dev] [ceilometer] repackage ceilometer and ceilometerclient

2014-08-22 Thread Osanai, Hisashi

On Friday, August 22, 2014 4:15 PM, Nejc Saje wrote:
 The modules you are talking about are part of Ceilometer's core
 functionality, we can't move them to a completely separate code-tree
 that is meant only for client functionality.

Thank you for the explanation! I now understand your point about the real problem.

 Besides the conceptual difference, python-ceilometerclient is not
 tightly coupled with Ceilometer and has its own release schedule among
 other things.

I checked requirements.txt in the ceilometer package and saw the line for
python-ceilometerclient, so we may be able to control the version of
ceilometerclient when ceilometer is released.

Cheers,
Hisashi Osanai



Re: [openstack-dev] [ceilometer] repackage ceilometer and ceilometerclient

2014-08-21 Thread Osanai, Hisashi

Thank you for your quick response.

On Thursday, August 21, 2014 3:12 PM, Nejc Saje wrote:
 I don't think there's any way the modules you mention in the BP can be
 moved into ceilometerclient. I think the best approach to resolve this
 would be to rewrite swift middleware to use oslo.messaging
 notifications, as discussed here:
 http://lists.openstack.org/pipermail/openstack-dev/2014-July/041628.
 html

I understand your point that this would resolve most of the unnecessary
dependencies. I would like to confirm whether the dependencies on context
and timeutils would remain after the rewrite.
Does the rewrite include removing those dependencies?

=== copy from the BP ===
- swift_middleware.py
61 from ceilometer.openstack.common import context
62 from ceilometer.openstack.common import timeutils
63 from ceilometer import pipeline
64 from ceilometer import sample
65 from ceilometer import service

On the other hand, I'm really interested in the mail thread you pointed out:D
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg30880.html

Best Regards,
Hisashi Osanai




Re: [openstack-dev] [ceilometer] repackage ceilometer and ceilometerclient

2014-08-21 Thread Osanai, Hisashi

Hi, 

The main purpose of the BP is to move swift_middleware.py from the
ceilometer package to the ceilometerclient package.

In order to achieve this move, we need to resolve the dependencies that
swift_middleware.py has.

We have the following two ideas for removing the dependencies:
(1) Rewrite swift_middleware with oslo.messaging.
http://lists.openstack.org/pipermail/openstack-dev/2014-July/041628.html
(2) Move the modules that carry the dependencies to ceilometerclient.
I wrote this idea in the BP, and you pointed out that this approach is not
possible.

I would like to realize moving swift_middleware.py from the ceilometer
package to the ceilometerclient package. It is very difficult for me to
convince users to install the ceilometer package on proxy nodes just to
use the swift middleware, because of the maintenance cost: operators must
check security patches for every package installed on proxy nodes, even
for packages that are not used there.

I think both ideas for removing the dependencies achieve this purpose,
and I understand your preference follows the direction the ceilometer
spec is taking. Earlier I asked you the following minor question:

On Thursday, August 21, 2014 3:59 PM, Osanai, Hisashi wrote:
 I understand your point that solve almost unnecessary dependencies. I would 
 like
 to make sure that remained the dependencies of context and timeutils after 
 rewriting.
 Does the rewriting include removing the dependencies?

On Thursday, August 21, 2014 3:12 PM, Nejc Saje wrote:
  I don't think there's any way the modules you mention in the BP can be
  moved into ceilometerclient.

But I'm not sure what the real problem with moving the modules is. My
understanding is:
- the ceilometer package already depends on ceilometerclient, so it is
  easy to move them;
- all callers of the moved modules must change their import paths.

If the above approach works, we can proceed with this BP and with
rewriting swift_middleware with oslo.messaging separately.

I will take a fairly strong stand on moving swift_middleware.py from the
ceilometer package to the ceilometerclient package, but on how to remove
the dependencies I will take a middle-of-the-road position. :)

Best Regards,
Hisashi Osanai




Re: [openstack-dev] [ceilometer] repackage ceilometer and ceilometerclient

2014-08-21 Thread Osanai, Hisashi

On Friday, August 22, 2014 1:14 PM, Gordon chung wrote:
 could you create a spec[1] and we can maybe hash out idea there.
 
 [1]https://github.com/openstack/ceilometer-specs

Thank you for your response.
I will create a spec for this. 

Thank you very much!
Hisashi Osanai



[openstack-dev] [ceilometer] repackage ceilometer and ceilometerclient

2014-08-20 Thread Osanai, Hisashi

Folks,

I wrote the following BP regarding repackaging ceilometer and ceilometerclient.

https://blueprints.launchpad.net/ceilometer/+spec/repackaging-ceilometerclient

I need to install the ceilometer package whenever the swift_middleware
middleware is used, and the ceilometer package has dependencies on the
following:

- requirements.txt in the ceilometer package
...
python-ceilometerclient>=1.0.6
python-glanceclient>=0.13.1
python-keystoneclient>=0.9.0
python-neutronclient>=2.3.5,<3
python-novaclient>=2.17.0
python-swiftclient>=2.0.2
...

From a maintenance point of view, these dependencies are undesirable.
What do you think?

# To fix this we need to touch several repos, so I wrote a BP instead of
a bug report.

Best Regards,
Hisashi Osanai





Re: [openstack-dev] backport fixes to old branches

2014-08-17 Thread Osanai, Hisashi

On Friday, August 15, 2014 8:48 PM, Ihar Hrachyshka wrote:
 There was an issue with jenkins running py33 checks for stable
 ceilometer branches, which is wrong. Should be fixed now.

Thank you for your response.
I couldn't solve this by myself but Dina Belova and Julien Danjou 
solved this issue with:
https://review.openstack.org/#/c/113842/

Best Regards,
Hisashi Osanai



Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Osanai, Hisashi

On Wednesday, August 13, 2014 5:03 PM, Julien Danjou wrote:
 This is not a problem in tox.ini, this is a problem in the
 infrastructure config. Removing py33 from the envlist in tox.ini isn't
 going to fix anything, unfortunately.

Thank you for your quick response.

I may be misunderstanding this topic. Let me clarify...
My understanding is:
- the py33 job failed because happybase 0.8 cannot work in a Python 3.3
  environment (its execfile calls do not work on Python 3.3);
- happybase is NOT an OpenStack component;
- the py33 job does not need to run on stable/icehouse.

One idea to solve this problem:
if the py33 job does not need to run on stable/icehouse, just remove py33.

 This is not a problem in tox.ini,
This would mean the py33 job does need to run on stable/icehouse, so I am
misunderstanding something...

 this is a problem in the infrastructure config.
This would mean the execfile calls in happybase on Python 3.3 are the
problem. If my understanding is correct, I agree with you, and I think
this is the direct cause of the problem.

Your idea to solve this is to create a patch for the direct cause, right?

Thanks in advance,
Hisashi Osanai



Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-13 Thread Osanai, Hisashi

On Wed, Aug 13, 2014 at 2:35 PM, Julien Danjou  wrote:
 Means the py33 needs to execute on stable/icehouse. Here I misunderstand 
 something...
 No, it does not; that line in tox.ini is not used by the gate.

 this is a problem in the infrastructure config.
 Means execfile function calls on python33 in happybase is a problem. If my 
 understanding
 is correct, I agree with you and I think this is the direct cause of this 
 problem.

 Your idea to solve this is creating a patch for the direct cause, right?
 My idea to solve this is to create a patch on
 http://git.openstack.org/cgit/openstack-infra/config/
 to exclude py33 on the stable/icehouse branch of Ceilometer in the gate.

Sorry for taking your time with another round of explanation, and thank
you for it. I'm glad I now have a clear understanding of your thinking.

On Wednesday, August 13, 2014 7:54 PM, Dina Belova wrote:
 Here it is: https://review.openstack.org/#/c/113842/
Thank you for providing the fix. I'm surprised at how quickly it was
done. That was really fast...

Thanks again!
Hisashi Osanai


[openstack-dev] [ceilometer] tox -epy26 failed because of insufficient test environment

2014-08-12 Thread Osanai, Hisashi

Hi,

I got an error message when Jenkins executed tox -epy26 in the following fix.
https://review.openstack.org/#/c/112771/

I think the reason for the error is that mongod is not installed in the
test environment. (It works in my own test environment.)

Do you have any idea to solve this?

- setup-test-env.sh
export PATH=${PATH:+$PATH:}/sbin:/usr/sbin
if ! which mongod >/dev/null 2>&1
then
    echo "Could not find mongod command" 1>&2
    exit 1
fi

- console.log
2014-08-12 07:25:03.329 | + tox -epy26
2014-08-12 07:25:03.542 | py26 create: 
/home/jenkins/workspace/gate-ceilometer-python26/.tox/py26
2014-08-12 07:25:05.255 | py26 installdeps: 
-r/home/jenkins/workspace/gate-ceilometer-python26/requirements.txt, 
-r/home/jenkins/workspace/gate-ceilometer-python26/test-requirements.txt
2014-08-12 07:28:01.581 | py26 develop-inst: 
/home/jenkins/workspace/gate-ceilometer-python26
2014-08-12 07:28:07.861 | py26 runtests: commands[0] | bash -x 
/home/jenkins/workspace/gate-ceilometer-python26/setup-test-env.sh python 
setup.py testr --slowest --testr-args=
2014-08-12 07:28:07.864 | + set -e
2014-08-12 07:28:07.865 | ++ mktemp -d CEILO-MONGODB-X
2014-08-12 07:28:07.866 | + MONGO_DATA=CEILO-MONGODB-t6f5p
2014-08-12 07:28:07.866 | + MONGO_PORT=29000
2014-08-12 07:28:07.866 | + trap clean_exit EXIT
2014-08-12 07:28:07.867 | + mkfifo CEILO-MONGODB-t6f5p/out
2014-08-12 07:28:07.868 | + export 
PATH=/home/jenkins/workspace/gate-ceilometer-python26/.tox/py26/bin:/usr/local/bin:/bin:/usr/bin:/sbin:/usr/sbin
2014-08-12 07:28:07.869 | + 
PATH=/home/jenkins/workspace/gate-ceilometer-python26/.tox/py26/bin:/usr/local/bin:/bin:/usr/bin:/sbin:/usr/sbin
2014-08-12 07:28:07.869 | + which mongod
2014-08-12 07:28:07.870 | + echo 'Could not find mongod command'
2014-08-12 07:28:07.870 | Could not find mongod command
2014-08-12 07:28:07.871 | + exit 1
2014-08-12 07:28:07.871 | + clean_exit
2014-08-12 07:28:07.872 | + local error_code=1
2014-08-12 07:28:07.872 | + rm -rf CEILO-MONGODB-t6f5p
2014-08-12 07:28:07.873 | ++ jobs -p
2014-08-12 07:28:07.873 | + kill
2014-08-12 07:28:07.874 | kill: usage: kill [-s sigspec | -n signum | -sigspec] 
pid | jobspec ... or kill -l [sigspec]
2014-08-12 07:28:07.875 | ERROR: InvocationError: '/bin/bash -x 
/home/jenkins/workspace/gate-ceilometer-python26/setup-test-env.sh python 
setup.py testr --slowest --testr-args='
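As a side note, the `kill: usage` noise at the end of this log comes from the cleanup trap calling `kill` with an empty argument list when `jobs -p` returns nothing. A hedged sketch of a more defensive `clean_exit` (`MONGO_DATA` is the variable from setup-test-env.sh; `return` is used here instead of `exit` so the function can be exercised outside a trap):

```shell
# Defensive cleanup: only invoke kill when there are background jobs,
# so "kill" is never called with an empty argument list.
clean_exit () {
    local error_code="$?"
    rm -rf "${MONGO_DATA}"
    local pids
    pids="$(jobs -p)"
    if [ -n "$pids" ]; then
        kill $pids
    fi
    return "$error_code"
}
```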

Best Regards,
Hisashi Osanai




Re: [openstack-dev] [ceilometer] tox -epy26 failed because of insufficient test environment

2014-08-12 Thread Osanai, Hisashi

On Tuesday, August 12, 2014 7:05 PM, Dina Belova wrote:
 that is blocking the Ceilometer gate at all for now.

Thank you for your quick response.
I understand the current situation.

Thanks again!
Hisashi Osanai



Re: [openstack-dev] [ceilometer] gate-ceilometer-python33 failed because of wrong setup.py in happybase

2014-08-12 Thread Osanai, Hisashi

On Tuesday, August 12, 2014 10:14 PM, Julien Danjou wrote:
 The py33 gate shouldn't be activated for the stable/icehouse. I'm no
 infra-config expert, but we should be able to patch it for that (hint?).

Thank you for the response. 

Now we have two choices:
(1) deactivate the py33 gate
(2) patch happybase

I prefer (1), because (2) only matters if the py33 gate is activated on stable/icehouse 
in the first place, and as you mentioned, it shouldn't be. However, there is still an 
entry for the py33 environment in tox.ini, so I would like to remove it from 
stable/icehouse.

If that's OK, I will file a bug report for tox.ini on stable/icehouse and commit a fix 
for it (and then proceed with https://review.openstack.org/#/c/112806/).

What do you think?

- tox.ini (stable/icehouse)
[tox]
minversion = 1.6
skipsdist = True
envlist = py26,py27,py33,pep8
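Concretely, choice (1) would reduce that envlist on stable/icehouse to something like the following (a sketch of the proposed edit, not an actual committed change):

```ini
[tox]
minversion = 1.6
skipsdist = True
# py33 dropped: the py33 gate should not run on stable/icehouse
envlist = py26,py27,pep8
```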

Best Regards,
Hisashi Osanai



Re: [openstack-dev] [ceilometer] [ft] Improving ceil.objectstore.swift_middleware

2014-08-10 Thread Osanai, Hisashi

On Friday, August 08, 2014 9:20 PM, Chris Dent wrote:

 These may not be directly what you want, but are something worth
 tracking as you explore and think.

Thank you for your help.

I will refine my proposal (shifting to a pollster) taking into account the fixes 
you pointed out.

Thanks again!
Hisashi Osanai



Re: [openstack-dev] backport fixes to old branches

2014-08-08 Thread Osanai, Hisashi

Hi,

On Tuesday, August 05, 2014 8:57 PM, Ihar Hrachyshka wrote:
  Thanks. To facilitate quicker backport, you may also propose the patch
  for review yourself. It may take time before stable maintainers or
  other interested parties get to the bug and do cherry-pick.

I cherry-picked the fix for https://bugs.launchpad.net/ceilometer/+bug/1326250 and 
ran git review (https://review.openstack.org/#/c/112806/).

During review I got an error from Jenkins.
The reason for the error is that happybase 0.8 (the latest release) uses the execfile 
function, which has been removed from Python 3.

happybase is not an OpenStack component, so I would like advice on 
how to deal with this.

- console.html
2014-08-08 09:17:45.901 | Downloading/unpacking happybase>=0.5,!=0.7 (from -r 
/home/jenkins/workspace/gate-ceilometer-python33/requirements.txt (line 7))
2014-08-08 09:17:45.901 |   http://pypi.openstack.org/simple/happybase/ uses an 
insecure transport scheme (http). Consider using https if pypi.openstack.org 
has it available
2014-08-08 09:17:45.901 |   Storing download in cache at 
./.tox/_download/http%3A%2F%2Fpypi.openstack.org%2Fpackages%2Fsource%2Fh%2Fhappybase%2Fhappybase-0.8.tar.gz
2014-08-08 09:17:45.901 |   Running setup.py 
(path:/home/jenkins/workspace/gate-ceilometer-python33/.tox/py33/build/happybase/setup.py)
 egg_info for package happybase
2014-08-08 09:17:45.902 | Traceback (most recent call last):
2014-08-08 09:17:45.902 |   File "<string>", line 17, in <module>
2014-08-08 09:17:45.902 |   File 
"/home/jenkins/workspace/gate-ceilometer-python33/.tox/py33/build/happybase/setup.py",
 line 5, in <module>
2014-08-08 09:17:45.902 | execfile('happybase/_version.py')
2014-08-08 09:17:45.902 | NameError: name 'execfile' is not defined
2014-08-08 09:17:45.902 | Complete output from command python setup.py 
egg_info:
2014-08-08 09:17:45.902 | Traceback (most recent call last):
2014-08-08 09:17:45.902 | 
2014-08-08 09:17:45.902 |   File "<string>", line 17, in <module>
2014-08-08 09:17:45.902 | 
2014-08-08 09:17:45.902 |   File 
"/home/jenkins/workspace/gate-ceilometer-python33/.tox/py33/build/happybase/setup.py",
 line 5, in <module>
2014-08-08 09:17:45.903 | 
2014-08-08 09:17:45.903 | execfile('happybase/_version.py')
2014-08-08 09:17:45.903 | 
2014-08-08 09:17:45.903 | NameError: name 'execfile' is not defined

- happybase-0.8/setup.py
1 from os.path import join, dirname
2 from setuptools import find_packages, setup
3
4 __version__ = None
5 execfile('happybase/_version.py')
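For reference, the usual Python 2/3-compatible replacement for `execfile()` in a setup.py is `exec()` on the file's contents. A hedged sketch (the helper name `read_version` is mine, not happybase's):

```python
import io


def read_version(path):
    """Load __version__ from a _version.py file on Python 2 and 3.

    exec() on the file's contents replaces the Python-2-only
    execfile() builtin that broke the py33 gate.
    """
    namespace = {}
    with io.open(path, encoding="utf-8") as f:
        exec(f.read(), namespace)
    return namespace["__version__"]
```

In happybase's setup.py, line 5 would then become something like `exec(open('happybase/_version.py').read())`.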

- python's doc
https://docs.python.org/3.3/library/2to3.html?highlight=execfile#2to3fixer-execfile

Best Regards,
Hisashi Osanai




Re: [openstack-dev] [ceilometer] [swift] Improving ceilometer.objectstore.swift_middleware

2014-08-08 Thread Osanai, Hisashi

Hi,

Is there any way to move the following topic forward?

Best Regards,
Hisashi Osanai

On Friday, August 01, 2014 7:32 PM, Hisashi Osanai wrote:
 I would like to follow this discussion so I picked up points.
 
 - There are two way to collect info from swift, one is pollster and
   the other is notification. And we discussed about how to solve the
   performance degradation of swift_middleware here.
   pollster:
- storage.objects
- storage.objects.size
- storage.objects.containers
- storage.containers.objects
- storage.containers.objects.size
   notification:
- storage.objects.incoming.bytes
- storage.objects.outgoing.bytes
- storage.api.request
 
 - storage.objects.incoming.bytes, storage.objects.outgoing.bytes and
   storage.api.request are handled with swift_middleware because
 ceilometer
   need to have the info with per-user and per-tenant basis.
 - swift has statsd but there is no per-user and per-tenant related info
   because to realize this swift has to have keystone-isms into core swift
 code.
 - improves swift_middleware with stopping the 1:1 mapping b/w API calls
 and
   notifications
 - swift may consume 10s of thousands of event per second and this case
 is fairly
   unique so far.
 
 I would like to think this performance problem with the following point
 of view.
 - need to handle 10s of thousands of event per second
 - possibility to lost events (i.e. swift proxy goes down when events queued
 in a swift process)
 
 With the notification style there are restriction for above points.
 Therefore I change the style
 to get storage.objects.incoming.bytes, storage.objects.outgoing.bytes
 and
 storage.api.request from notification to pollster.
 Here I met a problem that pointed out by Mr. Merritt, swift has dependency
 with keystone.
 But I prefer to solve this problem rather than a problem for notification
 style. What do you think?
 
 My rough idea to solve the dependency problem is
 - enable statsd (or similar function) in swift
 - put a middleware in swift proxy
 - this middleware does not have any communication with ceilometer but
   put a mark to followed middleware or swift proxy
 - store metrics with a tenant and a user by statsd if there is the mark
   store metrics by statsd if there is no mark
 - Ceilometer (central agent) call APIs to get the metrics
 
 Is there any way to solve the dependency problem?
 
 Best Regards,
 Hisashi Osanai




Re: [openstack-dev] backport fixes to old branches

2014-08-06 Thread Osanai, Hisashi

On Tuesday, August 05, 2014 8:57 PM, Ihar Hrachyshka wrote:
 
 Thanks. To facilitate quicker backport, you may also propose the patch
 for review yourself. It may take time before stable maintainers or
 other interested parties get to the bug and do cherry-pick.

Thank you for your advice.
Thank you for your advice.
I would like to confirm the backporting procedure. It is just a matter of using the 
same Change-Id (as described in the last paragraph) in addition to the normal 
workflow, right?

Are there any other points I should take care of?

Thanks in advance,
Hisashi Osanai




[openstack-dev] backport fixes to old branches

2014-08-05 Thread Osanai, Hisashi

Hi,

I would like to have the following fix on the Icehouse branch, because 
the problem occurs there but the fix was committed only in Juno-2.
Is there a process for backporting fixes to old branches?

https://bugs.launchpad.net/ceilometer/+bug/1326250

Best Regards,
Hisashi Osanai




Re: [openstack-dev] backport fixes to old branches

2014-08-05 Thread Osanai, Hisashi

Thank you for your quick response.

I don't have sufficient rights to nominate the bug, so 
I added the icehouse-backport-potential tag instead.

https://bugs.launchpad.net/ceilometer/+bug/1326250

On Tuesday, August 05, 2014 6:35 PM, Ihar Hrachyshka wrote:
 https://wiki.openstack.org/wiki/StableBranch

Best Regards,
Hisashi Osanai



Re: [openstack-dev] [ceilometer] [swift] Improving ceilometer.objectstore.swift_middleware

2014-08-01 Thread Osanai, Hisashi

I would like to follow up on this discussion, so I have picked out the main points.

- There are two ways to collect info from swift: one is a pollster and 
  the other is notifications. We discussed here how to address the 
  performance degradation caused by swift_middleware. 
  pollster:
   - storage.objects
   - storage.objects.size
   - storage.objects.containers
   - storage.containers.objects
   - storage.containers.objects.size
  notification:
   - storage.objects.incoming.bytes
   - storage.objects.outgoing.bytes
   - storage.api.request

- storage.objects.incoming.bytes, storage.objects.outgoing.bytes and 
  storage.api.request are handled by swift_middleware, because ceilometer 
  needs the info on a per-user and per-tenant basis.
- swift has statsd support, but it carries no per-user or per-tenant info, 
  because providing that would require keystone-isms in core swift code.
- improve swift_middleware by stopping the 1:1 mapping between API calls and 
  notifications
- swift may produce tens of thousands of events per second, and that case is 
  fairly unique so far.

I would like to consider this performance problem from the following points of view:
- the need to handle tens of thousands of events per second
- the possibility of losing events (e.g. a swift proxy goes down while events are 
queued in a swift process)

The notification style has limitations with respect to the points above. Therefore I 
would change the collection style for storage.objects.incoming.bytes, 
storage.objects.outgoing.bytes and storage.api.request from notification to pollster.
Here I ran into the problem pointed out by Mr. Merritt: swift would then depend on 
keystone.
But I would rather solve that problem than the problems of the notification style. 
What do you think?

My rough idea for solving the dependency problem is:
- enable statsd (or a similar function) in swift
- put a middleware in the swift proxy
- this middleware does not communicate with ceilometer at all; it only 
  puts a mark on the request for the following middleware or the swift proxy
- statsd stores metrics with the tenant and user if the mark is present, 
  and without them if it is not
- ceilometer (the central agent) calls APIs to get the metrics
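To make the "mark" idea above concrete, here is a minimal WSGI sketch. It is entirely hypothetical: the X-Tenant-Id/X-User-Id headers follow what keystone's auth_token middleware sets on authenticated requests, and the `swift.statsd_labels` environ key is invented for illustration:

```python
class StatsdLabelMiddleware(object):
    """Annotate each request with tenant/user labels for statsd.

    Intended to sit after keystone's auth_token middleware in the
    proxy pipeline. It never talks to ceilometer itself; it only
    leaves a "mark" in the WSGI environ that later middleware (or
    the proxy) could use when emitting statsd metrics.
    """

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        tenant = environ.get('HTTP_X_TENANT_ID')
        user = environ.get('HTTP_X_USER_ID')
        if tenant and user:
            # the "mark": downstream statsd emitters include the labels
            environ['swift.statsd_labels'] = (tenant, user)
        return self.app(environ, start_response)
```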

Is there any way to solve the dependency problem?

Best Regards,
Hisashi Osanai




Re: [openstack-dev] [swift] Use FQDN in Ring files instead of ip

2014-07-23 Thread Osanai, Hisashi

I would like to discuss this topic more deeply.

I understand that using FQDNs in ring files requires us to operate DNS systems, 
which adds operational complexity and burden.

However, I think most datacenters already have DNS systems to manage network 
resources such as IP addresses and hostnames, and that management is centralized.
And you already pointed out that we can benefit from using FQDNs in ring files 
in some scenarios. 

A scenario: corruption of a storage node

IP case:
A storage node is corrupted while swift uses IPs in its ring files. An operator removes 
the node from the swift system using the ring-builder command, keeping the node for 
further investigation. The operator then adds a new storage node with a different IP 
address. In this case swift rebalances all objects.

FQDN case:
A storage node is corrupted while swift uses FQDNs in its ring files. An operator 
prepares a new storage node with a different IP address, then updates the DNS record 
with that IP address. In this case swift copies only the objects related to that node.

If the above understanding is correct, it would be better to have the ability to use 
FQDNs in ring files in addition to IP addresses. What do you think?

On Thursday, July 24, 2014 12:55 AM, John Dickinson wrote:

 However, note that until now, we've intentionally kept it as just IP
 addresses since using hostnames adds a lot of operational complexity and
 burden. I realize that hostnames may be preferred in some cases, but this
 places a very large strain on DNS systems. So basically, it's a question
 of do we add the feature, knowing that most people who use it will in
 fact be making their lives more difficult, or do we keep it out, knowing
 that we won't be serving those who actually require the feature.

Best Regards,
Hisashi Osanai



Re: [openstack-dev] [swift] Use FQDN in Ring files instead of ip

2014-07-23 Thread Osanai, Hisashi

Thank you for the quick response.

On Thursday, July 24, 2014 12:51 PM, John Dickinson wrote:

 you can actually do the same today
 with the IP-based system. You can use the set_info command of
 swift-ring-builder to change the IP for existing devices and this avoids
 any rebalancing in the cluster.

Thanks for the info. 
I will check the set_info command of swift-ring-builder.

My understanding now is:
- in the FQDN case, an operator performs a DNS-related operation (no whole 
rebalancing)
- in the IP case, an operator executes a swift command (no whole 
rebalancing)

I think the point of this discussion is swift's independence in case of 
failure 
versus the added operational complexity and burden.

The recovery procedure in the FQDN case is a common one, so I think it is 
better to have the ability to use FQDNs in addition to IP addresses.
What do you think?

+------------------------+--------------------------+-----------------------+
|                        | In the FQDN case         | In the IP case        |
+------------------------+--------------------------+-----------------------+
| Swift's independence   | completely independent   | relies on DNS systems |
+------------------------+--------------------------+-----------------------+
| Operational complexity | (1)                      | (2)                   |
| (recovery process)     | simple                   | a bit complex         |
+------------------------+--------------------------+-----------------------+
| Operational complexity | DNS and Swift            | Swift only            |
| (necessary skills)     |                          |                       |
+------------------------+--------------------------+-----------------------+

(1) in the FQDN case, change the DNS info for the node (no swift-related operation)
(2) in the IP case, execute the swift-ring-builder command on a node, then copy the 
resulting ring file to all related nodes.

Best Regards,
Hisashi Osanai




Re: [openstack-dev] [swift] Use FQDN in Ring files instead of ip

2014-07-23 Thread Osanai, Hisashi

Thank you for the clarification.

I understand and agree with your thinking; it's clear enough.
Thank you for your time and I highly appreciate your responses.

Best Regards,
Hisashi Osanai


On Thursday, July 24, 2014 2:16 PM, John Dickinson wrote:

 Oh I totally agree with what you are saying. A DNS change may be lower
 cost than running Swift config/management commands. At the very least,
 ops already know how to do DNS updates, regardless of it's cost, where
 they have to learn how to do Swift management.
 
 I was simply adding clarity to the trickiness of the situation. As I said
 originally, it's a balance of offering a feature that has a known cost
 (DNS lookups in a large cluster) vs not offering it and potentially making
 some management more difficult. I don't think either solution is all that
 great, but in the absence of a decision, we've so-far defaulted to less
 code has less bugs and not yet written or merged it.




Re: [openstack-dev] [keystone/swift] role-based access control in swift

2014-07-21 Thread Osanai, Hisashi

Hi,

Thank you for the info.

On Monday, July 21, 2014 10:19 PM, Nassim Babaci wrote:

 * Adding policy engine support to Swift
 https://review.openstack.org/#/c/89568/
Judging from the commit message of 89568, you have developed the same function, 
except for support of the policy.json file format.

 My answer may be a little bit late, but here's a swift middleware we
 have just published: https://github.com/cloudwatt/swiftpolicy
 It is based on the keystoneauth middleware, and uses oslo.policy file
 format.
I would like to know about the following points. Do you have info on them?
- the difference between the policy.json file format and the oslo.policy file format
- the relationship between https://review.openstack.org/#/c/89568/ and 
  https://github.com/cloudwatt/swiftpolicy

Best Regards,
Hisashi Osanai


Re: [openstack-dev] [keystone/swift] role-based access control in swift

2014-07-11 Thread Osanai, Hisashi

John,

Thank you for your quick response.

On Friday, July 11, 2014 12:33 PM John Dickinson m...@not.mn wrote:

 Some of the above may be in line with what you're looking for.

They are exactly what I'm looking for. 
First I will look at the policy engine code to see whether I can use it.

Thanks again,
Hisashi Osanai




[openstack-dev] [keystone/swift] role-based access control in swift

2014-07-10 Thread Osanai, Hisashi

Hi, 

I looked for info about role-based access control in swift, because 
I would like to prohibit PUT operations on containers, such as creating 
containers and setting ACLs.

Other services such as Nova and Cinder have a policy.json file, but Swift doesn't.
I found the following related items:
- Swift ACL migration
- centralized policy management

Do you have detailed info on the above?

http://dolphm.com/openstack-juno-design-summit-outcomes-for-keystone/
---
Migrate Swift ACL's from a highly flexible Tenant ID/Name basis, which worked 
reasonably well against Identity API v2, to strictly be based on v3 Project 
IDs. The driving requirement here is that Project Names are no longer globally 
unique in v3, as they're only unique within a top-level domain.
---
Centralized policy management
Keystone currently provides an unused /v3/policies API that can be used to 
centralize policy blob management across OpenStack.
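Coming back to the original goal of prohibiting container PUTs: purely for illustration, if swift honored a Nova-style policy.json, the restriction might be expressed along these lines. The rule names here are invented (swift has no such file today, which is exactly what this question is about):

```json
{
    "put_container": "role:admin",
    "set_container_acl": "role:admin",
    "get_object": ""
}
```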


Best Regards,
Hisashi Osanai




[openstack-dev] [swift] add checking daemons existence in Healthcheck middleware

2014-07-07 Thread Osanai, Hisashi

Hi,

The current Healthcheck middleware provides functionality for monitoring servers such 
as the Proxy Server, Object Server, Container Server, and Account Server. The 
middleware checks whether each server can handle requests and responses. 
My idea for enhancing this middleware is to also check the existence of daemons such 
as replicators, updaters, and auditors, in addition to the current checks. 
If we realize this, the scope of "health" would be extended from 
"a server can handle requests" to "a server and its daemons work appropriately".

http://docs.openstack.org/developer/swift/icehouse/middleware.html?highlight=health#healthcheck
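To make the idea concrete, a minimal sketch of such a daemon-existence check (the pid-file location and naming are assumptions; swift init scripts commonly write pid files under /var/run/swift):

```python
import errno
import os


def daemon_alive(pid_file):
    """Best-effort check that the daemon owning pid_file is running.

    Sketch only: assumes swift-style pid files such as
    /var/run/swift/object-replicator.pid. Signal 0 probes the
    process without actually signalling it.
    """
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
    except (IOError, ValueError):
        return False
    try:
        os.kill(pid, 0)  # does not kill; raises OSError if pid is gone
    except OSError as e:
        # EPERM means the process exists but we may not signal it
        return e.errno == errno.EPERM
    return True
```

A healthcheck-style middleware could run this for each expected daemon and report the result alongside the usual OK response.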

What do you think?

Best Regards,
Hisashi Osanai



Re: [openstack-dev] [swift] add checking daemons existence in Healthcheck middleware

2014-07-07 Thread Osanai, Hisashi

John,

Thank you for your response.

I checked the swift-recon documentation, and that function is 
exactly the one I want to have. 

# Sorry, my earlier research was not thorough enough...

Thanks again,
Hisashi Osanai

 -Original Message-
 From: John Dickinson [mailto:m...@not.mn]
 Sent: Tuesday, July 08, 2014 11:59 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [swift] add checking daemons existence in
 Healthcheck middleware
 
 In general, you're right. It's pretty important to know what's going on
 in the cluster. However, the checks for these background daemons shouldn't
 be done in the wsgi servers. Generally, we've stayed away from a lot of
 process monitoring in the Swift core. That it, Swift already works around
 failures, and there is already existing ops tooling to monitor if a process
 is alive.
 
 Check out the swift-recon tool that's included with Swift. It already
 includes some checks like the replication cycle time. While it's not a
 direct is this process alive monitoring tool, it does give good
 information about the health of the cluster.
 
 If you've got some other ideas on checks to add to recon or ways to make
 it better or perhaps even some different ways to integrate monitoring
 systems, let us know!
 
 --John
 
 
 




Re: [openstack-dev] Swift: reason for using xfs on devices

2014-07-02 Thread Osanai, Hisashi

On Wednesday, July 02, 2014 1:06 PM, Pete Zaitcev zait...@redhat.com wrote:

Thanks for the detailed explanation.

Let me confirm my understanding of swift's behavior:

(1) ext4 is used on the devices.
(2) The data on (1)'s filesystem becomes corrupted.
(3) ext4's fsck moves the corrupt files to lost+found without a trace.
(4) Swift's auditors cannot recognize (3), so hashes.pkl is not updated.

Is the above sequence correct?
If it is, I understand that we are better off using XFS.

Thanks in advance,
Hisashi Osanai

 -Original Message-
 From: Pete Zaitcev [mailto:zait...@redhat.com]
 Sent: Wednesday, July 02, 2014 1:06 PM
 To: Osanai, Hisashi/小山内 尚
 Cc: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] Swift: reason for using xfs on devices
 
 On Wed, 2 Jul 2014 00:16:42 +
 Osanai, Hisashi osanai.hisa...@jp.fujitsu.com wrote:
 
  So I think if performance of swift is more important rather than
 scalability of it, it is a
  good idea to use ext4.
 
 The real problem is what happens when your drives corrupt the data.
 Both ext4 and XFS demonstrated good resilience, but XFS leaves empty
 files in directories where corrupt files were, while ext4's fsck moves
 them to lost+found without a trace. When that happens, Swift's auditors
 cannot know that something was amiss and the replication is not
 triggered (because hash lists are only updated by auditors).
 
 Mr. You Yamagata worked on a patch to address this problem, but did
 not complete it. See here:
  https://review.openstack.org/11452
 
 -- Pete



[openstack-dev] Swift: reason for using xfs on devices

2014-07-01 Thread Osanai, Hisashi

Hi,

In the following document there is a setup procedure for storage, and 
it seems that swift recommends using XFS.

http://docs.openstack.org/icehouse/install-guide/install/yum/content/installing-and-configuring-storage-nodes.html
===
2. For each device on the node that you want to use for storage, set up the 
XFS volume (/dev/sdb is used as an example). Use a single partition per drive. 
For example, in a server with 12 disks you may use one or two disks for the
 operating system which should not be touched in this step. The other 10 or 11 
disks should be partitioned with a single partition, then formatted in XFS.
===

I would like to know why swift recommends XFS rather than ext4.

I think ext4 has reasonable performance and, by design, can support filesystems of 
up to 1 EiB.
# Is the maximum filesystem size of ext4 not enough? 

Thanks in advance,
Hisashi Osanai




Re: [openstack-dev] Swift: reason for using xfs on devices

2014-07-01 Thread Osanai, Hisashi

On Tuesday, July 01, 2014 9:44 PM, Anne Gentle a...@openstack.org wrote:

Thank you for the quick response.

 The install guide only recommends a single path, not many options, to ensure 
 success.

I understand the reasoning behind how the document is written.

 There's a little bit of discussion in the developer docs:
 http://docs.openstack.org/developer/swift/deployment_guide.html#filesystem-considerations
 I think that packstack gives the option of using xfs or ext4, so there must 
 be sufficient testing for ext4.

Thank you for this info.
The discussion contains the following sentence:
  "After thorough testing with our use cases and hardware configurations, XFS 
was the best all-around choice."

I would like to know what kind of testing I should do from a filesystem point of 
view.

The background of this question is:
I read the following performance comparison of ext4 and XFS. It contains some 
benchmark results, and it seems that ext4's performance is better than XFS's 
(Eric Whitney's FFSB testing).
So I think that if swift's performance matters more than its scalability, ext4 is a 
good choice.

http://www.linuxtag.org/2013/fileadmin/www.linuxtag.org/slides/Heinz_Mauelshagen_-_Which_filesystem_should_I_use_.e204.pdf

Best Regards,
Hisashi Osanai
