[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-27 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153344804
  
--- Diff: aria/parser/loading/uri.py ---
@@ -44,6 +45,7 @@ def __init__(self, context, location, origin_location=None):
 self.location = location
 self._prefixes = StrictList(value_class=basestring)
 self._loader = None
+self._canonical_location = None
--- End diff --

I added documentation in `loader.py` (the base class).


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-27 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153343795
  
--- Diff: aria/parser/reading/reader.py ---
@@ -28,16 +28,9 @@ def __init__(self, context, location, loader):
 
 def load(self):
 with OpenClose(self.loader) as loader:
--- End diff --

Because `__enter__` and `__exit__` are magic functions, and it seems very 
awkward to me to have users call them directly. They are not supposed to be 
called directly. However, in this case, I do think open and close should be 
part of the class's protocol.

As for using a mixin ... it seems far more awkward to me than using a helper 
class. I still suggest leaving this as is.

There's no new code here, either -- it's what we have had for a long time.
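
For reference, a minimal sketch (my own illustration, not the exact ARIA code) of how such an `OpenClose` helper wraps anything with `open()`/`close()` methods into a context manager:

```python
class OpenClose(object):
    """
    Wraps an object exposing ``open()`` and ``close()`` so it can be used
    with the ``with`` keyword. Illustrative sketch only.
    """

    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __enter__(self):
        if hasattr(self.wrapped, 'open'):
            self.wrapped.open()
        return self.wrapped

    def __exit__(self, the_type, value, traceback):
        if hasattr(self.wrapped, 'close'):
            self.wrapped.close()
        return False  # do not swallow exceptions


# Roughly how reader.load() uses it:
#     with OpenClose(self.loader) as loader:
#         data = loader.load()
```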


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-27 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153343290
  
--- Diff: aria/utils/threading.py ---
@@ -161,11 +242,7 @@ def close(self):
 self._workers = None
 
 def drain(self):
-"""
-Blocks until all current tasks finish execution, but leaves the worker threads alive.
-"""
-
-self._tasks.join()  # oddly, the API does not support a timeout parameter
+self._tasks.join() # oddly, the API does not support a timeout parameter
--- End diff --

Sure, but for better or for worse it's what we have right now. I did enough 
sweeping fixes in this PR to get people mad at me; I suggest postponing this 
one to a different PR.


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153016106
  
--- Diff: aria/modeling/service_common.py ---
@@ -587,12 +588,11 @@ class MetadataBase(TemplateModelMixin):
 :ivar name: name
 :vartype name: basestring
 :ivar value: value
-:vartype value: basestring
 """
 
 __tablename__ = 'metadata'
 
-value = Column(Text)
+value = Column(PickleType)
--- End diff --

Why not? :)


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153014611
  
--- Diff: aria/parser/reading/yaml.py ---
@@ -82,7 +84,11 @@ def read(self):
 # see issue here:
# https://bitbucket.org/ruamel/yaml/issues/61/roundtriploader-causes-exceptions-with
 #yaml_loader = yaml.RoundTripLoader(data)
-yaml_loader = yaml.SafeLoader(data)
+try:
+# Faster C-based loader, might not be available on all platforms
+yaml_loader = yaml.CSafeLoader(data)
+except BaseException:
--- End diff --

I think I'm sure ... I want the failover to always work. Even if 
`CSafeLoader` does exist but fails somehow internally, we should still make an 
effort to run.


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153014666
  
--- Diff: aria/utils/threading.py ---
@@ -161,11 +242,7 @@ def close(self):
 self._workers = None
 
 def drain(self):
-"""
-Blocks until all current tasks finish execution, but leaves the worker threads alive.
-"""
-
-self._tasks.join()  # oddly, the API does not support a timeout parameter
+self._tasks.join() # oddly, the API does not support a timeout parameter
--- End diff --

Hm, I always hated this, and a lot of projects ignore it. Right now we have a 
mix of styles.

Let's file this as a separate JIRA and maybe change all our comments at once?


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153014418
  
--- Diff: aria/parser/reading/reader.py ---
@@ -28,16 +28,9 @@ def __init__(self, context, location, loader):
 
 def load(self):
 with OpenClose(self.loader) as loader:
--- End diff --

If you can refactor this to make it use a context manager and still be clear, 
please show me.


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153014351
  
--- Diff: aria/parser/presentation/presentation.py ---
@@ -199,6 +199,9 @@ class Presentation(PresentationBase):
 """
 
 def _validate(self, context):
+if (not context.presentation.configuration.get('validate_normative', True)) \
+and self._get_extension('normative'):
+return
--- End diff --

That user should never be using our ARIA `_extensions` and setting 
`normative: true`. These `_extensions` are entirely for internal use by our 
project. If they really want to use this feature, it is up to them to make sure 
the types are valid. (We test for this as part of our tox test suite by 
disabling this flag.)
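
Concretely, the tox suite can exercise this by turning the flag off in the presentation configuration (hypothetical snippet; the exact plumbing for populating the configuration may differ):

```python
# Disable validation of the built-in normative types for this parsing run
context.presentation.configuration['validate_normative'] = False
```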


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153014247
  
--- Diff: aria/parser/presentation/presentation.py ---
@@ -199,6 +199,9 @@ class Presentation(PresentationBase):
 """
 
 def _validate(self, context):
+if (not context.presentation.configuration.get('validate_normative', True)) \
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153014161
  
--- Diff: aria/parser/presentation/field_validators.py ---
@@ -14,12 +14,29 @@
 # limitations under the License.
 
 
+from ...utils.formatting import safe_repr
 from ..validation import Issue
 from .utils import (parse_types_dict_names, report_issue_for_unknown_type,
report_issue_for_parent_is_self, report_issue_for_unknown_parent_type,
 report_issue_for_circular_type_hierarchy)
 
 
+def not_negative_validator(field, presentation, context):
--- End diff --

Python will do the right thing here -- if there is a `__lt__` it will call it; 
if not, it will fall back to the reflected `__gt__` and `__eq__` and do the 
right thing.
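
A minimal illustration of that reflection behavior (my own example, not from the PR):

```python
class Scalar(object):
    """Supports comparison via __gt__/__eq__ only; defines no __lt__."""

    def __init__(self, value):
        self.value = value

    def __gt__(self, other):
        return self.value > other

    def __eq__(self, other):
        return self.value == other


s = Scalar(10)
assert s > 3   # calls Scalar.__gt__ directly
assert 3 < s   # int yields no usable __lt__ result here, so Python uses the reflected Scalar.__gt__
```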


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013980
  
--- Diff: aria/parser/loading/uri.py ---
@@ -44,6 +45,7 @@ def __init__(self, context, location, origin_location=None):
 self.location = location
 self._prefixes = StrictList(value_class=basestring)
 self._loader = None
+self._canonical_location = None
--- End diff --

I think "canonical" is a well-known adjective for file systems and URIs and 
might not need explanation. "Canonical" means globally absolute. If you think 
it should be documented, then where? It is used a lot.
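
For file locations, "canonical" boils down to something like the following (an illustrative sketch only; the real loaders also handle URLs and may need to open the source to resolve it):

```python
import os

def canonical_file_location(location):
    """
    Turns a possibly relative file location into a globally absolute
    ("canonical") one. Illustrative only.
    """
    return os.path.abspath(os.path.normpath(os.path.expanduser(location)))

# '../blueprints/main.yaml' and './blueprints/../blueprints/main.yaml' both
# canonicalize to the same absolute path, so imports of either hit the same cache entry
```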


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153014000
  
--- Diff: aria/parser/loading/uri.py ---
@@ -85,6 +91,11 @@ def close(self):
 def load(self):
 return self._loader.load() if self._loader is not None else None
 
+def get_canonical_location(self):
+self.open()
--- End diff --

For some loaders, yes.


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013913
  
--- Diff: aria/parser/loading/loader.py ---
@@ -32,3 +32,6 @@ def close(self):
 
 def load(self):
 raise NotImplementedError
+
+def get_canonical_location(self):  # pylint: disable=no-self-use
--- End diff --

What do you propose?


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013856
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+try:
+# Is the presentation in the local cache?
+presentation = self._cache[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+# Create and cache new presentation
+presentation = self._create_presentation(canonical_location, 
loader, origin_presenter_class)
+self._cache[canonical_location] = presentation
 
+# Submit imports to executor
+if hasattr(presentation, '_get_import_locations'):
+import_locations = 
presentation._get_import_locations(self.context)
+if import_locations:
+for import_location in import_locations:
+import_location = UriLocation(import_location)
+executor.submit(self._present, import_location, 
canonical_location,
+presentation.__class__, executor)
+
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+
+def _create_loader(self, location, origin_canonical_location):
+loader = 
self.context.loading.loader_source.get_loader(self.context.loading, location,
+   
origin_canonical_location)
+
+canonical_location = None
+
+if origin_canonical_location is not None:
+cache_key = (origin_canonical_location, location)
+try:
+canonical_location = CANONICAL_LOCATION_CACHE[cache_key]
+return loader, canonical_location
+except KeyError:
+pass
+else:
+cache_key = None
+
+canonical_location = loader.get_canonical_location()
+
+# Because retrieving the canonical location can be costly, we will 
try to cache it
+if cache_key is not None:
+CANONICAL_LOCATION_CACHE[cache_key] = canonical_location
+
+return loader, canonical_location
+
+def _create_presentation(self, canonical_location, loader, 
def

[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013674
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -13,15 +13,16 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-
-from ...utils.threading import FixedThreadPoolExecutor
-from ...utils.formatting import json_dumps, yaml_dumps
+from ...utils.formatting import (json_dumps, yaml_dumps)
 from ..loading import UriLocation
-from ..reading import AlreadyReadException
 from ..presentation import PresenterNotFoundError
 from .consumer import Consumer
 
 
+PRESENTATION_CACHE = {}
+CANONICAL_LOCATION_CACHE = {}
+
+
 class Read(Consumer):
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013725
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -31,47 +32,33 @@ class Read(Consumer):
 instances.
 
 It supports agnostic raw data composition for presenters that have
-``_get_import_locations`` and ``_merge_import``.
+``_get_import_locations``, ``_validate_import``, and ``_merge_import``.
 
 To improve performance, loaders are called asynchronously on separate threads.
 
 Note that parsing may internally trigger more than one loading/reading/presentation
 cycle, for example if the agnostic raw data has dependencies that must also be parsed.
 """
 
-def consume(self):
-if self.context.presentation.location is None:
-self.context.validation.report('Presentation consumer: missing location')
-return
-
-presenter = None
-imported_presentations = None
+def __init__(self, context):
+super(Read, self).__init__(context)
+self._cache = {}
 
-executor = FixedThreadPoolExecutor(size=self.context.presentation.threads,
-   timeout=self.context.presentation.timeout)
-executor.print_exceptions = self.context.presentation.print_exceptions
-try:
-presenter = self._present(self.context.presentation.location, None, None, executor)
-executor.drain()
-
-# Handle exceptions
-for e in executor.exceptions:
-self._handle_exception(e)
+def consume(self):
+# Present the main location and all imports recursively
+main, results = self._present_all()
 
-imported_presentations = executor.returns
-finally:
-executor.close()
+# Merge presentations
+main.merge(results, self.context)
 
-# Merge imports
-if (imported_presentations is not None) and hasattr(presenter, '_merge_import'):
-for imported_presentation in imported_presentations:
-okay = True
-if hasattr(presenter, '_validate_import'):
-okay = presenter._validate_import(self.context, imported_presentation)
-if okay:
-presenter._merge_import(imported_presentation)
+# Cache merged presentations
+if self.context.presentation.cache:
+for result in results:
+result.cache()
 
-self.context.presentation.presenter = presenter
+self.context.presentation.presenter = main.presentation
+if main.canonical_location is not None:
--- End diff --

The loader might, for some reason, fail to turn the location into a canonical 
location.


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013659
  
--- Diff: aria/__init__.py ---
@@ -47,22 +46,19 @@
 
 def install_aria_extensions(strict=True):
 """
-Iterates all Python packages with names beginning with ``aria_extension_`` and all
 ``aria_extension`` entry points and loads them.
-
-It then invokes all registered extension functions.
+Loads all Python packages with names beginning with ``aria_extension_`` and calls their
+``aria_extension`` initialization entry points if they have them.
 
-:param strict: if set to ``True``, Tries to load extensions with
- dependency versions under consideration. Otherwise tries to load the
- required package without version consideration. Defaults to True.
+:param strict: if ``True`` tries to load extensions while taking into account the versions
+ of their dependencies, otherwise ignores versions
 :type strict: bool
 """
 for loader, module_name, _ in iter_modules():
 if module_name.startswith('aria_extension_'):
 loader.find_module(module_name).load_module(module_name)
 for entry_point in pkg_resources.iter_entry_points(group='aria_extension'):
 # It should be possible to enable non strict loading - use the package
-# that is already installed inside the environment, and forgo the
+# that is already installed inside the environment, and forego the
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013051
  
--- Diff: tests/mechanisms/utils.py ---
@@ -0,0 +1,71 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import itertools
+
+
+def matrix(*iterables, **kwargs):
+"""
+Generates a matrix of parameters for ``@pytest.mark.parametrize``.
--- End diff --

Because `*iterables` can be any length... if you use both `*` and `**`, how 
else can you specify a named parameter?


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013058
  
--- Diff: tests/mechanisms/utils.py ---
@@ -0,0 +1,71 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import itertools
+
+
+def matrix(*iterables, **kwargs):
+"""
+Generates a matrix of parameters for ``@pytest.mark.parametrize``.
+
+The matrix is essentially the Cartesian product of the arguments 
(which should be iterables),
+with the added ability to "flatten" each value by breaking up tuples 
and recombining them into a
+final flat value.
+
+To do such recombination, use the ``counts`` argument (tuple) to 
specify the number of elements
+per value in order. Any count greater than 1 (the default) enables 
recombination of that value.
+
+Example::
+
+  x = ('hello', 'goodbye')
+  y = ('Linus', 'Richard')
+  matrix(x, y) ->
+('hello', 'Linus'),
+('hello', 'Richard'),
+('goodbye', 'Linus'),
+('goodbye', 'Richard')
+
+  y = (('Linus', 'Torvalds'), ('Richard', 'Stallman'))
+  matrix(x, y) ->
+('hello', ('Linus', 'Torvalds')),
+('hello', ('Richard', 'Stallman')),
+('goodbye', ('Linus', 'Torvalds')),
+('goodbye', ('Richard', 'Stallman'))
+
+  matrix(x, y, counts=(1, 2)) ->
+('hello', 'Linus', 'Torvalds'),
+('hello', 'Richard', 'Stallman'),
+('goodbye', 'Linus', 'Torvalds'),
+('goodbye', 'Richard', 'Stallman')
+"""
--- End diff --

I will add more documentation.
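
In the meantime, here is a rough sketch of how ``counts`` can drive the recombination described above (my own approximation based on the docstring, not the actual implementation):

```python
import itertools

def matrix(*iterables, **kwargs):
    """
    Cartesian product of the iterables; ``counts`` (the only keyword argument)
    says how many elements each value contributes to the flattened result.
    """
    counts = kwargs.get('counts', ())
    for combination in itertools.product(*iterables):
        flat = []
        for index, value in enumerate(combination):
            count = counts[index] if index < len(counts) else 1
            if count == 1:
                flat.append(value)   # keep the value as-is
            else:
                flat.extend(value)   # break up the tuple and recombine
        yield tuple(flat)

# list(matrix(('hello', 'goodbye'), (('Linus', 'Torvalds'),), counts=(1, 2)))
# -> [('hello', 'Linus', 'Torvalds'), ('goodbye', 'Linus', 'Torvalds')]
```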


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153012904
  
--- Diff: tests/mechanisms/parsing/__init__.py ---
@@ -0,0 +1,75 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+import jinja2
+
+
+LINE_BREAK = '\n' + '-' * 60
+
+
+class Parsed(object):
+def __init__(self):
+self.issues = []
+self.text = ''
+self.verbose = False
+
+def assert_success(self):
+__tracebackhide__ = True # pylint: disable=unused-variable
--- End diff --

A PyTest feature ... I will link to the documentation.
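
In short, setting the marker hides the helper frame from pytest's reported traceback, so failures point at the test itself rather than at the assertion helper. A small illustrative sketch:

```python
import pytest

def assert_success(parsed):
    __tracebackhide__ = True  # pylint: disable=unused-variable
    if parsed.issues:
        pytest.fail('parsing failed with {0} issue(s)'.format(len(parsed.issues)))
```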


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153012524
  
--- Diff: tests/extensions/aria_extension_tosca/simple_v1_0/functions/test_function_concat.py ---
@@ -0,0 +1,102 @@
+# -*- coding: utf-8 -*-
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+def test_functions_concat_syntax_empty(parser):
+parser.parse_literal("""
+tosca_definitions_version: tosca_simple_yaml_1_0
+node_types:
+  MyType:
+properties:
+  my_parameter:
+type: string
+topology_template:
+  node_templates:
+my_node:
+  type: MyType
+  properties:
+my_parameter: { concat: [] }
+""").assert_success()
+
+
+def test_functions_concat_strings(parser):
+parser.parse_literal("""
+tosca_definitions_version: tosca_simple_yaml_1_0
+node_types:
+  MyType:
+properties:
+  my_parameter:
+type: string
+topology_template:
+  node_templates:
+my_node:
+  type: MyType
+  properties:
+my_parameter: { concat: [ a, b, c ] }
+""").assert_success()
+
+
+def test_functions_concat_mixed(parser):
+parser.parse_literal("""
+tosca_definitions_version: tosca_simple_yaml_1_0
+node_types:
+  MyType:
+properties:
+  my_parameter:
+type: string
+topology_template:
+  node_templates:
+my_node:
+  type: MyType
+  properties:
+my_parameter: { concat: [ a, 1, 1.1, null, [], {} ] }
+""").assert_success()
+
+
+def test_functions_concat_nested(parser):
+parser.parse_literal("""
+tosca_definitions_version: tosca_simple_yaml_1_0
+node_types:
+  MyType:
+properties:
+  my_parameter:
+type: string
+topology_template:
+  node_templates:
+my_node:
+  type: MyType
+  properties:
+my_parameter: { concat: [ a, { concat: [ b, c ] } ] }
+""").assert_success()
+
+
+# Unicode
+
+def test_functions_concat_unicode(parser):
+parser.parse_literal("""
+tosca_definitions_version: tosca_simple_yaml_1_0
+node_types:
+  類型:
+properties:
+  參數:
+type: string
+topology_template:
+  node_templates:
+模板:
+  type: 類型
+  properties:
+參數: { concat: [ 一, 二, 三 ] }
+""").assert_success()
--- End diff --

It would be impossible to create such a test that would work with non-TOSCA 
parsers.

In the future we will boost our topology engine tests to test actual 
function evaluation. (And of course things get much more complicated there, 
because we deal with HOST, TARGET, and other topological aspects, as well as 
runtime values in `get_attribute`.)

For this suite, we are just testing the parsing of intrinsic functions, not 
their evaluation.


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153012216
  
--- Diff: tests/extensions/aria_extension_tosca/simple_v1_0/data.py ---
@@ -0,0 +1,82 @@
+# -*- coding: utf-8 -*-
--- End diff --

Because 1) Python doesn't have a "constants" feature, and 2) even if it 
did, that would be an implementation trait, and these data points would still 
work as data points if they weren't constants. I don't see why you would want 
to name the file after a technical aspect of its content. It would be like 
separating this file into `floats.py` and `ints.py` -- that doesn't help in 
understanding usage.


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153011903
  
--- Diff: tests/extensions/aria_extension_tosca/conftest.py ---
@@ -0,0 +1,45 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+PyTest configuration module.
+
+Add support for a "--tosca-parser" CLI option.
+"""
+
+import pytest
+
+from ...mechanisms.parsing.aria import AriaParser
+
+
+def pytest_addoption(parser):
--- End diff --

These are PyTest-defined hooks. I will add a link to the PyTest 
documentation.
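
For context, the hook looks roughly like this (a sketch; the option name comes from the module docstring above, while the default value here is only illustrative):

```python
def pytest_addoption(parser):
    # Called once by pytest at startup to register extra CLI options
    parser.addoption('--tosca-parser', action='store', default='aria',
                     help='TOSCA parser implementation to test against')
```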


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153011505
  
--- Diff: extensions/aria_extension_tosca/simple_v1_0/modeling/data_types.py ---
@@ -318,15 +318,15 @@ def report(message, constraint):
 #
 
 def get_data_type_value(context, presentation, field_name, type_name):
-the_type = get_type_by_name(context, type_name, 'data_types')
-if the_type is not None:
-value = getattr(presentation, field_name)
-if value is not None:
+value = getattr(presentation, field_name)
+if value is not None:
+the_type = get_type_by_name(context, type_name, 'data_types')
--- End diff --

+1 to `data_type`


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153011389
  
--- Diff: extensions/aria_extension_tosca/simple_v1_0/modeling/capabilities.py ---
@@ -139,9 +139,18 @@ def get_template_capabilities(context, presentation):
 if values:
 capability_assignment._raw['properties'] = values
 capability_assignment._reset_method_cache()
+
--- End diff --

It's explained in the `__init__.py` of the package:

```
"""
Creates ARIA service template models based on the TOSCA presentation.

Relies on many helper methods in the presentation classes.
"""
```


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153011299
  
--- Diff: examples/hello-world/hello-world.yaml ---
@@ -2,30 +2,24 @@ tosca_definitions_version: tosca_simple_yaml_1_0
 
 node_types:
 
-  WebServer:
-derived_from: tosca:Root
-capabilities:
-  host:
-type: tosca:Container
-
-  WebApp:
+  HelloWorld:
--- End diff --

Where?


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153011276
  
--- Diff: extensions/aria_extension_tosca/simple_v1_0/modeling/__init__.py ---
@@ -640,24 +649,33 @@ def create_node_filter_constraints(context, node_filter, target_node_template_co
 for property_name, constraint_clause in properties:
 constraint = create_constraint(context, node_filter, constraint_clause,
 property_name, capability_name)
-target_node_template_constraints.append(constraint)
+if constraint is not None:
+target_node_template_constraints.append(constraint)
 
 
-def create_constraint(context, node_filter, constraint_clause, property_name, capability_name): # pylint: disable=too-many-return-statements
+def create_constraint(context, node_filter, constraint_clause, property_name, capability_name): # pylint: disable=too-many-return-statements
+if (not isinstance(constraint_clause._raw, dict)) or (len(constraint_clause._raw) != 1):
+context.validation.report(
+u'node_filter constraint is not a dict with one key: {0}'
+.format(safe_repr(constraint_clause._raw)),
+locator=node_filter._locator,
+level=Issue.FIELD)
+return None
+
 constraint_key = constraint_clause._raw.keys()[0]
 
 the_type = constraint_clause._get_type(context)
 
-def coerce_constraint(constraint):
+def coerce_constraint(constraint, the_type=the_type):
--- End diff --

+1 renamed to `value_type`


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r15302
  
--- Diff: aria/utils/threading.py ---
@@ -93,7 +92,104 @@ def sum(arg1, arg2):
 print executor.returns
 """
 
-_CYANIDE = object()  # Special task marker used to kill worker threads.
+def __init__(self, print_exceptions=False):
+self.print_exceptions = print_exceptions
+
+def submit(self, func, *args, **kwargs):
+"""
+Submit a task for execution.
+
+The task will be called ASAP on the next available worker thread in the pool.
+
+:raises ExecutorException: if cannot be submitted
+"""
+raise NotImplementedError
+
+def close(self):
+"""
+Blocks until all current tasks finish execution and all worker threads are dead.
+
+You cannot submit tasks anymore after calling this.
+
+This is called automatically upon exit if you are using the ``with`` keyword.
+"""
+pass
+
+def drain(self):
+"""
+Blocks until all current tasks finish execution, but leaves the worker threads alive.
+"""
+pass
+
+@property
+def returns(self):
+"""
+The returned values from all tasks, in order of submission.
+"""
+return ()
+
+@property
+def exceptions(self):
+"""
+The raised exceptions from all tasks, in order of submission.
+"""
+return ()
+
+def raise_first(self):
+"""
+If exceptions were thrown by any task, then the first one will be raised.
+
+This is rather arbitrary: proper handling would involve iterating all the exceptions.
+However, if you want to use the "raise" mechanism, you are limited to raising only one of
+them.
+"""
+
+exceptions = self.exceptions
+if exceptions:
+raise exceptions[0]
+
+def __enter__(self):
+return self
+
+def __exit__(self, the_type, value, traceback):
+pass
+
+
+class BlockingExecutor(Executor):
+"""
+Executes tasks in the current thread.
+"""
+
+def __init__(self, print_exceptions=False):
+super(BlockingExecutor, self).__init__(print_exceptions=print_exceptions)
--- End diff --

Sorry guys, I disagree with both of you. I think each class should be as 
explicit as possible in the arguments it accepts (so no kwargs), but I also 
think we should reuse the logic of the base class (so send it to the super's 
`__init__`). I would rather keep this as is.


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153010864
  
--- Diff: aria/utils/collections.py ---
@@ -220,27 +225,30 @@ def __setitem__(self, key, value, **_):
 return super(StrictDict, self).__setitem__(key, value)
 
 
-def merge(dict_a, dict_b, path=None, strict=False):
+def merge(dict_a, dict_b, copy=True, strict=False, path=None):
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153009548
  
--- Diff: aria/parser/reading/yaml.py ---
@@ -16,18 +16,30 @@
 from .reader import Reader
 from .locator import Locator
 from .exceptions import ReaderSyntaxError
-from .locator import LocatableString, LocatableInt, LocatableFloat
+from .locator import (LocatableString, LocatableInt, LocatableFloat)
 
-# Add our types to ruamel.yaml
+
+MERGE_TAG = u'tag:yaml.org,2002:merge'
+MAP_TAG = u'tag:yaml.org,2002:map'
+
+
+# Add our types to RoundTripRepresenter
 yaml.representer.RoundTripRepresenter.add_representer(
 LocatableString, yaml.representer.RoundTripRepresenter.represent_unicode)
 yaml.representer.RoundTripRepresenter.add_representer(
 LocatableInt, yaml.representer.RoundTripRepresenter.represent_int)
 yaml.representer.RoundTripRepresenter.add_representer(
 LocatableFloat, yaml.representer.RoundTripRepresenter.represent_float)
 
-MERGE_TAG = u'tag:yaml.org,2002:merge'
-MAP_TAG = u'tag:yaml.org,2002:map'
+
+def construct_yaml_map(self, node):
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153009232
  
--- Diff: aria/parser/presentation/field_validators.py ---
@@ -14,12 +14,29 @@
 # limitations under the License.
 
 
+from ...utils.formatting import safe_repr
 from ..validation import Issue
 from .utils import (parse_types_dict_names, report_issue_for_unknown_type,
report_issue_for_parent_is_self, report_issue_for_unknown_parent_type,
 report_issue_for_circular_type_hierarchy)
 
 
+def not_negative_validator(field, presentation, context):
+"""
+Makes sure that the field is not negative.
+
+Can be used with the :func:`field_validator` decorator.
+"""
+
+field.default_validate(presentation, context)
+value = getattr(presentation, field.name)
+if (value is not None) and (value < 0):
--- End diff --

The field also has a type, which is enforced as a separate validation. I 
wanted this particular validator to be general purpose, so it would work with 
any kind of object that supports comparison (`__gt__` and other magic methods).


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153009028
  
--- Diff: aria/parser/presentation/context.py ---
@@ -63,3 +67,12 @@ def get_from_dict(self, *names):
 """
 
 return self.presenter._get_from_dict(*names) if self.presenter is not None else None
+
+def create_executor(self):
+if self.threads == 1:
--- End diff --

What do you mean by "initiator"? You can configure the thread count in the 
parser context, just like everything else, whenever you start the parsing 
process. Normally you don't need to do this -- the defaults should be fine. 
Only for running tests in tox (which is multiprocess) does it make sense to 
override the default and enforce single-threading.
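
What this amounts to, roughly (a sketch of the selection logic, assuming the executor classes from `aria.utils.threading`):

```python
def create_executor(self):
    # Single-threaded contexts (e.g. tests under multiprocess tox) execute
    # tasks inline and avoid thread overhead entirely
    if self.threads == 1:
        return BlockingExecutor(print_exceptions=self.print_exceptions)
    return FixedThreadPoolExecutor(size=self.threads,
                                   timeout=self.timeout,
                                   print_exceptions=self.print_exceptions)
```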


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153008880
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+try:
+# Is the presentation in the local cache?
+presentation = self._cache[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+# Create and cache new presentation
+presentation = self._create_presentation(canonical_location, 
loader, origin_presenter_class)
+self._cache[canonical_location] = presentation
 
+# Submit imports to executor
+if hasattr(presentation, '_get_import_locations'):
+import_locations = 
presentation._get_import_locations(self.context)
+if import_locations:
+for import_location in import_locations:
+import_location = UriLocation(import_location)
+executor.submit(self._present, import_location, 
canonical_location,
+presentation.__class__, executor)
+
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+
+def _create_loader(self, location, origin_canonical_location):
+loader = 
self.context.loading.loader_source.get_loader(self.context.loading, location,
+   
origin_canonical_location)
+
+canonical_location = None
+
+if origin_canonical_location is not None:
+cache_key = (origin_canonical_location, location)
+try:
+canonical_location = CANONICAL_LOCATION_CACHE[cache_key]
+return loader, canonical_location
+except KeyError:
+pass
+else:
+cache_key = None
+
+canonical_location = loader.get_canonical_location()
+
+# Because retrieving the canonical location can be costly, we will 
try to cache it
+if cache_key is not None:
+CANONICAL_LOCATION_CACHE[cache_key] = canonical_location
+
+return loader, canonical_location
+
+def _create_presentation(self, canonical_location, loader, 
def

[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153008578
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+try:
+# Is the presentation in the local cache?
+presentation = self._cache[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+# Create and cache new presentation
+presentation = self._create_presentation(canonical_location, 
loader, origin_presenter_class)
+self._cache[canonical_location] = presentation
 
+# Submit imports to executor
+if hasattr(presentation, '_get_import_locations'):
+import_locations = 
presentation._get_import_locations(self.context)
+if import_locations:
+for import_location in import_locations:
+import_location = UriLocation(import_location)
+executor.submit(self._present, import_location, 
canonical_location,
+presentation.__class__, executor)
+
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+
+def _create_loader(self, location, origin_canonical_location):
+loader = 
self.context.loading.loader_source.get_loader(self.context.loading, location,
+   
origin_canonical_location)
+
+canonical_location = None
+
+if origin_canonical_location is not None:
+cache_key = (origin_canonical_location, location)
+try:
+canonical_location = CANONICAL_LOCATION_CACHE[cache_key]
+return loader, canonical_location
+except KeyError:
+pass
+else:
+cache_key = None
+
+canonical_location = loader.get_canonical_location()
+
+# Because retrieving the canonical location can be costly, we will 
try to cache it
+if cache_key is not None:
+CANONICAL_LOCATION_CACHE[cache_key] = canonical_location
+
+return loader, canonical_location
+
+def _create_presentation(self, canonical_location, loader, 
def

[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153008582
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+try:
+# Is the presentation in the local cache?
+presentation = self._cache[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+# Create and cache new presentation
+presentation = self._create_presentation(canonical_location, 
loader, origin_presenter_class)
+self._cache[canonical_location] = presentation
 
+# Submit imports to executor
+if hasattr(presentation, '_get_import_locations'):
+import_locations = 
presentation._get_import_locations(self.context)
+if import_locations:
+for import_location in import_locations:
+import_location = UriLocation(import_location)
+executor.submit(self._present, import_location, 
canonical_location,
+presentation.__class__, executor)
+
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+
+def _create_loader(self, location, origin_canonical_location):
+loader = 
self.context.loading.loader_source.get_loader(self.context.loading, location,
+   
origin_canonical_location)
+
+canonical_location = None
+
+if origin_canonical_location is not None:
+cache_key = (origin_canonical_location, location)
+try:
+canonical_location = CANONICAL_LOCATION_CACHE[cache_key]
+return loader, canonical_location
+except KeyError:
+pass
+else:
+cache_key = None
+
+canonical_location = loader.get_canonical_location()
+
+# Because retrieving the canonical location can be costly, we will 
try to cache it
+if cache_key is not None:
+CANONICAL_LOCATION_CACHE[cache_key] = canonical_location
+
+return loader, canonical_location
+
+def _create_presentation(self, canonical_location, loader, 
def

[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153008552
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+try:
+# Is the presentation in the local cache?
+presentation = self._cache[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+# Create and cache new presentation
+presentation = self._create_presentation(canonical_location, 
loader, origin_presenter_class)
+self._cache[canonical_location] = presentation
 
+# Submit imports to executor
+if hasattr(presentation, '_get_import_locations'):
+import_locations = 
presentation._get_import_locations(self.context)
+if import_locations:
+for import_location in import_locations:
+import_location = UriLocation(import_location)
+executor.submit(self._present, import_location, 
canonical_location,
+presentation.__class__, executor)
+
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+
+def _create_loader(self, location, origin_canonical_location):
+loader = 
self.context.loading.loader_source.get_loader(self.context.loading, location,
+   
origin_canonical_location)
+
+canonical_location = None
+
+if origin_canonical_location is not None:
+cache_key = (origin_canonical_location, location)
+try:
+canonical_location = CANONICAL_LOCATION_CACHE[cache_key]
+return loader, canonical_location
+except KeyError:
+pass
+else:
+cache_key = None
+
+canonical_location = loader.get_canonical_location()
+
+# Because retrieving the canonical location can be costly, we will 
try to cache it
+if cache_key is not None:
+CANONICAL_LOCATION_CACHE[cache_key] = canonical_location
+
+return loader, canonical_location
+
+def _create_presentation(self, canonical_location, loader, 
def

[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153008314
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+try:
+# Is the presentation in the local cache?
+presentation = self._cache[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+# Create and cache new presentation
+presentation = self._create_presentation(canonical_location, 
loader, origin_presenter_class)
+self._cache[canonical_location] = presentation
 
+# Submit imports to executor
+if hasattr(presentation, '_get_import_locations'):
+import_locations = 
presentation._get_import_locations(self.context)
+if import_locations:
+for import_location in import_locations:
+import_location = UriLocation(import_location)
+executor.submit(self._present, import_location, 
canonical_location,
+presentation.__class__, executor)
+
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+
+def _create_loader(self, location, origin_canonical_location):
+loader = 
self.context.loading.loader_source.get_loader(self.context.loading, location,
+   
origin_canonical_location)
+
+canonical_location = None
+
+if origin_canonical_location is not None:
--- End diff --

Perhaps the goal wasn't clear here: "location" is relative, while 
"canonical_location" is globally absolute. So the cache key has to be a 
combination of both, hence a tuple. I will add documentation to clarify this.
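
The caching intent, spelled out as a sketch (the helper name here is mine):

```python
CANONICAL_LOCATION_CACHE = {}

def cached_canonical_location(loader, location, origin_canonical_location):
    # "location" may be relative, so by itself it does not identify a document;
    # it only does so together with the canonical location it was imported from,
    # hence the tuple cache key
    cache_key = (origin_canonical_location, location)
    try:
        return CANONICAL_LOCATION_CACHE[cache_key]
    except KeyError:
        canonical_location = loader.get_canonical_location()  # potentially costly
        CANONICAL_LOCATION_CACHE[cache_key] = canonical_location
        return canonical_location
```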


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153008025
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+try:
+# Is the presentation in the local cache?
+presentation = self._cache[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+# Create and cache new presentation
+presentation = self._create_presentation(canonical_location, 
loader, origin_presenter_class)
+self._cache[canonical_location] = presentation
 
+# Submit imports to executor
+if hasattr(presentation, '_get_import_locations'):
+import_locations = 
presentation._get_import_locations(self.context)
+if import_locations:
+for import_location in import_locations:
+import_location = UriLocation(import_location)
+executor.submit(self._present, import_location, 
canonical_location,
+presentation.__class__, executor)
+
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+
+def _create_loader(self, location, origin_canonical_location):
+loader = 
self.context.loading.loader_source.get_loader(self.context.loading, location,
+   
origin_canonical_location)
+
+canonical_location = None
--- End diff --

+1 removed


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153007941
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
--- End diff --

Yes there is. An "if" statement plus a retrieval statement are non-atomic, 
and since we are in a concurrent situation it's possible that between the if 
and the retrieval the data will be removed from the cache, causing the 
retrieval to fail with an exception even though the "if" succeeded. A single 
retrieval is atomic.

(This is generally the idiomatic "Python way" to do this -- always choose the 
atomic!)
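
A minimal illustration of the difference (names are made up):

```python
def get_presentation(cache, key):
    # Atomic: the single retrieval either succeeds or raises KeyError, even
    # if another thread evicts the entry concurrently.
    try:
        return cache[key]
    except KeyError:
        return None

def get_presentation_racy(cache, key):
    # Non-atomic: the entry may be removed between the membership check and
    # the lookup, so the lookup can raise even though the check succeeded.
    if key in cache:
        return cache[key]
    return None
```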


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153007549
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
--- End diff --

It's impossible to separate because they are produced together via the 
thread pool. I'll add method documentation to clarify.
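
For example, something along these lines (wording assumed, not the actual commit):

```python
def _present_all(self):
    """
    Presents the main location and, recursively, all locations imported from
    it, using the executor's thread pool. Because the imports are read
    concurrently together with the main presentation, the two cannot be
    produced separately; the return value is a ``(main_result, all_results)``
    tuple, where ``main_result`` is also the first item of ``all_results``.
    """
```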


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153006931
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -31,47 +32,33 @@ class Read(Consumer):
 instances.
 
 It supports agnostic raw data composition for presenters that have
-``_get_import_locations`` and ``_merge_import``.
+``_get_import_locations``, ``_validate_import``, and ``_merge_import``.
 
 To improve performance, loaders are called asynchronously on separate 
threads.
 
 Note that parsing may internally trigger more than one 
loading/reading/presentation
 cycle, for example if the agnostic raw data has dependencies that must 
also be parsed.
 """
 
-def consume(self):
-if self.context.presentation.location is None:
-self.context.validation.report('Presentation consumer: missing 
location')
-return
-
-presenter = None
-imported_presentations = None
+def __init__(self, context):
+super(Read, self).__init__(context)
+self._cache = {}
 
-executor = 
FixedThreadPoolExecutor(size=self.context.presentation.threads,
-   
timeout=self.context.presentation.timeout)
-executor.print_exceptions = 
self.context.presentation.print_exceptions
-try:
-presenter = self._present(self.context.presentation.location, 
None, None, executor)
-executor.drain()
-
-# Handle exceptions
-for e in executor.exceptions:
-self._handle_exception(e)
+def consume(self):
+# Present the main location and all imports recursively
+main, results = self._present_all()
 
-imported_presentations = executor.returns
-finally:
-executor.close()
+# Merge presentations
+main.merge(results, self.context)
 
-# Merge imports
-if (imported_presentations is not None) and hasattr(presenter, 
'_merge_import'):
-for imported_presentation in imported_presentations:
-okay = True
-if hasattr(presenter, '_validate_import'):
-okay = presenter._validate_import(self.context, 
imported_presentation)
-if okay:
-presenter._merge_import(imported_presentation)
+# Cache merged presentations
+if self.context.presentation.cache:
+for result in results:
+result.cache()
--- End diff --

Yes, it needs to be global for high-efficiency concurrent testing: there, 
we have many threads each creating their own contexts and consumers, but we 
want them to all enjoy the cached results.
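
Roughly what I mean (an illustrative sketch, not the exact ARIA code):

```python
import threading

# Module-level, so it outlives any single context or consumer instance and is
# shared by all test threads in the process.
PRESENTATION_CACHE = {}
_PRESENTATION_CACHE_LOCK = threading.Lock()

def cache_presentation(canonical_location, presentation):
    with _PRESENTATION_CACHE_LOCK:
        PRESENTATION_CACHE[canonical_location] = presentation
```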


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153006807
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -31,47 +32,33 @@ class Read(Consumer):
 instances.
 
 It supports agnostic raw data composition for presenters that have
-``_get_import_locations`` and ``_merge_import``.
+``_get_import_locations``, ``_validate_import``, and ``_merge_import``.
 
 To improve performance, loaders are called asynchronously on separate 
threads.
 
 Note that parsing may internally trigger more than one 
loading/reading/presentation
 cycle, for example if the agnostic raw data has dependencies that must 
also be parsed.
 """
 
-def consume(self):
-if self.context.presentation.location is None:
-self.context.validation.report('Presentation consumer: missing 
location')
-return
-
-presenter = None
-imported_presentations = None
+def __init__(self, context):
+super(Read, self).__init__(context)
+self._cache = {}
 
-executor = 
FixedThreadPoolExecutor(size=self.context.presentation.threads,
-   
timeout=self.context.presentation.timeout)
-executor.print_exceptions = 
self.context.presentation.print_exceptions
-try:
-presenter = self._present(self.context.presentation.location, 
None, None, executor)
-executor.drain()
-
-# Handle exceptions
-for e in executor.exceptions:
-self._handle_exception(e)
+def consume(self):
+# Present the main location and all imports recursively
+main, results = self._present_all()
--- End diff --

+1 renamed to "main_result" and "all_results"


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153006422
  
--- Diff: extensions/aria_extension_tosca/simple_v1_0/assignments.py ---
@@ -144,6 +144,17 @@ def _get_type(self, context):
 # In RelationshipAssignment
 the_type = the_type[0] # This could be a RelationshipTemplate
 
+if isinstance(self._container._container, 
RequirementAssignment):
--- End diff --

+1 refactored


---


[GitHub] incubator-ariatosca issue #199: ARIA-392 Failing to load ruamel.yaml

2017-11-09 Thread tliron
Github user tliron commented on the issue:

https://github.com/apache/incubator-ariatosca/pull/199
  
OK. So I guess you don't want to change the imports in `/tests/`?

On Wed, Nov 8, 2017 at 4:32 PM, Maxim Orlov 
wrote:

> so, i think we're ready
>



---


[GitHub] incubator-ariatosca issue #199: ARIA-392 Failing to load ruamel.yaml

2017-11-08 Thread tliron
Github user tliron commented on the issue:

https://github.com/apache/incubator-ariatosca/pull/199
  
I think we are. Also: I only suggest; you can of course decide differently.
Maybe do one more commit to make sure we are on the same page? Also feel
free to rebase and squash, it's actually easier for a small PR like this.

On Wed, Nov 8, 2017 at 2:44 PM, Maxim Orlov 
wrote:

> OK, np just wanted to make sure we're on the same page...
>



---


[GitHub] incubator-ariatosca issue #199: ARIA-392 Failing to load ruamel.yaml

2017-11-08 Thread tliron
Github user tliron commented on the issue:

https://github.com/apache/incubator-ariatosca/pull/199
  
Still, I don't see why it should be in `aria/__init__.py` and always be
activated. I think it should be activated only for a module that
specifically uses ruamel. It would still only happen once.

On Wed, Nov 8, 2017 at 2:31 PM, Maxim Orlov 
wrote:

> This code only runs once, it doesn't provide with a patched up ruamel, it
> patched the loading mechanism...so loading should work everywhere, as long
> as the code was run...
>



---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-08 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r149718206
  
--- Diff: 
tests/extensions/aria_extension_tosca/simple_nfv_v1_0/test_profile.py ---
@@ -0,0 +1,20 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
--- End diff --

You are looking here specifically at the test to make sure the NFV profile 
is valid. This is not where the unit tests for the parser lie.


---


[GitHub] incubator-ariatosca issue #199: ARIA-392 Failing to load ruamel.yaml

2017-11-08 Thread tliron
Github user tliron commented on the issue:

https://github.com/apache/incubator-ariatosca/pull/199
  
@mxmrlv You didn't address these:

1. Let's move this to `from aria.utils.yaml import yaml`. I don't think the 
aria `__init__.py` is the right place for this.
2. Let's go over *all* places in the codebase where we import ruamel to use 
the above.


---


[GitHub] incubator-ariatosca issue #200: ARIA-393 Enable configuration of extension l...

2017-11-06 Thread tliron
Github user tliron commented on the issue:

https://github.com/apache/incubator-ariatosca/pull/200
  
Could you also add this to the doc?

`:type strict: bool`

After that it's fine by me to squash and merge!


---


[GitHub] incubator-ariatosca pull request #199: ARIA-392 Failing to load ruamel.yaml

2017-11-06 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/199#discussion_r149136650
  
--- Diff: aria/__init__.py ---
@@ -17,14 +17,45 @@
 The ARIA root package provides entry points for extension and storage 
initialization.
 """
 
+import os
 import sys
+import types
 
 from pkgutil import iter_modules
 import pkg_resources
-
 aria_package_name = 'apache-ariatosca'
 __version__ = pkg_resources.get_distribution(aria_package_name).version
 
+
+
+
+try:
+import ruamel   # noqa: F401
+except ImportError:
--- End diff --

I just think `from aria.utils.yaml import yaml` will give us a clean 
separation in case we have a better fix in the future, so I do prefer that.

Also, for this PR to merge we need to change *all* our uses of ruamel in 
the codebase to that import.


---


[GitHub] incubator-ariatosca pull request #199: ARIA-392 Failing to load ruamel.yaml

2017-11-02 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/199#discussion_r148628206
  
--- Diff: aria/__init__.py ---
@@ -17,14 +17,45 @@
 The ARIA root package provides entry points for extension and storage 
initialization.
 """
 
+import os
 import sys
+import types
 
 from pkgutil import iter_modules
 import pkg_resources
-
 aria_package_name = 'apache-ariatosca'
 __version__ = pkg_resources.get_distribution(aria_package_name).version
 
+
+
+
+try:
+import ruamel   # noqa: F401
+except ImportError:
--- End diff --

Another follow-up: if this is still an issue with newer versions of ruamel, 
I strongly suggest we open a bug with that project. I've done it in the past and 
have gotten things fixed. If you do that, make sure to link that bug report in 
a code comment here -- in case it's solved in the future, we should remove our 
workaround.


---


[GitHub] incubator-ariatosca pull request #199: ARIA-392 Failing to load ruamel.yaml

2017-11-02 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/199#discussion_r148621970
  
--- Diff: aria/__init__.py ---
@@ -17,14 +17,45 @@
 The ARIA root package provides entry points for extension and storage 
initialization.
 """
 
+import os
 import sys
+import types
 
 from pkgutil import iter_modules
 import pkg_resources
-
 aria_package_name = 'apache-ariatosca'
 __version__ = pkg_resources.get_distribution(aria_package_name).version
 
+
+
+
+try:
+import ruamel   # noqa: F401
+except ImportError:
--- End diff --

A recent commit removed Py2.6 support and allowed us to upgrade ruamel to 
its latest version. So, I suggest rebasing on master and testing again: perhaps 
this hack isn't needed anymore.

Assuming this hack is still needed, we need to find a way to generalize it, 
since we import ruamel in many, many places in the code. Perhaps we should have 
an {{aria.utils.yaml}} package where we do all this work once. Every other place 
we use yaml would then have to be changed to {{from aria.utils.yaml import 
yaml}}. A nice side benefit would be that it would be easy to replace ruamel 
with a different yaml parser if that is ever necessary.
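
Something along these lines (a sketch only, with an assumed module layout):

```python
# aria/utils/yaml.py -- the single place that knows how to import ruamel;
# whatever loading workaround we end up with (see ARIA-392) would run here,
# once, before this import. The rest of the codebase only ever does
# ``from aria.utils.yaml import yaml``.
from ruamel import yaml  # noqa: F401
```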


---


[GitHub] incubator-ariatosca pull request #199: ARIA-392 Failing to load ruamel.yaml

2017-11-02 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/199#discussion_r148620801
  
--- Diff: aria/__init__.py ---
@@ -17,14 +17,45 @@
 The ARIA root package provides entry points for extension and storage 
initialization.
 """
 
+import os
 import sys
+import types
 
 from pkgutil import iter_modules
 import pkg_resources
-
 aria_package_name = 'apache-ariatosca'
 __version__ = pkg_resources.get_distribution(aria_package_name).version
 
+
+
+
+try:
+import ruamel   # noqa: F401
--- End diff --

I think we've decided to remove IDE-specific comments.


---


[GitHub] incubator-ariatosca issue #200: ARIA-393 Enable configuration of extension l...

2017-11-02 Thread tliron
Github user tliron commented on the issue:

https://github.com/apache/incubator-ariatosca/pull/200
  
Could you just fix the Sphinx docstring to include the argument?


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-10-31 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/207

ARIA-1 Parser test suite

This commit additionally fixes many parser bugs revealed by the test
suite, which includes adding validations that were missing.

A new "extensions" tox suite is introduced.

The /tests/parser cases were refactored into /tests/topology and
/tests/extensions.

The Hello World example was fixed and refactored, as it in fact had
invalid TOSCA (it previously passed due to a missing validation).

Parser performance was greatly improved by:

1. Switching to the YAML C library
2. Aggressive caching of parsed presentations
3. The ability to skip importing the TOSCA profile
4. A new deepcopy_fast util
5. A new BlockingExecutor that is much faster for single-threaded use
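
For illustration, such an executor can be as simple as the sketch below (an
assumed shape, not the actual aria.utils.threading code): tasks run inline
in the calling thread, so single-threaded parsing pays no thread-pool or
queue overhead.

```python
class BlockingExecutor(object):
    """Executes submitted tasks immediately in the calling thread."""

    def __init__(self):
        self.returns = []
        self.exceptions = []

    def submit(self, func, *args, **kwargs):
        try:
            self.returns.append(func(*args, **kwargs))
        except Exception as e:  # pylint: disable=broad-except
            self.exceptions.append(e)

    def drain(self):
        pass  # nothing is pending; everything already ran inline

    def close(self):
        pass
```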

Unicode is now fully supported for all validation and log messages. This
requires the use of unicode (u'' notation) for all .format specs.

Additionally, PyLint comment directives have been standardized by
pushing them to column 100.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-1-parser-test-suite

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/207.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #207


commit 22cee6d90c5873230eabe7449b5bbccf33222bad
Author: Tal Liron 
Date:   2017-08-17T22:50:27Z

ARIA-1 Parser test suite

This commit additionally fixes many parser bugs revealed by the test
suite, which includes adding validations that were missing.

A new "extensions" tox suite is introduced.

The /tests/parser cases were refactored into /tests/topology and
/tests/extensions.

The Hello World example was fixed and refactored, as it in fact had
invalid TOSCA (it previously passed due to a missing validation).

Parser performance was greatly improved by:

1. Switching to the YAML C library
2. Aggressive caching of parsed presentations
3. The ability to skip importing the TOSCA profile
4. A new deepcopy_fast util
5. A new BlockingExecutor that is much faster for single-threaded use

Unicode is now fully supported for all validation and log messages. This
requires the use of unicode (u'' notation) for all .format specs.

Additionally, PyLint comment directives have been standardized by
pushing them to column 100.




---


[GitHub] incubator-ariatosca pull request #206: ARIA-405 Remove support for Python 2....

2017-10-30 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/206

ARIA-405 Remove support for Python 2.6

* setup.py now requires Python 2.7
* Upgrade all 3rd party libraries to recent versions
* API changes to networkx
* Stricter yaml.load call for ruamel.yaml
* Remove iter_modules implementation for Python 2.6
* Remove NullHandler implementation for Python 2.6
* Remove "py26" tox test environments

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-405-remove-py26

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/206.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #206


commit b1f03efcedd814f5fe38f051bfa0106cc715586f
Author: Tal Liron 
Date:   2017-10-30T21:56:57Z

ARIA-405 Remove support for Python 2.6

* setup.py now requires Python 2.7
* Upgrade all 3rd party libraries to recent versions
* API changes to networkx
* Stricter yaml.load call for ruamel.yaml
* Remove iter_modules implementation for Python 2.6
* Remove NullHandler implementation for Python 2.6
* Remove "py26" tox test environments




---


[GitHub] incubator-ariatosca pull request #201: ARIA-395 Fix AppVeyor failures due to...

2017-10-30 Thread tliron
Github user tliron closed the pull request at:

https://github.com/apache/incubator-ariatosca/pull/201


---


[GitHub] incubator-ariatosca pull request #201: ARIA-395 Fix AppVeyor failures due to...

2017-10-23 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/201

ARIA-395 Fix AppVeyor failures due to use of SSL



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-395-appveyor-failures

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/201.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #201


commit a7c381fbace2ffe0435e9d59cd7a8aedd1f18061
Author: Tal Liron 
Date:   2017-10-23T17:08:59Z

ARIA-395 Fix AppVeyor failures due to use of SSL




---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-11 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r138103044
  
--- Diff: .travis.yml ---
@@ -10,36 +10,55 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-sudo: false
+# We need to set "sudo: true" in order to use a virtual machine instead of 
a container, because
+# SSH tests fail in the container. See:
+# 
https://docs.travis-ci.com/user/reference/overview/#Virtualization-environments
+
+dist: trusty
+sudo: true
 
 language: python
 
-dist: precise
+addons:
+  apt:
+sources:
+  - sourceline: 'ppa:fkrull/deadsnakes'
+packages:
+  # Ubuntu 14.04 (trusty) does not come with Python 2.6, so we will 
install it from Felix
+  # Krull's PPA
+  - python2.6
+  - python2.6-dev
+
 python:
+  # We handle Python 2.6 testing from within tox (see tox.ini); note that 
this means that we run
+  # tox itself always from Python 2.7
   - '2.7'
 
 env:
-  - TOX_ENV=pylint_code
-  - TOX_ENV=pylint_tests
-  - TOX_ENV=py27
-  - TOX_ENV=py26
-  - TOX_ENV=py27e2e
-  - TOX_ENV=py26e2e
-  - TOX_ENV=py27ssh
-  - TOX_ENV=py26ssh
-  - TOX_ENV=docs
-
-install:
+  # The PYTEST_PROCESSES environment var is used in tox.ini to override 
the --numprocesses argument
+  # for PyTest's xdist plugin. The reason this is necessary is that 
conventional Travis environments
+  # may report a large number of available CPUs, but they are greatly 
restricted. Through trial
+  # and error we found that more than 1 process may result in failures.
+  - PYTEST_PROCESSES=1 TOX_ENV=pylint_code
+  - PYTEST_PROCESSES=1 TOX_ENV=pylint_tests
+  - PYTEST_PROCESSES=1 TOX_ENV=py27
+  - PYTEST_PROCESSES=1 TOX_ENV=py26
+  - PYTEST_PROCESSES=1 TOX_ENV=py27e2e
+  - PYTEST_PROCESSES=1 TOX_ENV=py26e2e
+  - PYTEST_PROCESSES=1 TOX_ENV=py27ssh
+  - PYTEST_PROCESSES=1 TOX_ENV=py26ssh
+  - PYTEST_PROCESSES=1 TOX_ENV=docs
+
+before_install:
+  # Create SSH keys for SSH tests
+  - ssh-keygen -f $HOME/.ssh/id_rsa -t rsa -N ''
+  - cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
+
+  # Python dependencies
   - pip install --upgrade pip
   - pip install --upgrade setuptools
   - pip install tox
-
-script:
-  - pip --version
   - tox --version
-  - PYTEST_PROCESSES=1 tox -e $TOX_ENV
--- End diff --

While working on this file it seemed odd to me that this one env variable 
is not in the .travis.yml "env" section. It makes sense to group it here, too, 
because in the future we might find that some tox envs run fine with 
multiprocess. (The Travis VMs/containers all have 2 cores, actually. I still 
don't know why our tests fail when we switch to 2 processes.)


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-11 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r138102421
  
--- Diff: .travis.yml ---
@@ -10,36 +10,55 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-sudo: false
+# We need to set "sudo: true" in order to use a virtual machine instead of 
a container, because
+# SSH tests fail in the container. See:
+# 
https://docs.travis-ci.com/user/reference/overview/#Virtualization-environments
+
+dist: trusty
+sudo: true
 
 language: python
 
-dist: precise
+addons:
+  apt:
+sources:
+  - sourceline: 'ppa:fkrull/deadsnakes'
+packages:
+  # Ubuntu 14.04 (trusty) does not come with Python 2.6, so we will 
install it from Felix
+  # Krull's PPA
+  - python2.6
+  - python2.6-dev
+
 python:
+  # We handle Python 2.6 testing from within tox (see tox.ini); note that 
this means that we run
+  # tox itself always from Python 2.7
   - '2.7'
 
 env:
-  - TOX_ENV=pylint_code
-  - TOX_ENV=pylint_tests
-  - TOX_ENV=py27
-  - TOX_ENV=py26
-  - TOX_ENV=py27e2e
-  - TOX_ENV=py26e2e
-  - TOX_ENV=py27ssh
-  - TOX_ENV=py26ssh
-  - TOX_ENV=docs
-
-install:
+  # The PYTEST_PROCESSES environment var is used in tox.ini to override 
the --numprocesses argument
+  # for PyTest's xdist plugin. The reason this is necessary is that 
conventional Travis environments
+  # may report a large number of available CPUs, but they are greatly 
restricted. Through trial
+  # and error we found that more than 1 process may result in failures.
+  - PYTEST_PROCESSES=1 TOX_ENV=pylint_code
+  - PYTEST_PROCESSES=1 TOX_ENV=pylint_tests
+  - PYTEST_PROCESSES=1 TOX_ENV=py27
+  - PYTEST_PROCESSES=1 TOX_ENV=py26
+  - PYTEST_PROCESSES=1 TOX_ENV=py27e2e
+  - PYTEST_PROCESSES=1 TOX_ENV=py26e2e
+  - PYTEST_PROCESSES=1 TOX_ENV=py27ssh
+  - PYTEST_PROCESSES=1 TOX_ENV=py26ssh
+  - PYTEST_PROCESSES=1 TOX_ENV=docs
+
+before_install:
+  # Create SSH keys for SSH tests
+  - ssh-keygen -f $HOME/.ssh/id_rsa -t rsa -N ''
+  - cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
+
+  # Python dependencies
   - pip install --upgrade pip
   - pip install --upgrade setuptools
   - pip install tox
-
-script:
-  - pip --version
--- End diff --

While working on fixing the tests, I actually added a whole bunch of stuff 
here to assist debugging. And then it occurred to me, why test pip version 
specifically? There is so much other stuff here that might be useful. (Also, we 
explicitly upgrade pip to the latest version when the test runs.) I think it's 
more confusing to have this here. If someone needs to debug something specific 
with the test mechanism on Travis, they can add whatever they want here (like I 
did) that is relevant.


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-11 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r138101902
  
--- Diff: .travis.yml ---
@@ -10,36 +10,55 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-sudo: false
+# We need to set "sudo: true" in order to use a virtual machine instead of 
a container, because
+# SSH tests fail in the container. See:
+# 
https://docs.travis-ci.com/user/reference/overview/#Virtualization-environments
+
+dist: trusty
+sudo: true
 
 language: python
 
-dist: precise
+addons:
+  apt:
+sources:
+  - sourceline: 'ppa:fkrull/deadsnakes'
+packages:
+  # Ubuntu 14.04 (trusty) does not come with Python 2.6, so we will 
install it from Felix
+  # Krull's PPA
+  - python2.6
+  - python2.6-dev
+
 python:
+  # We handle Python 2.6 testing from within tox (see tox.ini); note that 
this means that we run
+  # tox itself always from Python 2.7
   - '2.7'
 
 env:
-  - TOX_ENV=pylint_code
-  - TOX_ENV=pylint_tests
-  - TOX_ENV=py27
-  - TOX_ENV=py26
-  - TOX_ENV=py27e2e
-  - TOX_ENV=py26e2e
-  - TOX_ENV=py27ssh
-  - TOX_ENV=py26ssh
-  - TOX_ENV=docs
-
-install:
+  # The PYTEST_PROCESSES environment var is used in tox.ini to override 
the --numprocesses argument
+  # for PyTest's xdist plugin. The reason this is necessary is that 
conventional Travis environments
+  # may report a large number of available CPUs, but they are greatly 
restricted. Through trial
+  # and error we found that more than 1 process may result in failures.
+  - PYTEST_PROCESSES=1 TOX_ENV=pylint_code
+  - PYTEST_PROCESSES=1 TOX_ENV=pylint_tests
+  - PYTEST_PROCESSES=1 TOX_ENV=py27
+  - PYTEST_PROCESSES=1 TOX_ENV=py26
+  - PYTEST_PROCESSES=1 TOX_ENV=py27e2e
+  - PYTEST_PROCESSES=1 TOX_ENV=py26e2e
+  - PYTEST_PROCESSES=1 TOX_ENV=py27ssh
+  - PYTEST_PROCESSES=1 TOX_ENV=py26ssh
+  - PYTEST_PROCESSES=1 TOX_ENV=docs
+
+before_install:
+  # Create SSH keys for SSH tests
+  - ssh-keygen -f $HOME/.ssh/id_rsa -t rsa -N ''
--- End diff --

As I mentioned in Slack, I added the use of keys while trying to fix the 
SSH tests, but when I finally managed to get them passing I decided to keep 
this change because I feel it tests more things: specifically the Python 
cryptography support. Since using keys instead of passwords is by far 
more common in cloud platforms (it should be required, really!) I thought it 
would make more sense to keep this test as is.

Also note that our SSH tests are specifically designed for Travis anyway, 
so I don't see a problem with the .travis.yml file corresponding to it.


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-06 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137270329
  
--- Diff: examples/clearwater/scripts/bono/delete.sh ---
@@ -0,0 +1,15 @@
+#!/bin/bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
--- End diff --

+1 added TODO


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137070505
  
--- Diff: aria/modeling/service_instance.py ---
@@ -228,6 +228,80 @@ def service_template_fk(cls):
 :type: :class:`~datetime.datetime`
 """)
 
+def get_node_by_type(self, type_name):
+"""
+Finds the first node of a type (or descendent type).
+"""
+service_template = self.service_template
+
+if service_template is not None:
+node_types = service_template.node_types
+if node_types is not None:
+for node in self.nodes.itervalues():
+if node_types.is_descendant(type_name, node.type.name):
+return node
+
+return None
+
+def get_policy_by_type(self, type_name):
+"""
+Finds the first policy of a type (or descendent type).
+"""
+service_template = self.service_template
+
+if service_template is not None:
+policy_types = service_template.policy_types
+if policy_types is not None:
+for policy in self.policies.itervalues():
+if policy_types.is_descendant(type_name, 
policy.type.name):
+return policy
+
+return None
+
+def satisfy_requirements(self):
+satisfied = True
+for node in self.nodes.itervalues():
+if not node.satisfy_requirements():
+satisfied = False
+return satisfied
+
+def validate_capabilities(self):
+satisfied = True
+for node in self.nodes.itervalues():
+if not node.validate_capabilities():
+satisfied = False
+return satisfied
+
+def find_hosts(self):
+for node in self.nodes.itervalues():
+node.find_host()
+
+def configure_operations(self):
+for node in self.nodes.itervalues():
+node.configure_operations()
+for group in self.groups.itervalues():
+group.configure_operations()
+for operation in self.workflows.itervalues():
+operation.configure()
+
+def is_node_a_target(self, target_node):
+for node in self.nodes.itervalues():
+if self._is_node_a_target(node, target_node):
+return True
+return False
+
+def _is_node_a_target(self, source_node, target_node):
--- End diff --

This is a leftover utility function from a previous experiment of mine (for 
solving an SMTP config challenge). It's not needed anymore, so I will just 
remove it.


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137062291
  
--- Diff: aria/modeling/service_instance.py ---
@@ -228,6 +228,80 @@ def service_template_fk(cls):
 :type: :class:`~datetime.datetime`
 """)
 
+def get_node_by_type(self, type_name):
+"""
+Finds the first node of a type (or descendent type).
+"""
+service_template = self.service_template
+
+if service_template is not None:
+node_types = service_template.node_types
+if node_types is not None:
+for node in self.nodes.itervalues():
+if node_types.is_descendant(type_name, node.type.name):
+return node
+
+return None
+
+def get_policy_by_type(self, type_name):
+"""
+Finds the first policy of a type (or descendent type).
+"""
+service_template = self.service_template
+
+if service_template is not None:
+policy_types = service_template.policy_types
+if policy_types is not None:
+for policy in self.policies.itervalues():
+if policy_types.is_descendant(type_name, 
policy.type.name):
+return policy
+
+return None
+
+def satisfy_requirements(self):
--- End diff --

The `get_node_by_type` and `get_policy_by_type` are utility functions for 
use by the ctx proxy. You can see them being used in 
`clearwater/scripts/hosts/configure.sh`. I see them as being very useful for 
users.


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137052935
  
--- Diff: 
extensions/aria_extension_tosca/simple_v1_0/modeling/data_types.py ---
@@ -159,10 +159,6 @@ def get_data_type(context, presentation, field_name, 
allow_none=False):
 else:
 return str
 
-# Make sure not derived from self
-if type_name == presentation._name:
--- End diff --

The previous code was broken and threw an exception when reaching those 
lines. I found the bug by accident while working on this so just decided to do 
a spot fix.


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137049267
  
--- Diff: examples/clearwater/scripts/ralf/create.sh ---
@@ -0,0 +1,15 @@
+#!/bin/bash
--- End diff --

In terms of software installation, installing Dime will install 
Ralf+Homestead. However, those components run entirely differently in terms of 
configuration, logging, networking, etc. I imagine that in the multi-node 
Clearwater there will be things in this script to configure networking (open 
ports) for Ralf specifically. So, again, this is left here as a TODO.


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137048198
  
--- Diff: examples/clearwater/scripts/bono/delete.sh ---
@@ -0,0 +1,15 @@
+#!/bin/bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
--- End diff --

I'm leaving those as TODOs -- we do need a way to uninstall, too, no?


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137048035
  
--- Diff: examples/clearwater/clearwater-live-test-existing.yaml ---
@@ -0,0 +1,54 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: >-
+  Project Clearwater is an open-source IMS core, developed by Metaswitch 
Networks and released under
+  the GNU GPLv3.
+
+metadata:
+  template_name: clearwater-live-test-existing
+  template_author: ARIA
+  template_version: '1.0'
+  aria_version: '0.1.2'
+
+imports:
+  - types/clearwater.yaml
+  - aria-1.0
+
+topology_template:
+
+  inputs:
+hosts.ssh.user:
+  type: string
+hosts.ssh.password:
--- End diff --

I was thinking ahead here to the future template that will have multiple 
hosts. For consistency, I thought to use the "hosts." prefix so it would apply 
to all hosts. This is a simpler example with only one host.


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137048061
  
--- Diff: examples/clearwater/clearwater-single-existing.yaml ---
@@ -0,0 +1,147 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: >-
+  Project Clearwater is an open-source IMS core, developed by Metaswitch 
Networks and released under
+  the GNU GPLv3.
+
+metadata:
+  template_name: clearwater-single-existing
+  template_author: ARIA
+  template_version: '1.0'
+  aria_version: '0.1.2'
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137047729
  
--- Diff: examples/clearwater/clearwater-live-test-existing.yaml ---
@@ -0,0 +1,54 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: >-
+  Project Clearwater is an open-source IMS core, developed by Metaswitch 
Networks and released under
+  the GNU GPLv3.
+
+metadata:
+  template_name: clearwater-live-test-existing
+  template_author: ARIA
+  template_version: '1.0'
+  aria_version: '0.1.2'
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-08-15 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/191

ARIA-321 Provide Clearwater IMS example

Related fixes included in this commit:

* Allows capabilities, interfaces, and properties to override parent
definition types only if the new type is a descendant of the overridden
type
* Fix bugs in model instrumentation (to allow ctx access to capability
properties and complex values)
* Fix to get_property intrinsic function
* Fix the "required" field for parameters (it wasn't working)
* Don't let scalar values be negative
* Doc fixes related to ARIA-277

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca ARIA-321-clearwater

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/191.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #191


commit 8bb52180a34ca5617cc89c1ac39b152e0bda7d25
Author: Tal Liron 
Date:   2017-07-27T22:58:17Z

ARIA-321 Provide Clearwater IMS example

Related fixes included in this commit:

* Allows capabilities, interfaces, and properties to override parent
definition types only if the new type is a descendant of the overridden
type
* Fix bugs in model instrumentation (to allow ctx access to capability
properties and complex values)
* Fix to get_property intrinsic function
* Fix the "required" field for parameters (it wasn't working)
* Don't let scalar values be negative
* Doc fixes related to ARIA-277




---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-08 Thread tliron
Github user tliron closed the pull request at:

https://github.com/apache/incubator-ariatosca/pull/188


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-08 Thread tliron
GitHub user tliron reopened a pull request:

https://github.com/apache/incubator-ariatosca/pull/188

ARIA-324 Refactor ctx proxy access

Our previous use of "." to delimit nested dict keys was wrong (keys
could have a ".") and inflexible. The new implementation uses subsequent
args to move into the dict.

The same format can now be used to access object attributes.

This commit also changes how to support setting values: we must now use
"=" as the penultimate argument with the new value following.

Also fixed: callables will now "grab" the number of args they need
instead of all remaining args, making it possible to do further
inspection on the returned value from the callable. To allow for this,
kwargs are now expected as the first arg rather than the last.

Furthermore, this commit fixes a bad null check in the ctx client, and
also allows it to retrieve Unicode data.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-324-refactor-ctx-access

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/188.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #188


commit 8a3f8b73560350b40d6c2ff47d05f589b0be
Author: Tal Liron 
Date:   2017-07-27T22:58:17Z

ARIA-324 Refactor ctx proxy access

Our previous use of "." to delimit nested dict keys was wrong (keys
could have a ".") and inflexible. The new implementation uses subsequent
args to move into the dict. The same format can now be used to access
object attributes.

This commit also changes how to support setting values: we must now use
"=" as the penultimate argument with the new value following.

Calling functions now requires delimiting function arguments via "[" and
"]" arguments. This makes it possible to continue accessing the return
value of a function after the "]".

Also fixed: callables will now "grab" the number of args they need
instead of all remaining args, making it possible to do further
inspection on the returned value from the callable. To allow for this,
kwargs are now expected as the first arg rather than the last.

Relatedly, this commit instruments all parameter fields from all models
and fixes related bugs in the instrumentation implementation.

Furthermore, this commit fixes a bad null check in the ctx client, and
also allows it to retrieve Unicode data.




---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-08 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131909427
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -170,101 +178,59 @@ def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-man
 modifying_key = None
 modifying_value = None
 
-num_args = len(args)
-
-while index < num_args:
-arg = args[index]
-
-# Object attribute
-attr = _desugar_attr(current, arg)
-if attr is not None:
-current = getattr(current, attr)
-
-# List entry
-elif isinstance(current, list) and _is_int(arg):
-current = current[int(arg)]
-
-# Dict (or dict-like object) value
-elif hasattr(current, '__getitem__'):
-if modifying and (not arg in current):
-current[arg] = {}
-current = current[arg]
-
-# Call
-elif callable(current):
-if isinstance(current, functools.partial):
-argspec = inspect.getargspec(current.func)
-# Don't count initial args fulfilled by the partial
-spec_args = argspec.args[len(current.args):]
-# Don't count keyword args fulfilled by the partial
-spec_args = tuple(a for a in spec_args if a not in 
current.keywords)
-else:
-argspec = inspect.getargspec(current)
-spec_args = argspec.args
-
-callable_kwargs = {}
-if isinstance(arg, dict):
-# If the first arg is a dict, treat it as our kwargs
-# TODO: what if the first argument really needs to be a 
dict?
-callable_kwargs = arg
-index += 1
-
-if argspec.varargs is not None:
-# Gobble the rest of the args
-callable_args = args[index:]
-else:
-# Take only what we need
-spec_args = tuple(a for a in spec_args if a not in 
callable_kwargs)
-spec_args_count = len(spec_args)
-if inspect.ismethod(current):
-# Don't count "self" argument
-spec_args_count -= 1
-callable_args = args[index:index + spec_args_count]
-# Note: we might actually have fewer args than the 
args_count, but that could be OK
-# if the user expects subsequent args to have default 
values
-
-args_count = len(callable_args)
-if args_count > 1:
-index += args_count - 1
-
-current = current(*callable_args, **callable_kwargs)
-
-else:
-raise RuntimeError('`{0}` cannot be processed in 
{1}'.format(arg, args))
-
-index += 1
-
-if callable(current):
-current = current()
+# Parse all arguments
+while len(args) > 0:
+obj, args = _parse_argument(obj, args, modifying)
 
 if modifying:
-if hasattr(current, '__setitem__'):
-# Modify dict value
-current[modifying_key] = modifying_value
-else:
+if hasattr(obj, '__setitem__'):
+# Modify item value (dict, list, and similar)
+if isinstance(obj, (list, tuple)):
+modifying_key = int(modifying_key)
+obj[modifying_key] = modifying_value
+elif hasattr(obj, modifying_key):
 # Modify object attribute
-setattr(current, modifying_key, modifying_value)
-
-return current
+setattr(obj, modifying_key, modifying_value)
+else:
+raise ProcessingError('Cannot modify `{0}` of `{1!r}`'
+  .format(modifying_key, obj))
 
+return obj
 
-def _desugar_attr(obj, attr):
-if not isinstance(attr, basestring):
-return None
-if hasattr(obj, attr):
-return attr
-attr = attr.replace('-', '_')
-if hasattr(obj, attr):
-return attr
-return None
 
+def _parse_argument(obj, args, modifying):
+args = list(args)
+arg = args.pop(0)
 
-def _is_int(arg):
-try:
-int(arg)
-except ValueError:
-return False
-return True
+# Call?
+if arg == '[':
+# TODO: should there be a way to escape "[" and 

[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-08 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131909167
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -170,101 +178,59 @@ def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-man
 modifying_key = None
 modifying_value = None
 
-num_args = len(args)
-
-while index < num_args:
-arg = args[index]
-
-# Object attribute
-attr = _desugar_attr(current, arg)
-if attr is not None:
-current = getattr(current, attr)
-
-# List entry
-elif isinstance(current, list) and _is_int(arg):
-current = current[int(arg)]
-
-# Dict (or dict-like object) value
-elif hasattr(current, '__getitem__'):
-if modifying and (not arg in current):
-current[arg] = {}
-current = current[arg]
-
-# Call
-elif callable(current):
-if isinstance(current, functools.partial):
-argspec = inspect.getargspec(current.func)
-# Don't count initial args fulfilled by the partial
-spec_args = argspec.args[len(current.args):]
-# Don't count keyword args fulfilled by the partial
-spec_args = tuple(a for a in spec_args if a not in 
current.keywords)
-else:
-argspec = inspect.getargspec(current)
-spec_args = argspec.args
-
-callable_kwargs = {}
-if isinstance(arg, dict):
-# If the first arg is a dict, treat it as our kwargs
-# TODO: what if the first argument really needs to be a 
dict?
-callable_kwargs = arg
-index += 1
-
-if argspec.varargs is not None:
-# Gobble the rest of the args
-callable_args = args[index:]
-else:
-# Take only what we need
-spec_args = tuple(a for a in spec_args if a not in 
callable_kwargs)
-spec_args_count = len(spec_args)
-if inspect.ismethod(current):
-# Don't count "self" argument
-spec_args_count -= 1
-callable_args = args[index:index + spec_args_count]
-# Note: we might actually have fewer args than the 
args_count, but that could be OK
-# if the user expects subsequent args to have default 
values
-
-args_count = len(callable_args)
-if args_count > 1:
-index += args_count - 1
-
-current = current(*callable_args, **callable_kwargs)
-
-else:
-raise RuntimeError('`{0}` cannot be processed in 
{1}'.format(arg, args))
-
-index += 1
-
-if callable(current):
-current = current()
+# Parse all arguments
+while len(args) > 0:
+obj, args = _parse_argument(obj, args, modifying)
 
 if modifying:
-if hasattr(current, '__setitem__'):
-# Modify dict value
-current[modifying_key] = modifying_value
-else:
+if hasattr(obj, '__setitem__'):
--- End diff --

Correct, this is *exactly* a case for ducktyping. :) Any object can be 
"dict-like" here by just implementing this magic method, and ctx will be happy.

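For example (illustrative only -- the class name is made up):

```python
class AttributeBag(object):
    """Any object with ``__setitem__`` is "dict-like" enough for the proxy."""

    def __init__(self):
        self._data = {}

    def __setitem__(self, key, value):
        self._data[key] = value

    def __getitem__(self, key):
        return self._data[key]
```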

---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-07 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131703940
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -170,101 +178,59 @@ def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-man
 modifying_key = None
 modifying_value = None
 
-num_args = len(args)
-
-while index < num_args:
-arg = args[index]
-
-# Object attribute
-attr = _desugar_attr(current, arg)
-if attr is not None:
-current = getattr(current, attr)
-
-# List entry
-elif isinstance(current, list) and _is_int(arg):
-current = current[int(arg)]
-
-# Dict (or dict-like object) value
-elif hasattr(current, '__getitem__'):
-if modifying and (not arg in current):
-current[arg] = {}
-current = current[arg]
-
-# Call
-elif callable(current):
-if isinstance(current, functools.partial):
-argspec = inspect.getargspec(current.func)
-# Don't count initial args fulfilled by the partial
-spec_args = argspec.args[len(current.args):]
-# Don't count keyword args fulfilled by the partial
-spec_args = tuple(a for a in spec_args if a not in 
current.keywords)
-else:
-argspec = inspect.getargspec(current)
-spec_args = argspec.args
-
-callable_kwargs = {}
-if isinstance(arg, dict):
-# If the first arg is a dict, treat it as our kwargs
-# TODO: what if the first argument really needs to be a 
dict?
-callable_kwargs = arg
-index += 1
-
-if argspec.varargs is not None:
-# Gobble the rest of the args
-callable_args = args[index:]
-else:
-# Take only what we need
-spec_args = tuple(a for a in spec_args if a not in 
callable_kwargs)
-spec_args_count = len(spec_args)
-if inspect.ismethod(current):
-# Don't count "self" argument
-spec_args_count -= 1
-callable_args = args[index:index + spec_args_count]
-# Note: we might actually have fewer args than the 
args_count, but that could be OK
-# if the user expects subsequent args to have default 
values
-
-args_count = len(callable_args)
-if args_count > 1:
-index += args_count - 1
-
-current = current(*callable_args, **callable_kwargs)
-
-else:
-raise RuntimeError('`{0}` cannot be processed in 
{1}'.format(arg, args))
-
-index += 1
-
-if callable(current):
-current = current()
+# Parse all arguments
+while len(args) > 0:
+obj, args = _parse_argument(obj, args, modifying)
 
 if modifying:
-if hasattr(current, '__setitem__'):
-# Modify dict value
-current[modifying_key] = modifying_value
-else:
+if hasattr(obj, '__setitem__'):
+# Modify item value (dict, list, and similar)
+if isinstance(obj, (list, tuple)):
+modifying_key = int(modifying_key)
+obj[modifying_key] = modifying_value
+elif hasattr(obj, modifying_key):
 # Modify object attribute
-setattr(current, modifying_key, modifying_value)
-
-return current
+setattr(obj, modifying_key, modifying_value)
+else:
+raise ProcessingError('Cannot modify `{0}` of `{1!r}`'
+  .format(modifying_key, obj))
 
+return obj
 
-def _desugar_attr(obj, attr):
-if not isinstance(attr, basestring):
-return None
-if hasattr(obj, attr):
-return attr
-attr = attr.replace('-', '_')
-if hasattr(obj, attr):
-return attr
-return None
 
+def _parse_argument(obj, args, modifying):
+args = list(args)
+arg = args.pop(0)
 
-def _is_int(arg):
-try:
-int(arg)
-except ValueError:
-return False
-return True
+# Call?
+if arg == '[':
+# TODO: should there be a way to escape "[" and 

[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-07 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131703163
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -170,101 +178,59 @@ def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-man
 modifying_key = None
 modifying_value = None
 
-num_args = len(args)
-
-while index < num_args:
-arg = args[index]
-
-# Object attribute
-attr = _desugar_attr(current, arg)
-if attr is not None:
-current = getattr(current, attr)
-
-# List entry
-elif isinstance(current, list) and _is_int(arg):
-current = current[int(arg)]
-
-# Dict (or dict-like object) value
-elif hasattr(current, '__getitem__'):
-if modifying and (not arg in current):
-current[arg] = {}
-current = current[arg]
-
-# Call
-elif callable(current):
-if isinstance(current, functools.partial):
-argspec = inspect.getargspec(current.func)
-# Don't count initial args fulfilled by the partial
-spec_args = argspec.args[len(current.args):]
-# Don't count keyword args fulfilled by the partial
-spec_args = tuple(a for a in spec_args if a not in 
current.keywords)
-else:
-argspec = inspect.getargspec(current)
-spec_args = argspec.args
-
-callable_kwargs = {}
-if isinstance(arg, dict):
-# If the first arg is a dict, treat it as our kwargs
-# TODO: what if the first argument really needs to be a 
dict?
-callable_kwargs = arg
-index += 1
-
-if argspec.varargs is not None:
-# Gobble the rest of the args
-callable_args = args[index:]
-else:
-# Take only what we need
-spec_args = tuple(a for a in spec_args if a not in 
callable_kwargs)
-spec_args_count = len(spec_args)
-if inspect.ismethod(current):
-# Don't count "self" argument
-spec_args_count -= 1
-callable_args = args[index:index + spec_args_count]
-# Note: we might actually have fewer args than the 
args_count, but that could be OK
-# if the user expects subsequent args to have default 
values
-
-args_count = len(callable_args)
-if args_count > 1:
-index += args_count - 1
-
-current = current(*callable_args, **callable_kwargs)
-
-else:
-raise RuntimeError('`{0}` cannot be processed in 
{1}'.format(arg, args))
-
-index += 1
-
-if callable(current):
-current = current()
+# Parse all arguments
+while len(args) > 0:
+obj, args = _parse_argument(obj, args, modifying)
 
 if modifying:
-if hasattr(current, '__setitem__'):
-# Modify dict value
-current[modifying_key] = modifying_value
-else:
+if hasattr(obj, '__setitem__'):
+# Modify item value (dict, list, and similar)
+if isinstance(obj, (list, tuple)):
+modifying_key = int(modifying_key)
+obj[modifying_key] = modifying_value
+elif hasattr(obj, modifying_key):
 # Modify object attribute
-setattr(current, modifying_key, modifying_value)
-
-return current
+setattr(obj, modifying_key, modifying_value)
+else:
+raise ProcessingError('Cannot modify `{0}` of `{1!r}`'
+  .format(modifying_key, obj))
 
+return obj
 
-def _desugar_attr(obj, attr):
-if not isinstance(attr, basestring):
-return None
-if hasattr(obj, attr):
-return attr
-attr = attr.replace('-', '_')
-if hasattr(obj, attr):
-return attr
-return None
 
+def _parse_argument(obj, args, modifying):
+args = list(args)
+arg = args.pop(0)
 
-def _is_int(arg):
-try:
-int(arg)
-except ValueError:
-return False
-return True
+# Call?
+if arg == '[':
+# TODO: should there be a way to escape "[" and 

[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-07 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131702731
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -170,101 +178,59 @@ def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-man
 modifying_key = None
 modifying_value = None
 
-num_args = len(args)
-
-while index < num_args:
-arg = args[index]
-
-# Object attribute
-attr = _desugar_attr(current, arg)
-if attr is not None:
-current = getattr(current, attr)
-
-# List entry
-elif isinstance(current, list) and _is_int(arg):
-current = current[int(arg)]
-
-# Dict (or dict-like object) value
-elif hasattr(current, '__getitem__'):
-if modifying and (not arg in current):
-current[arg] = {}
-current = current[arg]
-
-# Call
-elif callable(current):
-if isinstance(current, functools.partial):
-argspec = inspect.getargspec(current.func)
-# Don't count initial args fulfilled by the partial
-spec_args = argspec.args[len(current.args):]
-# Don't count keyword args fulfilled by the partial
-spec_args = tuple(a for a in spec_args if a not in 
current.keywords)
-else:
-argspec = inspect.getargspec(current)
-spec_args = argspec.args
-
-callable_kwargs = {}
-if isinstance(arg, dict):
-# If the first arg is a dict, treat it as our kwargs
-# TODO: what if the first argument really needs to be a 
dict?
-callable_kwargs = arg
-index += 1
-
-if argspec.varargs is not None:
-# Gobble the rest of the args
-callable_args = args[index:]
-else:
-# Take only what we need
-spec_args = tuple(a for a in spec_args if a not in 
callable_kwargs)
-spec_args_count = len(spec_args)
-if inspect.ismethod(current):
-# Don't count "self" argument
-spec_args_count -= 1
-callable_args = args[index:index + spec_args_count]
-# Note: we might actually have fewer args than the 
args_count, but that could be OK
-# if the user expects subsequent args to have default 
values
-
-args_count = len(callable_args)
-if args_count > 1:
-index += args_count - 1
-
-current = current(*callable_args, **callable_kwargs)
-
-else:
-raise RuntimeError('`{0}` cannot be processed in 
{1}'.format(arg, args))
-
-index += 1
-
-if callable(current):
-current = current()
+# Parse all arguments
+while len(args) > 0:
+obj, args = _parse_argument(obj, args, modifying)
 
 if modifying:
-if hasattr(current, '__setitem__'):
-# Modify dict value
-current[modifying_key] = modifying_value
-else:
+if hasattr(obj, '__setitem__'):
+# Modify item value (dict, list, and similar)
+if isinstance(obj, (list, tuple)):
+modifying_key = int(modifying_key)
+obj[modifying_key] = modifying_value
+elif hasattr(obj, modifying_key):
 # Modify object attribute
-setattr(current, modifying_key, modifying_value)
-
-return current
+setattr(obj, modifying_key, modifying_value)
+else:
+raise ProcessingError('Cannot modify `{0}` of `{1!r}`'
+  .format(modifying_key, obj))
 
+return obj
 
-def _desugar_attr(obj, attr):
-if not isinstance(attr, basestring):
-return None
-if hasattr(obj, attr):
-return attr
-attr = attr.replace('-', '_')
-if hasattr(obj, attr):
-return attr
-return None
 
+def _parse_argument(obj, args, modifying):
--- End diff --

But you wouldn't know in advance which function to call until you start 
parsing. I renamed it and made some small cleanups for clarity.


---

[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-07 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131700894
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,17 +146,29 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
-current = ctx
-index = 0
+class ProcessingError(RuntimeError):
+pass
+
 
+class ParsingError(ProcessingError):
+pass
+
+
+def _parse_request(ctx, request):
+request = json.loads(request)
+args = request['args']
+return _parse_arguments(ctx, args)
+
+
+def _parse_arguments(obj, args):
+# Modifying?
 try:
 # TODO: should there be a way to escape "=" in case it is needed 
as real argument?
-equals_index = args.index('=')
+equals_index = args.index('=') # raises ValueError if not found
--- End diff --

It looks more awkward, but +1.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-07 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131700830
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -170,101 +178,59 @@ def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-man
 modifying_key = None
 modifying_value = None
 
-num_args = len(args)
-
-while index < num_args:
-arg = args[index]
-
-# Object attribute
-attr = _desugar_attr(current, arg)
-if attr is not None:
-current = getattr(current, attr)
-
-# List entry
-elif isinstance(current, list) and _is_int(arg):
-current = current[int(arg)]
-
-# Dict (or dict-like object) value
-elif hasattr(current, '__getitem__'):
-if modifying and (not arg in current):
-current[arg] = {}
-current = current[arg]
-
-# Call
-elif callable(current):
-if isinstance(current, functools.partial):
-argspec = inspect.getargspec(current.func)
-# Don't count initial args fulfilled by the partial
-spec_args = argspec.args[len(current.args):]
-# Don't count keyword args fulfilled by the partial
-spec_args = tuple(a for a in spec_args if a not in 
current.keywords)
-else:
-argspec = inspect.getargspec(current)
-spec_args = argspec.args
-
-callable_kwargs = {}
-if isinstance(arg, dict):
-# If the first arg is a dict, treat it as our kwargs
-# TODO: what if the first argument really needs to be a 
dict?
-callable_kwargs = arg
-index += 1
-
-if argspec.varargs is not None:
-# Gobble the rest of the args
-callable_args = args[index:]
-else:
-# Take only what we need
-spec_args = tuple(a for a in spec_args if a not in 
callable_kwargs)
-spec_args_count = len(spec_args)
-if inspect.ismethod(current):
-# Don't count "self" argument
-spec_args_count -= 1
-callable_args = args[index:index + spec_args_count]
-# Note: we might actually have fewer args than the 
args_count, but that could be OK
-# if the user expects subsequent args to have default 
values
-
-args_count = len(callable_args)
-if args_count > 1:
-index += args_count - 1
-
-current = current(*callable_args, **callable_kwargs)
-
-else:
-raise RuntimeError('`{0}` cannot be processed in 
{1}'.format(arg, args))
-
-index += 1
-
-if callable(current):
-current = current()
+# Parse all arguments
+while len(args) > 0:
+obj, args = _parse_argument(obj, args, modifying)
 
 if modifying:
-if hasattr(current, '__setitem__'):
-# Modify dict value
-current[modifying_key] = modifying_value
-else:
+if hasattr(obj, '__setitem__'):
--- End diff --

Cannot, because it wouldn't support dict-like objects such as instrumented 
wrapper classes. This way is generally better because it supports anything that 
implements `__getitem__` and `__setitem__`, which allows for any kind of class.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-07 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131693627
  
--- Diff: tests/orchestrator/execution_plugin/test_ctx_proxy_server.py ---
@@ -55,61 +55,32 @@ def test_dict_prop_access_set(self, server, ctx):
 assert ctx.node.properties['prop3'][2]['value'] == 'new_value_2'
 assert ctx.node.properties['prop4']['some']['new']['path'] == 
'some_new_value'
 
+def test_dict_prop_access_set_with_list_index(self, server, ctx):
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-07 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131693585
  
--- Diff: examples/hello-world/scripts/stop.sh ---
@@ -16,14 +16,13 @@
 
 set -e
 
-TEMP_DIR="/tmp"
-PYTHON_FILE_SERVER_ROOT=${TEMP_DIR}/python-simple-http-webserver
-PID_FILE="server.pid"
+TEMP_DIR=/tmp
+PYTHON_FILE_SERVER_ROOT="${TEMP_DIR}/python-simple-http-webserver"
+PID_FILE=server.pid
+PID=$(cat "$PYTHON_FILE_SERVER_ROOT/$PID_FILE")
 
-PID=`cat ${PYTHON_FILE_SERVER_ROOT}/${PID_FILE}`
+ctx logger info [ "Shutting down web server, pid = ${PID}." ]
+kill -9 "$PID" || exit $?
 
-ctx logger info "Shutting down web server, pid = ${PID}."
-kill -9 ${PID} || exit $?
-
-ctx logger info "Removing web server root folder: 
${PYTHON_FILE_SERVER_ROOT}."
-rm -rf ${PYTHON_FILE_SERVER_ROOT}
+ctx logger info [ "Removing web server root folder: 
$PYTHON_FILE_SERVER_ROOT." ]
--- End diff --

Heh, not 100% sure I agree with that, but it does look clear. :)


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-31 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130399573
  
--- Diff: aria/storage/collection_instrumentation.py ---
@@ -213,32 +235,32 @@ def __init__(self, mapi, *args, **kwargs):
 """
 The original model.
 
-:param wrapped: model to be instrumented
 :param mapi: MAPI for the wrapped model
+:param wrapped: model to be instrumented
+:param instrumentation: instrumentation dict
+:param instrumentation_kwargs: arguments for instrumentation class
 """
-super(_InstrumentedModel, self).__init__(*args, **kwargs)
+super(_InstrumentedModel, 
self).__init__(instrumentation_kwargs=dict(mapi=mapi),
+ *args, **kwargs)
 self._mapi = mapi
 self._apply_instrumentation()
 
-def __getattr__(self, item):
-return_value = getattr(self._wrapped, item)
-if isinstance(return_value, self._wrapped.__class__):
-return _create_instrumented_model(return_value, self._mapi, 
self._instrumentation)
-if isinstance(return_value, (list, dict)):
-return _create_wrapped_model(return_value, self._mapi, 
self._instrumentation)
-return return_value
-
 def _apply_instrumentation(self):
 for field in self._instrumentation:
+if field.parent.class_ != type(self._wrapped):
--- End diff --

To fix: check by issubclass instead of equality
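Something along these lines (a hypothetical helper, not code from the PR; the exact subclass direction is my assumption):

```python
# Sketch: match a field to the wrapped model by subclassing rather than by
# exact type equality, so subclasses of the model class are also instrumented.
def field_applies_to(field, wrapped):
    return issubclass(type(wrapped), field.parent.class_)
```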


---


[GitHub] incubator-ariatosca issue #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on the issue:

https://github.com/apache/incubator-ariatosca/pull/188
  
Hey guys, I suggest going over the complex use cases in the [Clearwater 
example](https://github.com/apache/incubator-ariatosca/blob/ARIA-321-clearwater/examples/clearwater/scripts/host/configure.sh).

Basically all the changes in the PR were inspired by those real-world needs 
of `ctx`.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247622
  
--- Diff: tests/orchestrator/execution_plugin/test_ctx_proxy_server.py ---
@@ -39,26 +39,26 @@ def test_dict_prop_access_get_key(self, server):
 assert response == 'value1'
 
 def test_dict_prop_access_get_key_nested(self, server):
-response = self.request(server, 'node', 'properties', 
'prop2.nested_prop1')
+response = self.request(server, 'node', 'properties', 'prop2', 
'nested_prop1')
 assert response == 'nested_value1'
 
 def test_dict_prop_access_get_with_list_index(self, server):
-response = self.request(server, 'node', 'properties', 
'prop3[2].value')
+response = self.request(server, 'node', 'properties', 'prop3', 2, 
'value')
 assert response == 'value_2'
 
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247620
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
+try:
+# TODO: should there be a way to escape "=" in case it is needed 
as real argument?
+equals_index = args.index('=')
+if equals_index == 0:
+raise RuntimeError('The "=" argument cannot be first')
+if equals_index != len(args) - 2:
+raise RuntimeError('The "=" argument must be penultimate')
+modifying = True
+modifying_key = args[-3]
+modifying_value = args[-1]
+args = args[:-3]
+except ValueError:
+modifying = False
+modifying_key = None
+modifying_value = None
+
+num_args = len(args)
+
 while index < num_args:
 arg = args[index]
+
+# Object attribute
 attr = _desugar_attr(current, arg)
-if attr:
+if attr is not None:
 current = getattr(current, attr)
-elif isinstance(current, collections.MutableMapping):
-key = arg
-path_dict = _PathDictAccess(current)
-if index + 1 == num_args:
-# read dict prop by path
-value = path_dict.get(key)
-current = value
-elif index + 2 == num_args:
-# set dict prop by path
-value = args[index + 1]
-path_dict.set(key, value)
-current = None
-else:
-raise RuntimeError('Illegal argument while accessing dict')
-break
+
+# List entry
+elif isinstance(current, list) and _is_int(arg):
+current = current[int(arg)]
+
+# Dict (or dict-like object) value
+elif hasattr(current, '__getitem__'):
+if modifying and (not arg in current):
+current[arg] = {}
+current = current[arg]
+
+# Call
 elif callable(current):
-kwargs = {}
-remaining_args = args[index:]
-if isinstance(remaining_args[-1], collections.MutableMapping):
-kwargs = remaining_args[-1]
-remaining_args = remaining_args[:-1]
-current = current(*remaining_args, **kwargs)
-break
+if isinstance(current, functools.partial):
+argspec = inspect.getargspec(current.func)
+# Don't count initial args fulfilled by the partial
+spec_args = argspec.args[len(current.args):]
+# Don't count keyword args fulfilled by the partial
+spec_args = tuple(a for a in spec_args if a not in 
current.keywords)
+else:
+argspec = inspect.getargspec(current)
+spec_args = argspec.args
+
+callable_kwargs = {}
+if isinstance(arg, dict):
+# If the first arg is a dict, treat it as our kwargs
+# TODO: what if the first argument really needs to be a 
dict?
+callable_kwargs = arg
+index += 1
+
+if argspec.varargs is not None:
+# Gobble the rest of the args
+callable_args = args[index:]
+else:
+# Take only what we need
+spec_args = tuple(a for a in spec_args if a not in 
callable_kwargs)
+spec_args_count = len(spec_args)
+if inspect.ismethod(current):
+# Don't count "self" argument
+spec_args_count -= 1
+callable_args = args[index:index + spec_args_count]
+# Note: we might actually have fewer args than the 
args_count, but that could be OK
+# if the user expects subsequent args to have default 
values
+
+args_count = len(callable_args)
+if args_count > 1:
+index += args_count - 1
+
+current = current(*callable_args, **callable_kwargs)
+
 else:
-raise RuntimeError('{

[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247609
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
+try:
+# TODO: should there be a way to escape "=" in case it is needed 
as real argument?
+equals_index = args.index('=')
+if equals_index == 0:
+raise RuntimeError('The "=" argument cannot be first')
+if equals_index != len(args) - 2:
+raise RuntimeError('The "=" argument must be penultimate')
+modifying = True
+modifying_key = args[-3]
+modifying_value = args[-1]
+args = args[:-3]
+except ValueError:
+modifying = False
+modifying_key = None
+modifying_value = None
+
+num_args = len(args)
+
 while index < num_args:
 arg = args[index]
+
+# Object attribute
 attr = _desugar_attr(current, arg)
-if attr:
+if attr is not None:
 current = getattr(current, attr)
-elif isinstance(current, collections.MutableMapping):
-key = arg
-path_dict = _PathDictAccess(current)
-if index + 1 == num_args:
-# read dict prop by path
-value = path_dict.get(key)
-current = value
-elif index + 2 == num_args:
-# set dict prop by path
-value = args[index + 1]
-path_dict.set(key, value)
-current = None
-else:
-raise RuntimeError('Illegal argument while accessing dict')
-break
+
+# List entry
+elif isinstance(current, list) and _is_int(arg):
+current = current[int(arg)]
+
+# Dict (or dict-like object) value
+elif hasattr(current, '__getitem__'):
+if modifying and (not arg in current):
+current[arg] = {}
+current = current[arg]
+
+# Call
 elif callable(current):
-kwargs = {}
-remaining_args = args[index:]
-if isinstance(remaining_args[-1], collections.MutableMapping):
-kwargs = remaining_args[-1]
-remaining_args = remaining_args[:-1]
-current = current(*remaining_args, **kwargs)
-break
+if isinstance(current, functools.partial):
+argspec = inspect.getargspec(current.func)
+# Don't count initial args fulfilled by the partial
+spec_args = argspec.args[len(current.args):]
+# Don't count keyword args fulfilled by the partial
+spec_args = tuple(a for a in spec_args if a not in 
current.keywords)
+else:
+argspec = inspect.getargspec(current)
+spec_args = argspec.args
+
+callable_kwargs = {}
+if isinstance(arg, dict):
+# If the first arg is a dict, treat it as our kwargs
+# TODO: what if the first argument really needs to be a 
dict?
+callable_kwargs = arg
+index += 1
+
+if argspec.varargs is not None:
+# Gobble the rest of the args
+callable_args = args[index:]
+else:
+# Take only what we need
+spec_args = tuple(a for a in spec_args if a not in 
callable_kwargs)
+spec_args_count = len(spec_args)
+if inspect.ismethod(current):
+# Don't count "self" argument
+spec_args_count -= 1
+callable_args = args[index:index + spec_args_count]
--- End diff --

See comment above: we can't just grab all remaining args, because we might 
need these args for further delving into the returned value of the callable.
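Roughly, the idea is this (a simplified standalone sketch, not a drop-in replacement for the diff above; the example values are illustrative):

```python
import inspect

def split_call_args(func, args):
    # Simplified sketch: consume only as many positional args as the callable
    # declares, leaving the rest for delving into the returned value.
    spec = inspect.getargspec(func)
    count = len(spec.args)
    if inspect.ismethod(func):
        count -= 1  # don't count "self"
    return args[:count], args[count:]

def get_policy_by_type(policy_type):
    # Hypothetical stand-in for the Clearwater use case mentioned elsewhere
    return {'properties': {'zone': 'example.zone'}}

call_args, remaining = split_call_args(
    get_policy_by_type, ['clearwater.Configuration', 'properties', 'zone'])
# call_args == ['clearwater.Configuration'], remaining == ['properties', 'zone']
```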


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247616
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
+try:
+# TODO: should there be a way to escape "=" in case it is needed 
as real argument?
+equals_index = args.index('=')
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247602
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
+try:
+# TODO: should there be a way to escape "=" in case it is needed 
as real argument?
+equals_index = args.index('=')
+if equals_index == 0:
+raise RuntimeError('The "=" argument cannot be first')
+if equals_index != len(args) - 2:
+raise RuntimeError('The "=" argument must be penultimate')
+modifying = True
+modifying_key = args[-3]
+modifying_value = args[-1]
+args = args[:-3]
+except ValueError:
+modifying = False
+modifying_key = None
+modifying_value = None
+
+num_args = len(args)
+
 while index < num_args:
 arg = args[index]
+
+# Object attribute
 attr = _desugar_attr(current, arg)
-if attr:
+if attr is not None:
 current = getattr(current, attr)
-elif isinstance(current, collections.MutableMapping):
-key = arg
-path_dict = _PathDictAccess(current)
-if index + 1 == num_args:
-# read dict prop by path
-value = path_dict.get(key)
-current = value
-elif index + 2 == num_args:
-# set dict prop by path
-value = args[index + 1]
-path_dict.set(key, value)
-current = None
-else:
-raise RuntimeError('Illegal argument while accessing dict')
-break
+
+# List entry
+elif isinstance(current, list) and _is_int(arg):
+current = current[int(arg)]
+
+# Dict (or dict-like object) value
+elif hasattr(current, '__getitem__'):
+if modifying and (not arg in current):
+current[arg] = {}
+current = current[arg]
+
+# Call
 elif callable(current):
-kwargs = {}
-remaining_args = args[index:]
-if isinstance(remaining_args[-1], collections.MutableMapping):
-kwargs = remaining_args[-1]
-remaining_args = remaining_args[:-1]
-current = current(*remaining_args, **kwargs)
-break
+if isinstance(current, functools.partial):
--- End diff --

+1

The main reason for all of this is that we sometimes need to continue 
accessing properties on the values returned by callables. The previous 
implementation simply "gobbled" all remaining args, but we might need those 
args. Example from Clearwater:

`ctx service get_policy_by_type clearwater.Configuration properties zone`

But generally I suggest you take a look at the various uses of `ctx` in the 
[Clearwater 
example](https://github.com/apache/incubator-ariatosca/blob/ARIA-321-clearwater/examples/clearwater/scripts/host/configure.sh).






---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247513
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
+try:
+# TODO: should there be a way to escape "=" in case it is needed 
as real argument?
+equals_index = args.index('=')
+if equals_index == 0:
+raise RuntimeError('The "=" argument cannot be first')
+if equals_index != len(args) - 2:
+raise RuntimeError('The "=" argument must be penultimate')
+modifying = True
+modifying_key = args[-3]
+modifying_value = args[-1]
+args = args[:-3]
+except ValueError:
+modifying = False
+modifying_key = None
+modifying_value = None
+
+num_args = len(args)
+
 while index < num_args:
 arg = args[index]
+
+# Object attribute
 attr = _desugar_attr(current, arg)
-if attr:
+if attr is not None:
 current = getattr(current, attr)
-elif isinstance(current, collections.MutableMapping):
-key = arg
-path_dict = _PathDictAccess(current)
-if index + 1 == num_args:
-# read dict prop by path
-value = path_dict.get(key)
-current = value
-elif index + 2 == num_args:
-# set dict prop by path
-value = args[index + 1]
-path_dict.set(key, value)
-current = None
-else:
-raise RuntimeError('Illegal argument while accessing dict')
-break
+
+# List entry
+elif isinstance(current, list) and _is_int(arg):
+current = current[int(arg)]
+
+# Dict (or dict-like object) value
+elif hasattr(current, '__getitem__'):
+if modifying and (not arg in current):
+current[arg] = {}
+current = current[arg]
+
+# Call
 elif callable(current):
-kwargs = {}
-remaining_args = args[index:]
-if isinstance(remaining_args[-1], collections.MutableMapping):
-kwargs = remaining_args[-1]
-remaining_args = remaining_args[:-1]
-current = current(*remaining_args, **kwargs)
-break
+if isinstance(current, functools.partial):
--- End diff --

This happens a lot, actually. I forget which of our tests would showcase 
this exactly, but I think even `ctx logger` is based on partials.
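For illustration (a standalone sketch with hypothetical names, not ARIA code), this is the wrinkle with partials: `inspect.getargspec()` cannot introspect a `functools.partial` directly, so the code inspects the wrapped function and subtracts the args the partial already fulfils:

```python
import functools
import inspect

def log(level, message, prefix='ctx'):
    # Hypothetical stand-in for a logger call that ends up behind a partial
    return '%s %s: %s' % (prefix, level, message)

info = functools.partial(log, 'INFO', prefix='node')

# Inspect the wrapped function, then drop args the partial already supplies:
spec = inspect.getargspec(info.func)
remaining = spec.args[len(info.args):]                        # ['message', 'prefix']
remaining = [a for a in remaining if a not in info.keywords]  # ['message']
```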


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247497
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
+try:
+# TODO: should there be a way to escape "=" in case it is needed 
as real argument?
+equals_index = args.index('=')
+if equals_index == 0:
+raise RuntimeError('The "=" argument cannot be first')
+if equals_index != len(args) - 2:
+raise RuntimeError('The "=" argument must be penultimate')
+modifying = True
+modifying_key = args[-3]
+modifying_value = args[-1]
+args = args[:-3]
+except ValueError:
+modifying = False
+modifying_key = None
+modifying_value = None
+
+num_args = len(args)
+
 while index < num_args:
 arg = args[index]
+
+# Object attribute
 attr = _desugar_attr(current, arg)
-if attr:
+if attr is not None:
 current = getattr(current, attr)
-elif isinstance(current, collections.MutableMapping):
-key = arg
-path_dict = _PathDictAccess(current)
-if index + 1 == num_args:
-# read dict prop by path
-value = path_dict.get(key)
-current = value
-elif index + 2 == num_args:
-# set dict prop by path
-value = args[index + 1]
-path_dict.set(key, value)
-current = None
-else:
-raise RuntimeError('Illegal argument while accessing dict')
-break
+
+# List entry
+elif isinstance(current, list) and _is_int(arg):
+current = current[int(arg)]
+
+# Dict (or dict-like object) value
+elif hasattr(current, '__getitem__'):
+if modifying and (not arg in current):
--- End diff --

I don't understand this comment.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247490
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
+try:
+# TODO: should there be a way to escape "=" in case it is needed 
as real argument?
+equals_index = args.index('=')
+if equals_index == 0:
+raise RuntimeError('The "=" argument cannot be first')
+if equals_index != len(args) - 2:
+raise RuntimeError('The "=" argument must be penultimate')
+modifying = True
+modifying_key = args[-3]
+modifying_value = args[-1]
+args = args[:-3]
+except ValueError:
+modifying = False
+modifying_key = None
+modifying_value = None
+
+num_args = len(args)
+
 while index < num_args:
 arg = args[index]
+
+# Object attribute
 attr = _desugar_attr(current, arg)
-if attr:
+if attr is not None:
 current = getattr(current, attr)
-elif isinstance(current, collections.MutableMapping):
-key = arg
-path_dict = _PathDictAccess(current)
-if index + 1 == num_args:
-# read dict prop by path
-value = path_dict.get(key)
-current = value
-elif index + 2 == num_args:
-# set dict prop by path
-value = args[index + 1]
-path_dict.set(key, value)
-current = None
-else:
-raise RuntimeError('Illegal argument while accessing dict')
-break
+
+# List entry
+elif isinstance(current, list) and _is_int(arg):
+current = current[int(arg)]
+
+# Dict (or dict-like object) value
+elif hasattr(current, '__getitem__'):
+if modifying and (not arg in current):
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247485
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
+try:
+# TODO: should there be a way to escape "=" in case it is needed 
as real argument?
+equals_index = args.index('=')
+if equals_index == 0:
+raise RuntimeError('The "=" argument cannot be first')
+if equals_index != len(args) - 2:
+raise RuntimeError('The "=" argument must be penultimate')
+modifying = True
+modifying_key = args[-3]
+modifying_value = args[-1]
+args = args[:-3]
+except ValueError:
+modifying = False
+modifying_key = None
+modifying_value = None
+
+num_args = len(args)
+
 while index < num_args:
 arg = args[index]
+
+# Object attribute
 attr = _desugar_attr(current, arg)
-if attr:
+if attr is not None:
 current = getattr(current, attr)
-elif isinstance(current, collections.MutableMapping):
-key = arg
-path_dict = _PathDictAccess(current)
-if index + 1 == num_args:
-# read dict prop by path
-value = path_dict.get(key)
-current = value
-elif index + 2 == num_args:
-# set dict prop by path
-value = args[index + 1]
-path_dict.set(key, value)
-current = None
-else:
-raise RuntimeError('Illegal argument while accessing dict')
-break
+
+# List entry
+elif isinstance(current, list) and _is_int(arg):
+current = current[int(arg)]
+
+# Dict (or dict-like object) value
--- End diff --

Because that is broken on wrapped instrumented models...


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247470
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
--- End diff --

This was the whole point of this PR.

The previous implementation assumed either one arg or two: one arg meant 
"read" and two args meant "write". But this required us to support dot notation 
so that the one arg could refer to a deeply nested attribute. Removing dot 
notation requires a different way to support "write".

This PR also supports working on the return value of function calls.
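As a condensed sketch of the new "write" convention (simplified from the diff above; the error messages follow the diff, the example values are illustrative):

```python
def split_modifier(args):
    # The penultimate argument "=" marks a write; everything before the key is
    # the path to traverse, e.g. ['node', 'properties', 'prop1', '=', 'x'].
    try:
        equals_index = args.index('=')
    except ValueError:
        return args, None, None                  # no "=": a plain read
    if equals_index == 0:
        raise RuntimeError('The "=" argument cannot be first')
    if equals_index != len(args) - 2:
        raise RuntimeError('The "=" argument must be penultimate')
    return args[:-3], args[-3], args[-1]         # path, key to set, new value


path, key, value = split_modifier(['node', 'properties', 'prop1', '=', 'new_value'])
# path == ['node', 'properties'], key == 'prop1', value == 'new_value'
```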


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247255
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
--- End diff --

Of course we can, but it doesn't seem a very long function at all. This is 
one of those cases where PyLint is not being helpful.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247244
  
--- Diff: aria/storage/collection_instrumentation.py ---
@@ -213,32 +235,32 @@ def __init__(self, mapi, *args, **kwargs):
 """
 The original model.
 
-:param wrapped: model to be instrumented
 :param mapi: MAPI for the wrapped model
+:param wrapped: model to be instrumented
+:param instrumentation: instrumentation dict
+:param instrumentation_kwargs: arguments for instrumentation class
 """
-super(_InstrumentedModel, self).__init__(*args, **kwargs)
+super(_InstrumentedModel, 
self).__init__(instrumentation_kwargs=dict(mapi=mapi),
+ *args, **kwargs)
 self._mapi = mapi
 self._apply_instrumentation()
 
-def __getattr__(self, item):
-return_value = getattr(self._wrapped, item)
-if isinstance(return_value, self._wrapped.__class__):
-return _create_instrumented_model(return_value, self._mapi, 
self._instrumentation)
-if isinstance(return_value, (list, dict)):
-return _create_wrapped_model(return_value, self._mapi, 
self._instrumentation)
-return return_value
-
 def _apply_instrumentation(self):
 for field in self._instrumentation:
+if field.parent.class_ != type(self._wrapped):
--- End diff --

It doesn't seem to break this in tests.

Without this check, the only check is by field name, which caused a lot of 
breakage. You would get node properties being instrumented for capability 
properties. Again, consider the use case of `ctx node capabilities host 
properties username`.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247176
  
--- Diff: aria/orchestrator/workflows/api/task.py ---
@@ -140,13 +140,18 @@ def __init__(self,
 self.arguments = modeling_utils.merge_parameter_values(arguments,

operation.arguments,

model_cls=models.Argument)
-if getattr(self.actor, 'outbound_relationships', None) is not None:
+
+actor = self.actor
+if hasattr(actor, '_wrapped'):
--- End diff --

Because you often get instrumented/wrapped models here, and they would fail 
the `isinstance` check. (With a duck typing check they succeed, but I thought 
duck typing was bad.)
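For what it's worth, a rough sketch of the pattern (the unwrapping step is my assumption based on the truncated diff above, not a verbatim quote):

```python
def unwrap(actor):
    # Hypothetical sketch: instrumented models wrap the real model, so peel off
    # the wrapper before doing an isinstance() check against the model class.
    return actor._wrapped if hasattr(actor, '_wrapped') else actor
```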


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247158
  
--- Diff: aria/orchestrator/workflows/api/task.py ---
@@ -140,13 +140,18 @@ def __init__(self,
 self.arguments = modeling_utils.merge_parameter_values(arguments,

operation.arguments,

model_cls=models.Argument)
-if getattr(self.actor, 'outbound_relationships', None) is not None:
--- End diff --

Why not import models? This should have been explained in comments. Duck 
typing seems very inaccurate and prone to errors.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247107
  
--- Diff: aria/orchestrator/context/common.py ---
@@ -38,10 +38,27 @@ class BaseContext(object):
 """
 
 INSTRUMENTATION_FIELDS = (
+modeling.models.Service.inputs,
--- End diff --

Because you should be able to do things like: `ctx node capabilities host 
properties username`

The ctx proxy should let you access all models.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx access

2017-07-27 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/188

ARIA-324 Refactor ctx access

Our previous use of "." to delimit nested dict keys was wrong (keys
could have a ".") and inflexible. The new implementation uses subsequent
args to move into the dict.

The same format can now be used to access object attributes.

This commit also changes how to support setting values: we must now use
"=" as the penultimate argument with the new value following.

Also fixed: callables will now "grab" the number of args they need
instead of all remaining args, making it possible to do further
inspection on the returned value from the callable. To allow for this,
kwargs are now expected as the first arg rather than the last.

Furthermore, this commit fixes a bad null check in the ctx client, and
also allows it to retrieve Unicode data.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-324-refactor-ctx-access

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/188.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #188


commit 956e98b20874b99e80b12849b3c10dc869a4b4f6
Author: Tal Liron 
Date:   2017-07-27T22:58:17Z

ARIA-324 Refactor ctx proxy access

Our previous use of "." to delimit nested dict keys was wrong (keys
could have a ".") and inflexible. The new implementation uses subsequent
args to move into the dict. The same format can now be used to access
object attributes.

This commit also changes how to support setting values: we must now use
"=" as the penultimate argument with the new value following.

Also fixed: callables will now "grab" the number of args they need
instead of all remaining args, making it possible to do further
inspection on the returned value from the callable. To allow for this,
kwargs are now expected as the first arg rather than the last.

Relatedly, this commit instruments all parameter fields from all models
and fixes related bugs in the instrumentation implementation.

Furthermore, this commit fixes a bad null check in the ctx client, and
also allows it to retrieve Unicode data.




---

