Hi, Clark:

Thanks a lot for the prompt response! I added the OS_TEST_TIMEOUT value (300 
seconds) and was tailing the tmp file; the tox run stopped at the point shown 
below. My machine was thrashing so badly that it became unresponsive and I had 
to hard-reboot it... I am pulling my hair out now... Is it normal to see this 
traceback?
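
For reference, I was watching the log roughly like this (the glob is just how 
I matched the randomly generated file name in my checkout):

shshang@net-ubuntu2:~/github/neutron$ tail -f .testrepository/tmp*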

2014-02-27 21:33:51,212     INFO [neutron.api.extensions] Extension 'agent' 
provides no backward compatibility map for extended attributes
2014-02-27 21:33:51,212     INFO [neutron.api.extensions] Extension 'Allowed 
Address Pairs' provides no backward compatibility map for extended attributes
2014-02-27 21:33:51,212     INFO [neutron.api.extensions] Extension 'Neutron 
Extra Route' provides no backward compatibility map for extended attributes
2014-02-27 21:33:51,522    ERROR 
[neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] No DHCP agents are 
associated with network '397fab50-26aa-4cb7-8aa4-c4d43909a00b'. Unable to send 
notification for 'network_create_end' with payload: {'network': {'status': 
'ACTIVE', 'subnets': [], 'name': 'net1', 'provider:physical_network': 
u'physnet1', 'admin_state_up': True, 'tenant_id': 'test-tenant', 
'provider:network_type': 'vlan', 'shared': False, 'id': 
'397fab50-26aa-4cb7-8aa4-c4d43909a00b', 'provider:segmentation_id': 1000}}
2014-02-27 21:33:51,567    ERROR [neutron.api.v2.resource] create failed
Traceback (most recent call last):
  File "neutron/api/v2/resource.py", line 84, in resource
    result = method(request=request, **args)
  File "neutron/api/v2/base.py", line 347, in create
    allow_bulk=self._allow_bulk)
  File "neutron/api/v2/base.py", line 600, in prepare_request_body
    raise webob.exc.HTTPBadRequest(msg)
HTTPBadRequest: Invalid input for cidr. Reason: '10.0.2.0' isn't a recognized 
IP subnet cidr, '10.0.2.0/32' is recommended.


Thanks again!

Shixiong


Shixiong Shang

!--- Stay Hungry, Stay Foolish ---!

On Feb 27, 2014, at 8:28 PM, Clark Boylan <clark.boy...@gmail.com> wrote:

> On Thu, Feb 27, 2014 at 4:43 PM, Shixiong Shang
> <sparkofwisdom.cl...@gmail.com> wrote:
>> Hi, guys:
>> 
>> I created a fresh local repository and pulled the most recent Neutron code. 
>> Before I put in my own code, I did a tox run. However, it seems to be stuck 
>> at the following point; it has been there for over an hour without going 
>> any further. Yesterday, tox ran against a fresh copy of Neutron but still 
>> had not returned SUCCESS after an entire night.
>> 
>> I assume the copy from the master branch should already be clean... 
>> However, what I saw over the past 48 hours tells a different story. Did I 
>> do anything wrong?
>> 
>> 
>> shshang@net-ubuntu2:~/github/neutron$ tox -e py27
>> py27 create: /home/shshang/github/neutron/.tox/py27
>> py27 installdeps: -r/home/shshang/github/neutron/requirements.txt, 
>> -r/home/shshang/github/neutron/test-requirements.txt, setuptools_git>=0.4
>> py27 develop-inst: /home/shshang/github/neutron
>> py27 runtests: commands[0] | python -m neutron.openstack.common.lockutils 
>> python setup.py testr --slowest --testr-args=
>> [pbr] Excluding argparse: Python 2.6 only dependency
>> running testr
>> running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
>> ${PYTHON:-python} -m subunit.run discover -t ./ 
>> ${OS_TEST_PATH:-./neutron/tests/unit} --list
>> running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
>> ${PYTHON:-python} -m subunit.run discover -t ./ 
>> ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpbZwLwg
>> running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
>> ${PYTHON:-python} -m subunit.run discover -t ./ 
>> ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmp39qJYM
>> running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
>> ${PYTHON:-python} -m subunit.run discover -t ./ 
>> ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpppXiTc
>> running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
>> ${PYTHON:-python} -m subunit.run discover -t ./ 
>> ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpPhJZDc
>> 
>> Thanks!
>> 
>> Shixiong
> 
> I think there are two potential problems here. Either a test is
> deadlocking due to something it has done, or
> neutron.openstack.common.lockutils is deadlocking. In either case,
> OS_TEST_TIMEOUT is not set in .testr.conf, so the test suite will not
> time out individual tests when necessary. I would start by setting it
> in .testr.conf next to OS_STDOUT_CAPTURE; you probably want a value of
> around 300 (that is, seconds).
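> 
> As a sketch (based on the test_command visible in your tox output; the
> ${OS_TEST_TIMEOUT:-300} form defaults the per-test timeout to 300
> seconds while leaving it overridable from the environment):
> 
> [DEFAULT]
> test_command=OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-300} OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./neutron/tests/unit} $LISTOPT $IDOPTION
> test_id_option=--load-list $IDFILE
> test_list_option=--list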
> 
> The other thing you can do to debug this is grab the subunit log file
> out of .testrepository. While tests are running it will have a randomly
> generated tmp name; after the run completes it is renamed after the
> most recent test run, e.g. 1 for the first run. The subunit log should
> offer clues about what was running at the time of the deadlock.
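> 
> Roughly, from the repo root (both commands are standard
> testrepository/subunit tooling):
> 
> ls .testrepository/                     # while running: the randomly named tmp file is the live log
> testr last --subunit | subunit2pyunit   # after the run: replay the most recent run's stream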
> 
> Clark
> 

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev