Hi,

We are using Airflow with the CeleryExecutor.

Environment:

- *OS*: Red Hat Enterprise Linux Server 7.2 (Maipo)
- *Python*: 2.7.5
- *Airflow*: 1.7.1.3

The 'airflow worker' command throws an exception when it is started.

Despite the exception, the workers still run tasks from the queues.

Is this normal? Should I open a bug in the Airflow JIRA?

The exception:

  [2016-07-03 11:22:40,253] {__init__.py:36} INFO - Using executor CeleryExecutor

Starting flask

Traceback (most recent call last):
  File "/usr/bin/airflow", line 15, in <module>
    args.func(args)
  File "/usr/lib/python2.7/site-packages/airflow/bin/cli.py", line 474, in serve_logs
    host='0.0.0.0', port=WORKER_LOG_SERVER_PORT)
  File "/usr/lib/python2.7/site-packages/flask/app.py", line 772, in run
    run_simple(host, port, self, **options)
  File "/usr/lib/python2.7/site-packages/werkzeug/serving.py", line 694, in run_simple
    inner()
  File "/usr/lib/python2.7/site-packages/werkzeug/serving.py", line 656, in inner
    fd=fd)
  File "/usr/lib/python2.7/site-packages/werkzeug/serving.py", line 550, in make_server
    passthrough_errors, ssl_context, fd=fd)
  File "/usr/lib/python2.7/site-packages/werkzeug/serving.py", line 464, in __init__
    HTTPServer.__init__(self, (host, int(port)), handler)
  File "/usr/lib64/python2.7/SocketServer.py", line 419, in __init__
    self.server_bind()
  File "/usr/lib64/python2.7/BaseHTTPServer.py", line 108, in server_bind
    SocketServer.TCPServer.server_bind(self)
  File "/usr/lib64/python2.7/SocketServer.py", line 430, in server_bind
    self.socket.bind(self.server_address)
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
socket.error: [Errno 98] Address already in use

[2016-07-03 11:22:42,359: WARNING/MainProcess] celery@hadoop01.localdomain ready.
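For what it's worth, the bind failure seems to come from the worker's log-serving HTTP server, which by default listens on port 8793 (the worker_log_server_port setting in airflow.cfg), so presumably something else on the machine already holds that port. A minimal sketch of how one could check this from Python (the port number is an assumption based on the default config):

```python
import socket

# Default value of worker_log_server_port in airflow.cfg; adjust if overridden.
PORT = 8793

def port_in_use(port, host="0.0.0.0"):
    """Return True if something is already bound to host:port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((host, port))
    except socket.error:
        # bind() raises EADDRINUSE when another process holds the port.
        return True
    finally:
        sock.close()
    return False

print(port_in_use(PORT))
```

If this prints True, a previous 'airflow worker' (or its serve_logs child process) may still be running and holding the port.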

Thanks
