[ https://issues.apache.org/jira/browse/AIRFLOW-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Standish updated AIRFLOW-4056:
-------------------------------------
    Description: 
The conf parameter {{dag_dir_list_interval}} seems to have no effect on dag 
directory scanning.

The scan seems to happen every 2 seconds, no matter what I set it to. The default is supposed to be 5 minutes.
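
For reference, this is how the setting would normally appear in {{airflow.cfg}} (the 300 shown is just the documented default, not what I actually observe):
{code}
[scheduler]
# How often (in seconds) to scan the DAGs directory for new files.
# The documented default is 300 (5 minutes).
dag_dir_list_interval = 300
{code}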

As a result, I see scheduler output like this:
{code}
[2019-03-09 17:06:24,579] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:26,587] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:28,590] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:30,597] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:32,603] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:34,611] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:35,195] {sqlalchemy.py:79} WARNING - DB connection 
invalidated. Reconnecting...
[2019-03-09 17:06:36,615] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:38,623] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:40,631] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:42,637] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:44,644] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:46,205] {sqlalchemy.py:79} WARNING - DB connection 
invalidated. Reconnecting...
[2019-03-09 17:06:46,651] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:48,658] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:50,666] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:52,670] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:54,680] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:06:56,687] {jobs.py:1559} INFO - Harvesting DAG parsing results
{code}
The periodic printing of dag stats that was present in 1.10.1 is also gone.

I can confirm this is happening by adding the following to a file in the dags folder:
{code:python}
from datetime import datetime
from pathlib import Path

# append a timestamp every time this file is imported, i.e. every time the scheduler parses it
with open(Path('~/temp/test.log').expanduser(), 'at') as f:
    f.write(f"{datetime.now()}: i am imported\n")
{code}
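
To put numbers on it, here is a rough sketch of a script that turns that log into scan-to-scan gaps; it only assumes the path and timestamp format used in the snippet above:
{code:python}
from datetime import datetime
from pathlib import Path

# Parse the timestamps written by the import-logging snippet above and print
# the gap (in seconds) between consecutive parses of the file.
times = []
with open(Path('~/temp/test.log').expanduser()) as f:
    for line in f:
        stamp = line.split(': ', 1)[0]  # e.g. "2019-03-09 17:06:24.579123"
        fmt = '%Y-%m-%d %H:%M:%S.%f' if '.' in stamp else '%Y-%m-%d %H:%M:%S'
        times.append(datetime.strptime(stamp, fmt))

for earlier, later in zip(times, times[1:]):
    print((later - earlier).total_seconds())
{code}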

Here is some scheduler output with debug log level:
{code}
____________       _____________
 ____    |__( )_________  __/__  /________      __
____  /| |_  /__  ___/_  /_ __  /_  __ \_ | /| / /
___  ___ |  / _  /   _  __/ _  / / /_/ /_ |/ |/ /
 _/_/  |_/_/  /_/    /_/    /_/  \____/____/|__/

[2019-03-09 17:20:59,042] {jobs.py:1477} INFO - Starting the scheduler
[2019-03-09 17:20:59,042] {jobs.py:1485} INFO - Running execute loop for -1 
seconds
[2019-03-09 17:20:59,043] {jobs.py:1486} INFO - Processing each file at most -1 
times
[2019-03-09 17:20:59,043] {jobs.py:1489} INFO - Searching for files in 
/Users/dstandish/code/python_tfgetl/tfgetl/dags
[2019-03-09 17:20:59,046] {jobs.py:1491} INFO - There are 11 files in 
/Users/dstandish/code/python_tfgetl/tfgetl/dags
[2019-03-09 17:20:59,105] {jobs.py:1534} INFO - Resetting orphaned tasks for 
active dag runs
[2019-03-09 17:20:59,121] {dag_processing.py:453} INFO - Launched 
DagFileProcessorManager with pid: 57333
[2019-03-09 17:20:59,122] {jobs.py:1548} DEBUG - Starting Loop...
[2019-03-09 17:20:59,122] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:20:59,123] {jobs.py:1595} DEBUG - Heartbeating the executor
[2019-03-09 17:20:59,123] {base_executor.py:118} DEBUG - 0 running task 
instances
[2019-03-09 17:20:59,123] {base_executor.py:119} DEBUG - 0 in queue
[2019-03-09 17:20:59,123] {base_executor.py:120} DEBUG - 32 open slots
[2019-03-09 17:20:59,124] {base_executor.py:149} DEBUG - Calling the <class 
'airflow.executors.local_executor.LocalExecutor'> sync method
[2019-03-09 17:20:59,128] {jobs.py:1613} DEBUG - Ran scheduling loop in 0.01 
seconds
[2019-03-09 17:20:59,129] {jobs.py:1614} DEBUG - Sleeping for 1.00 seconds
[2019-03-09 17:20:59,130] {settings.py:51} INFO - Configured default timezone 
<Timezone [UTC]>
[2019-03-09 17:20:59,133] {logging_config.py:63} DEBUG - Unable to load custom 
logging, using default config instead
[2019-03-09 17:20:59,143] {settings.py:146} DEBUG - Setting up DB connection 
pool (PID 57333)
[2019-03-09 17:20:59,144] {settings.py:174} INFO - settings.configure_orm(): 
Using pool settings. pool_size=5, pool_recycle=1800, pid=57333
[2019-03-09 17:20:59,256] {settings.py:201} DEBUG - Disposing DB connection 
pool (PID 57334)
[2019-03-09 17:20:59,267] {settings.py:201} DEBUG - Disposing DB connection 
pool (PID 57337)
[2019-03-09 17:20:59,610] {settings.py:201} DEBUG - Disposing DB connection 
pool (PID 57336)
[2019-03-09 17:20:59,752] {settings.py:201} DEBUG - Disposing DB connection 
pool (PID 57335)
[2019-03-09 17:21:00,130] {jobs.py:1627} DEBUG - Sleeping for 0.99 seconds to 
prevent excessive logging
[2019-03-09 17:21:00,156] {jobs.py:489} DEBUG - Waiting for 
<Process(DagFileProcessor0-Process, stopped)>
[2019-03-09 17:21:00,161] {jobs.py:489} DEBUG - Waiting for 
<Process(DagFileProcessor1-Process, stopped)>
[2019-03-09 17:21:00,163] {jobs.py:489} DEBUG - Waiting for 
<Process(DagFileProcessor2-Process, stopped)>
[2019-03-09 17:21:00,164] {jobs.py:489} DEBUG - Waiting for 
<Process(DagFileProcessor3-Process, stopped)>
[2019-03-09 17:21:00,252] {settings.py:201} DEBUG - Disposing DB connection 
pool (PID 57338)
[2019-03-09 17:21:00,263] {settings.py:201} DEBUG - Disposing DB connection 
pool (PID 57340)
[2019-03-09 17:21:00,692] {settings.py:201} DEBUG - Disposing DB connection 
pool (PID 57339)
[2019-03-09 17:21:00,759] {settings.py:201} DEBUG - Disposing DB connection 
pool (PID 57341)
[2019-03-09 17:21:01,125] {jobs.py:1548} DEBUG - Starting Loop...
[2019-03-09 17:21:01,125] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:21:01,151] {jobs.py:1098} DEBUG - No tasks to consider for 
execution.
[2019-03-09 17:21:01,151] {jobs.py:1595} DEBUG - Heartbeating the executor
[2019-03-09 17:21:01,151] {base_executor.py:118} DEBUG - 0 running task 
instances
[2019-03-09 17:21:01,151] {base_executor.py:119} DEBUG - 0 in queue
[2019-03-09 17:21:01,151] {base_executor.py:120} DEBUG - 32 open slots
[2019-03-09 17:21:01,151] {base_executor.py:149} DEBUG - Calling the <class 
'airflow.executors.local_executor.LocalExecutor'> sync method
[2019-03-09 17:21:01,152] {jobs.py:1613} DEBUG - Ran scheduling loop in 0.03 
seconds
[2019-03-09 17:21:01,152] {jobs.py:1614} DEBUG - Sleeping for 1.00 seconds
[2019-03-09 17:21:01,159] {jobs.py:489} DEBUG - Waiting for 
<Process(DagFileProcessor4-Process, stopped)>
[2019-03-09 17:21:01,161] {jobs.py:489} DEBUG - Waiting for 
<Process(DagFileProcessor5-Process, stopped)>
[2019-03-09 17:21:01,163] {jobs.py:489} DEBUG - Waiting for 
<Process(DagFileProcessor6-Process, stopped)>
[2019-03-09 17:21:01,164] {jobs.py:489} DEBUG - Waiting for 
<Process(DagFileProcessor7-Process, stopped)>
[2019-03-09 17:21:01,619] {settings.py:201} DEBUG - Disposing DB connection 
pool (PID 57343)
[2019-03-09 17:21:01,702] {settings.py:201} DEBUG - Disposing DB connection 
pool (PID 57342)
[2019-03-09 17:21:01,731] {settings.py:201} DEBUG - Disposing DB connection 
pool (PID 57344)
[2019-03-09 17:21:02,155] {jobs.py:1627} DEBUG - Sleeping for 0.97 seconds to 
prevent excessive logging
[2019-03-09 17:21:02,163] {jobs.py:489} DEBUG - Waiting for 
<Process(DagFileProcessor8-Process, stopped)>
[2019-03-09 17:21:02,166] {jobs.py:489} DEBUG - Waiting for 
<Process(DagFileProcessor9-Process, stopped)>
[2019-03-09 17:21:02,168] {jobs.py:489} DEBUG - Waiting for 
<Process(DagFileProcessor10-Process, stopped)>
[2019-03-09 17:21:03,132] {jobs.py:1548} DEBUG - Starting Loop...
[2019-03-09 17:21:03,133] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:21:03,146] {jobs.py:1098} DEBUG - No tasks to consider for 
execution.
[2019-03-09 17:21:03,146] {jobs.py:1595} DEBUG - Heartbeating the executor
[2019-03-09 17:21:03,146] {base_executor.py:118} DEBUG - 0 running task 
instances
[2019-03-09 17:21:03,146] {base_executor.py:119} DEBUG - 0 in queue
[2019-03-09 17:21:03,147] {base_executor.py:120} DEBUG - 32 open slots
[2019-03-09 17:21:03,147] {base_executor.py:149} DEBUG - Calling the <class 
'airflow.executors.local_executor.LocalExecutor'> sync method
[2019-03-09 17:21:03,147] {jobs.py:1613} DEBUG - Ran scheduling loop in 0.01 
seconds
[2019-03-09 17:21:03,147] {jobs.py:1614} DEBUG - Sleeping for 1.00 seconds
[2019-03-09 17:21:04,149] {jobs.py:1627} DEBUG - Sleeping for 0.99 seconds to 
prevent excessive logging
[2019-03-09 17:21:05,138] {jobs.py:1548} DEBUG - Starting Loop...
[2019-03-09 17:21:05,138] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:21:05,139] {jobs.py:1595} DEBUG - Heartbeating the executor
[2019-03-09 17:21:05,139] {base_executor.py:118} DEBUG - 0 running task 
instances
[2019-03-09 17:21:05,139] {base_executor.py:119} DEBUG - 0 in queue
[2019-03-09 17:21:05,139] {base_executor.py:120} DEBUG - 32 open slots
[2019-03-09 17:21:05,139] {base_executor.py:149} DEBUG - Calling the <class 
'airflow.executors.local_executor.LocalExecutor'> sync method
[2019-03-09 17:21:05,140] {jobs.py:1613} DEBUG - Ran scheduling loop in 0.00 
seconds
[2019-03-09 17:21:05,140] {jobs.py:1614} DEBUG - Sleeping for 1.00 seconds
[2019-03-09 17:21:06,145] {jobs.py:1627} DEBUG - Sleeping for 1.00 seconds to 
prevent excessive logging
[2019-03-09 17:21:07,148] {jobs.py:1548} DEBUG - Starting Loop...
[2019-03-09 17:21:07,148] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:21:07,149] {jobs.py:1595} DEBUG - Heartbeating the executor
[2019-03-09 17:21:07,149] {base_executor.py:118} DEBUG - 0 running task 
instances
[2019-03-09 17:21:07,149] {base_executor.py:119} DEBUG - 0 in queue
[2019-03-09 17:21:07,149] {base_executor.py:120} DEBUG - 32 open slots
[2019-03-09 17:21:07,149] {base_executor.py:149} DEBUG - Calling the <class 
'airflow.executors.local_executor.LocalExecutor'> sync method
[2019-03-09 17:21:07,149] {jobs.py:1613} DEBUG - Ran scheduling loop in 0.00 
seconds
[2019-03-09 17:21:07,149] {jobs.py:1614} DEBUG - Sleeping for 1.00 seconds
[2019-03-09 17:21:08,153] {jobs.py:1627} DEBUG - Sleeping for 1.00 seconds to 
prevent excessive logging
[2019-03-09 17:21:09,155] {jobs.py:1548} DEBUG - Starting Loop...
[2019-03-09 17:21:09,155] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:21:09,155] {jobs.py:1595} DEBUG - Heartbeating the executor
[2019-03-09 17:21:09,156] {base_executor.py:118} DEBUG - 0 running task 
instances
[2019-03-09 17:21:09,156] {base_executor.py:119} DEBUG - 0 in queue
[2019-03-09 17:21:09,156] {base_executor.py:120} DEBUG - 32 open slots
[2019-03-09 17:21:09,156] {base_executor.py:149} DEBUG - Calling the <class 
'airflow.executors.local_executor.LocalExecutor'> sync method
[2019-03-09 17:21:09,156] {jobs.py:1613} DEBUG - Ran scheduling loop in 0.00 
seconds
[2019-03-09 17:21:09,156] {jobs.py:1614} DEBUG - Sleeping for 1.00 seconds
[2019-03-09 17:21:10,158] {jobs.py:1627} DEBUG - Sleeping for 1.00 seconds to 
prevent excessive logging
[2019-03-09 17:21:11,162] {jobs.py:1548} DEBUG - Starting Loop...
[2019-03-09 17:21:11,162] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:21:11,162] {jobs.py:1595} DEBUG - Heartbeating the executor
[2019-03-09 17:21:11,163] {base_executor.py:118} DEBUG - 0 running task 
instances
[2019-03-09 17:21:11,163] {base_executor.py:119} DEBUG - 0 in queue
[2019-03-09 17:21:11,163] {base_executor.py:120} DEBUG - 32 open slots
[2019-03-09 17:21:11,163] {base_executor.py:149} DEBUG - Calling the <class 
'airflow.executors.local_executor.LocalExecutor'> sync method
[2019-03-09 17:21:11,163] {jobs.py:1613} DEBUG - Ran scheduling loop in 0.00 
seconds
[2019-03-09 17:21:11,163] {jobs.py:1614} DEBUG - Sleeping for 1.00 seconds
[2019-03-09 17:21:12,165] {jobs.py:1627} DEBUG - Sleeping for 1.00 seconds to 
prevent excessive logging
[2019-03-09 17:21:13,164] {jobs.py:1548} DEBUG - Starting Loop...
[2019-03-09 17:21:13,164] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:21:13,165] {jobs.py:1595} DEBUG - Heartbeating the executor
[2019-03-09 17:21:13,165] {base_executor.py:118} DEBUG - 0 running task 
instances
[2019-03-09 17:21:13,165] {base_executor.py:119} DEBUG - 0 in queue
[2019-03-09 17:21:13,165] {base_executor.py:120} DEBUG - 32 open slots
[2019-03-09 17:21:13,165] {base_executor.py:149} DEBUG - Calling the <class 
'airflow.executors.local_executor.LocalExecutor'> sync method
[2019-03-09 17:21:13,165] {jobs.py:1613} DEBUG - Ran scheduling loop in 0.00 
seconds
[2019-03-09 17:21:13,166] {jobs.py:1614} DEBUG - Sleeping for 1.00 seconds
[2019-03-09 17:21:14,171] {jobs.py:1627} DEBUG - Sleeping for 1.00 seconds to 
prevent excessive logging
[2019-03-09 17:21:15,175] {jobs.py:1548} DEBUG - Starting Loop...
[2019-03-09 17:21:15,175] {jobs.py:1559} INFO - Harvesting DAG parsing results
[2019-03-09 17:21:15,176] {jobs.py:1595} DEBUG - Heartbeating the executor
[2019-03-09 17:21:15,176] {base_executor.py:118} DEBUG - 0 running task 
instances
[2019-03-09 17:21:15,176] {base_executor.py:119} DEBUG - 0 in queue
[2019-03-09 17:21:15,176] {base_executor.py:120} DEBUG - 32 open slots
[2019-03-09 17:21:15,176] {base_executor.py:149} DEBUG - Calling the <class 
'airflow.executors.local_executor.LocalExecutor'> sync method
[2019-03-09 17:21:15,177] {jobs.py:1613} DEBUG - Ran scheduling loop in 0.00 
seconds
[2019-03-09 17:21:15,177] {jobs.py:1614} DEBUG - Sleeping for 1.00 seconds
^C[2019-03-09 17:21:15,588] {dag_processing.py:587} INFO - Terminating manager 
process: 57333
[2019-03-09 17:21:15,589] {dag_processing.py:592} INFO - Waiting up to 5s for 
manager process to exit...
[2019-03-09 17:21:15,590] {jobs.py:1513} INFO - Exited execute loop
[2019-03-09 17:21:15,597] {cli_action_loggers.py:81} DEBUG - Calling callbacks: 
[]
[2019-03-09 17:21:15,599] {settings.py:201} DEBUG - Disposing DB connection 
pool (PID 57297)
{code}
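
As a sanity check (a minimal sketch, assuming the same environment and AIRFLOW_HOME as the scheduler), the value that Airflow's config layer actually returns can be inspected like this:
{code:python}
# Ask Airflow's own configuration object what it thinks the interval is.
from airflow.configuration import conf

print(conf.get('core', 'dags_folder'))
print(conf.getint('scheduler', 'dag_dir_list_interval'))
{code}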


> Dag file processing does not respect dag_dir_list_interval
> ----------------------------------------------------------
>
>                 Key: AIRFLOW-4056
>                 URL: https://issues.apache.org/jira/browse/AIRFLOW-4056
>             Project: Apache Airflow
>          Issue Type: Bug
>          Components: scheduler
>    Affects Versions: 1.10.2
>         Environment: I have confirmed this issue on Mac and CentOS environments, using a MySQL backend.
>            Reporter: Daniel Standish
>            Priority: Major



