Hi Ahmet,

Thank you. I have attached the requirements.txt that was generated with
pip freeze > requirements.txt on the Datalab notebook.
The file does not include the apache-beam package, only apache-airflow==1.9.0.
Could you please let me know which package includes indexes.base?
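
For reference, here is a quick check that can be run in a Datalab cell; it
assumes indexes.base refers to the pandas-internal module, which lives at
pandas.core.indexes.base in pandas >= 0.20 (the check itself is only a sketch):

# Assumption: the missing module belongs to pandas.
import pandas
print(pandas.__version__)                  # 0.22.0 per the attached requirements.txt

import pandas.core.indexes.base            # importable locally when the module is present
print(pandas.core.indexes.base.__file__)   # shows which installed package provides it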

Best,
Eila


On Thu, Jun 21, 2018 at 1:55 PM, Ahmet Altay <al...@google.com> wrote:

> Hi Ella,
>
> It seems like the package related to indexes.base is not installed on the
> workers. Could you try one of the methods in "Managing Python Pipeline
> Dependencies" [1] to stage that dependency?
>
> Ahmet
>
> [1] https://beam.apache.org/documentation/sdks/python-pipeline-dependencies/
>
> On Thu, Jun 21, 2018 at 9:40 AM, OrielResearch Eila Arich-Landkof <
> e...@orielresearch.org> wrote:
>
>> Hello all,
>>
>> Exploring this issue (the local runner works great and Dataflow fails), it
>> looks like there might be a mismatch between the apache_beam version and the
>> Dataflow version.
>>
>> Please let me know what your thoughts are. If it is a version issue, what
>> updates should be executed? And how do I cover the installation on both the
>> Datalab VM and the Google Cloud Platform side?
>>
>> Should I run the following command (or a different one) in the shell, or in
>> Datalab?
>>
>> I tried running it in Datalab and it didn't solve the issue (*see the full
>> log report below*):
>>
>> pip install --upgrade apache_beam google-cloud-dataflow
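>>
>> As a minimal sketch, the versions in the notebook environment can also be
>> checked directly from a Datalab cell (pkg_resources is part of setuptools;
>> the check only shows whether the notebook SDK differs from the
>> google-cloud-dataflow==2.0.0 tarball that the logs below show being staged):
>>
>> import apache_beam
>> import pkg_resources
>>
>> # Beam SDK version installed in the notebook environment
>> print(apache_beam.__version__)
>> # Version of the google-cloud-dataflow distribution, if it is installed
>> print(pkg_resources.get_distribution('google-cloud-dataflow').version)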
>>
>> Please advise.
>>
>> Thanks,
>> Eila
>>
>>
>> *All logs:*
>>
>>
>> INFO:root:Staging the SDK tarball from PyPI to 
>> gs://archs4/staging/label-archs4-tsv.1529598693.453095/dataflow_python_sdk.tar
>> INFO:root:Executing command: ['/usr/local/envs/py2env/bin/python', '-m', 
>> 'pip', 'install', '--download', '/tmp/tmp5MM5wr', 
>> 'google-cloud-dataflow==2.0.0', '--no-binary', ':all:', '--no-deps']
>> INFO:root:file copy from /tmp/tmp5MM5wr/google-cloud-dataflow-2.0.0.tar.gz 
>> to 
>> gs://archs4/staging/label-archs4-tsv.1529598693.453095/dataflow_python_sdk.tar.
>> INFO:oauth2client.client:Attempting refresh to obtain initial access_token
>> INFO:oauth2client.client:Attempting refresh to obtain initial access_token
>> INFO:root:Create job: <Job
>>  createTime: u'2018-06-21T16:31:51.304121Z'
>>  currentStateTime: u'1970-01-01T00:00:00Z'
>>  id: u'2018-06-21_09_31_50-17545183031487377678'
>>  location: u'us-central1'
>>  name: u'label-archs4-tsv'
>>  projectId: u'orielresearch-188115'
>>  stageStates: []
>>  steps: []
>>  tempFiles: []
>>  type: TypeValueValuesEnum(JOB_TYPE_BATCH, 1)>
>> INFO:root:Created job with id: [2018-06-21_09_31_50-17545183031487377678]
>> INFO:root:To access the Dataflow monitoring console, please navigate to 
>> https://console.developers.google.com/project/orielresearch-188115/dataflow/job/2018-06-21_09_31_50-17545183031487377678
>> INFO:root:Job 2018-06-21_09_31_50-17545183031487377678 is in state 
>> JOB_STATE_PENDING
>> INFO:root:2018-06-21T16:31:50.476Z: JOB_MESSAGE_DETAILED: Autoscaling is 
>> enabled for job 2018-06-21_09_31_50-17545183031487377678. The number of 
>> workers will be between 1 and 1000.
>> INFO:root:2018-06-21T16:31:50.506Z: JOB_MESSAGE_DETAILED: Autoscaling was 
>> automatically enabled for job 2018-06-21_09_31_50-17545183031487377678.
>> INFO:root:2018-06-21T16:31:53.079Z: JOB_MESSAGE_DETAILED: Checking required 
>> Cloud APIs are enabled.
>> INFO:root:2018-06-21T16:31:53.385Z: JOB_MESSAGE_DETAILED: Checking 
>> permissions granted to controller Service Account.
>> INFO:root:2018-06-21T16:31:54.161Z: JOB_MESSAGE_BASIC: Worker configuration: 
>> n1-standard-1 in us-central1-b.
>> INFO:root:2018-06-21T16:31:54.910Z: JOB_MESSAGE_DETAILED: Expanding 
>> CoGroupByKey operations into optimizable parts.
>> INFO:root:2018-06-21T16:31:54.936Z: JOB_MESSAGE_DEBUG: Combiner lifting 
>> skipped for step writing to TSV files/Write/WriteImpl/GroupByKey: GroupByKey 
>> not followed by a combiner.
>> INFO:root:2018-06-21T16:31:54.968Z: JOB_MESSAGE_DETAILED: Expanding 
>> GroupByKey operations into optimizable parts.
>> INFO:root:2018-06-21T16:31:54.992Z: JOB_MESSAGE_DETAILED: Lifting 
>> ValueCombiningMappingFns into MergeBucketsMappingFns
>> INFO:root:2018-06-21T16:31:55.056Z: JOB_MESSAGE_DEBUG: Annotating graph with 
>> Autotuner information.
>> INFO:root:2018-06-21T16:31:55.168Z: JOB_MESSAGE_DETAILED: Fusing adjacent 
>> ParDo, Read, Write, and Flatten operations
>> INFO:root:2018-06-21T16:31:55.195Z: JOB_MESSAGE_DETAILED: Fusing consumer 
>> create more columns into Extract the rows from dataframe
>> INFO:root:2018-06-21T16:31:55.221Z: JOB_MESSAGE_DETAILED: Fusing consumer 
>> writing to TSV files/Write/WriteImpl/GroupByKey/Reify into writing to TSV 
>> files/Write/WriteImpl/WindowInto(WindowIntoFn)
>> INFO:root:2018-06-21T16:31:55.244Z: JOB_MESSAGE_DETAILED: Fusing consumer 
>> writing to TSV files/Write/WriteImpl/GroupByKey/Write into writing to TSV 
>> files/Write/WriteImpl/GroupByKey/Reify
>> INFO:root:2018-06-21T16:31:55.271Z: JOB_MESSAGE_DETAILED: Fusing consumer 
>> writing to TSV files/Write/WriteImpl/WriteBundles/Do into writing to TSV 
>> files/Write/WriteImpl/GroupByKey/GroupByWindow
>> INFO:root:2018-06-21T16:31:55.303Z: JOB_MESSAGE_DETAILED: Fusing consumer 
>> writing to TSV files/Write/WriteImpl/Map(<lambda at iobase.py:895>) into 
>> create more columns
>> INFO:root:2018-06-21T16:31:55.328Z: JOB_MESSAGE_DETAILED: Fusing consumer 
>> writing to TSV files/Write/WriteImpl/WindowInto(WindowIntoFn) into writing 
>> to TSV files/Write/WriteImpl/Map(<lambda at iobase.py:895>)
>> INFO:root:2018-06-21T16:31:55.341Z: JOB_MESSAGE_DETAILED: Fusing consumer 
>> writing to TSV files/Write/WriteImpl/GroupByKey/GroupByWindow into writing 
>> to TSV files/Write/WriteImpl/GroupByKey/Read
>> INFO:root:2018-06-21T16:31:55.365Z: JOB_MESSAGE_DETAILED: Fusing consumer 
>> writing to TSV files/Write/WriteImpl/InitializeWrite into writing to TSV 
>> files/Write/WriteImpl/DoOnce/Read
>> INFO:root:2018-06-21T16:31:55.396Z: JOB_MESSAGE_DEBUG: Workflow config is 
>> missing a default resource spec.
>> INFO:root:2018-06-21T16:31:55.432Z: JOB_MESSAGE_DEBUG: Adding StepResource 
>> setup and teardown to workflow graph.
>> INFO:root:2018-06-21T16:31:55.461Z: JOB_MESSAGE_DEBUG: Adding workflow start 
>> and stop steps.
>> INFO:root:2018-06-21T16:31:55.486Z: JOB_MESSAGE_DEBUG: Assigning stage ids.
>> INFO:root:2018-06-21T16:31:55.641Z: JOB_MESSAGE_DEBUG: Executing wait step 
>> start15
>> INFO:root:Job 2018-06-21_09_31_50-17545183031487377678 is in state 
>> JOB_STATE_RUNNING
>> INFO:root:2018-06-21T16:31:55.701Z: JOB_MESSAGE_BASIC: Executing operation 
>> writing to TSV files/Write/WriteImpl/DoOnce/Read+writing to TSV 
>> files/Write/WriteImpl/InitializeWrite
>> INFO:root:2018-06-21T16:31:55.727Z: JOB_MESSAGE_BASIC: Executing operation 
>> writing to TSV files/Write/WriteImpl/GroupByKey/Create
>> INFO:root:2018-06-21T16:31:55.739Z: JOB_MESSAGE_DEBUG: Starting worker pool 
>> setup.
>> INFO:root:2018-06-21T16:31:55.753Z: JOB_MESSAGE_BASIC: Starting 1 workers in 
>> us-central1-b...
>> INFO:root:2018-06-21T16:31:55.839Z: JOB_MESSAGE_DEBUG: Value "writing to TSV 
>> files/Write/WriteImpl/GroupByKey/Session" materialized.
>> INFO:root:2018-06-21T16:31:55.901Z: JOB_MESSAGE_BASIC: Executing operation 
>> Extract the rows from dataframe+create more columns+writing to TSV 
>> files/Write/WriteImpl/Map(<lambda at iobase.py:895>)+writing to TSV 
>> files/Write/WriteImpl/WindowInto(WindowIntoFn)+writing to TSV 
>> files/Write/WriteImpl/GroupByKey/Reify+writing to TSV 
>> files/Write/WriteImpl/GroupByKey/Write
>> INFO:root:2018-06-21T16:31:56.332Z: JOB_MESSAGE_BASIC: BigQuery export job 
>> "dataflow_job_576766793008965363" started. You can check its status with the 
>> bq tool: "bq show -j --project_id=orielresearch-188115 
>> dataflow_job_576766793008965363".
>> INFO:root:2018-06-21T16:32:03.683Z: JOB_MESSAGE_DETAILED: Autoscaling: 
>> Raised the number of workers to 0 based on the rate of progress in the 
>> currently running step(s).
>> INFO:root:2018-06-21T16:32:14.181Z: JOB_MESSAGE_DETAILED: Autoscaling: 
>> Raised the number of workers to 1 based on the rate of progress in the 
>> currently running step(s).
>> INFO:root:2018-06-21T16:32:26.827Z: JOB_MESSAGE_DETAILED: BigQuery export 
>> job progress: "dataflow_job_576766793008965363" observed total of 1 exported 
>> files thus far.
>> INFO:root:2018-06-21T16:32:26.850Z: JOB_MESSAGE_BASIC: BigQuery export job 
>> finished: "dataflow_job_576766793008965363"
>> INFO:root:2018-06-21T16:32:33.078Z: JOB_MESSAGE_DETAILED: Workers have 
>> started successfully.
>> INFO:root:2018-06-21T16:35:35.511Z: JOB_MESSAGE_ERROR: Traceback (most 
>> recent call last):
>>   File 
>> "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", 
>> line 581, in do_work
>>     work_executor.execute()
>>   File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", 
>> line 166, in execute
>>     op.start()
>>   File "dataflow_worker/operations.py", line 283, in 
>> dataflow_worker.operations.DoOperation.start 
>> (dataflow_worker/operations.c:10680)
>>     def start(self):
>>   File "dataflow_worker/operations.py", line 284, in 
>> dataflow_worker.operations.DoOperation.start 
>> (dataflow_worker/operations.c:10574)
>>     with self.scoped_start_state:
>>   File "dataflow_worker/operations.py", line 289, in 
>> dataflow_worker.operations.DoOperation.start 
>> (dataflow_worker/operations.c:9775)
>>     pickler.loads(self.spec.serialized_fn))
>>   File 
>> "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", 
>> line 225, in loads
>>     return dill.loads(s)
>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 277, in 
>> loads
>>     return load(file)
>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 266, in 
>> load
>>     obj = pik.load()
>>   File "/usr/lib/python2.7/pickle.py", line 858, in load
>>     dispatch[key](self)
>>   File "/usr/lib/python2.7/pickle.py", line 1090, in load_global
>>     klass = self.find_class(module, name)
>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 423, in 
>> find_class
>>     return StockUnpickler.find_class(self, module, name)
>>   File "/usr/lib/python2.7/pickle.py", line 1124, in find_class
>>     __import__(module)
>> ImportError: No module named indexes.base
>>
>> INFO:root:2018-06-21T16:35:38.897Z: JOB_MESSAGE_ERROR: Traceback (most 
>> recent call last):
>>   File 
>> "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", 
>> line 581, in do_work
>>     work_executor.execute()
>>   File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", 
>> line 166, in execute
>>     op.start()
>>   File "dataflow_worker/operations.py", line 283, in 
>> dataflow_worker.operations.DoOperation.start 
>> (dataflow_worker/operations.c:10680)
>>     def start(self):
>>   File "dataflow_worker/operations.py", line 284, in 
>> dataflow_worker.operations.DoOperation.start 
>> (dataflow_worker/operations.c:10574)
>>     with self.scoped_start_state:
>>   File "dataflow_worker/operations.py", line 289, in 
>> dataflow_worker.operations.DoOperation.start 
>> (dataflow_worker/operations.c:9775)
>>     pickler.loads(self.spec.serialized_fn))
>>   File 
>> "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", 
>> line 225, in loads
>>     return dill.loads(s)
>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 277, in 
>> loads
>>     return load(file)
>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 266, in 
>> load
>>     obj = pik.load()
>>   File "/usr/lib/python2.7/pickle.py", line 858, in load
>>     dispatch[key](self)
>>   File "/usr/lib/python2.7/pickle.py", line 1090, in load_global
>>     klass = self.find_class(module, name)
>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 423, in 
>> find_class
>>     return StockUnpickler.find_class(self, module, name)
>>   File "/usr/lib/python2.7/pickle.py", line 1124, in find_class
>>     __import__(module)
>> ImportError: No module named indexes.base
>>
>> INFO:root:2018-06-21T16:35:42.245Z: JOB_MESSAGE_ERROR: Traceback (most 
>> recent call last):
>>   File 
>> "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", 
>> line 581, in do_work
>>     work_executor.execute()
>>   File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", 
>> line 166, in execute
>>     op.start()
>>   File "dataflow_worker/operations.py", line 283, in 
>> dataflow_worker.operations.DoOperation.start 
>> (dataflow_worker/operations.c:10680)
>>     def start(self):
>>   File "dataflow_worker/operations.py", line 284, in 
>> dataflow_worker.operations.DoOperation.start 
>> (dataflow_worker/operations.c:10574)
>>     with self.scoped_start_state:
>>   File "dataflow_worker/operations.py", line 289, in 
>> dataflow_worker.operations.DoOperation.start 
>> (dataflow_worker/operations.c:9775)
>>     pickler.loads(self.spec.serialized_fn))
>>   File 
>> "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", 
>> line 225, in loads
>>     return dill.loads(s)
>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 277, in 
>> loads
>>     return load(file)
>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 266, in 
>> load
>>     obj = pik.load()
>>   File "/usr/lib/python2.7/pickle.py", line 858, in load
>>     dispatch[key](self)
>>   File "/usr/lib/python2.7/pickle.py", line 1090, in load_global
>>     klass = self.find_class(module, name)
>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 423, in 
>> find_class
>>     return StockUnpickler.find_class(self, module, name)
>>   File "/usr/lib/python2.7/pickle.py", line 1124, in find_class
>>     __import__(module)
>> ImportError: No module named indexes.base
>>
>> INFO:root:2018-06-21T16:35:45.619Z: JOB_MESSAGE_ERROR: Traceback (most 
>> recent call last):
>>   File 
>> "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", 
>> line 581, in do_work
>>     work_executor.execute()
>>   File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", 
>> line 166, in execute
>>     op.start()
>>   File "dataflow_worker/operations.py", line 283, in 
>> dataflow_worker.operations.DoOperation.start 
>> (dataflow_worker/operations.c:10680)
>>     def start(self):
>>   File "dataflow_worker/operations.py", line 284, in 
>> dataflow_worker.operations.DoOperation.start 
>> (dataflow_worker/operations.c:10574)
>>     with self.scoped_start_state:
>>   File "dataflow_worker/operations.py", line 289, in 
>> dataflow_worker.operations.DoOperation.start 
>> (dataflow_worker/operations.c:9775)
>>     pickler.loads(self.spec.serialized_fn))
>>   File 
>> "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", 
>> line 225, in loads
>>     return dill.loads(s)
>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 277, in 
>> loads
>>     return load(file)
>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 266, in 
>> load
>>     obj = pik.load()
>>   File "/usr/lib/python2.7/pickle.py", line 858, in load
>>     dispatch[key](self)
>>   File "/usr/lib/python2.7/pickle.py", line 1090, in load_global
>>     klass = self.find_class(module, name)
>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 423, in 
>> find_class
>>     return StockUnpickler.find_class(self, module, name)
>>   File "/usr/lib/python2.7/pickle.py", line 1124, in find_class
>>     __import__(module)
>> ImportError: No module named indexes.base
>>
>> INFO:root:2018-06-21T16:35:45.668Z: JOB_MESSAGE_DEBUG: Executing failure 
>> step failure14
>> INFO:root:2018-06-21T16:35:45.695Z: JOB_MESSAGE_ERROR: Workflow failed. 
>> Causes: S04:Extract the rows from dataframe+create more columns+writing to 
>> TSV files/Write/WriteImpl/Map(<lambda at iobase.py:895>)+writing to TSV 
>> files/Write/WriteImpl/WindowInto(WindowIntoFn)+writing to TSV 
>> files/Write/WriteImpl/GroupByKey/Reify+writing to TSV 
>> files/Write/WriteImpl/GroupByKey/Write failed., A work item was attempted 4 
>> times without success. Each time the worker eventually lost contact with the 
>> service. The work item was attempted on:
>>   label-archs4-tsv-06210931-a4r1-harness-rlqz,
>>   label-archs4-tsv-06210931-a4r1-harness-rlqz,
>>   label-archs4-tsv-06210931-a4r1-harness-rlqz,
>>   label-archs4-tsv-06210931-a4r1-harness-rlqz
>> INFO:root:2018-06-21T16:35:45.799Z: JOB_MESSAGE_DETAILED: Cleaning up.
>> INFO:root:2018-06-21T16:35:46Z: JOB_MESSAGE_DEBUG: Starting worker pool 
>> teardown.
>> INFO:root:2018-06-21T16:35:46.027Z: JOB_MESSAGE_BASIC: Stopping worker 
>> pool...
>>
>>
>>
>> On Wed, Jun 20, 2018 at 5:02 PM, OrielResearch Eila Arich-Landkof <
>> e...@orielresearch.org> wrote:
>>
>>> Hello,
>>>
>>> I am running the following pipeline on the local runner with no issues.
>>>
>>> logging.info('Define the pipeline')
>>> p = beam.Pipeline(options=options)
>>> samplePath = outputPath
>>> ExploreData = (p
>>>                | "Extract the rows from dataframe" >> beam.io.Read(
>>>                    beam.io.BigQuerySource('archs4.Debug_annotation'))
>>>                | "create more columns" >> beam.ParDo(
>>>                    CreateColForSampleFn(colListSubset, outputPath)))
>>> (ExploreData | 'writing to TSV files' >> beam.io.WriteToText(
>>>     'gs://archs4/output/dataExploration.txt', file_name_suffix='.tsv',
>>>     num_shards=1, append_trailing_newlines=True, header=colListStrHeader))
>>>
>>>
>>> Running on Dataflow raises the error below. I don't have any idea where to
>>> look for the issue; the error does not point to my pipeline code but to
>>> Apache Beam modules.
>>> I will try debugging by elimination. Please let me know if you have any
>>> direction for me.
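>>>
>>> For reference, a minimal sketch of staging the notebook's extra package
>>> versions with the job so the workers match the notebook (the requirements
>>> file name, the temp path, and the idea of pinning only the packages the
>>> DoFns need, e.g. pandas==0.22.0, are assumptions here):
>>>
>>> import apache_beam as beam
>>> from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions
>>>
>>> options = PipelineOptions(
>>>     runner='DataflowRunner',
>>>     project='orielresearch-188115',
>>>     staging_location='gs://archs4/staging',
>>>     temp_location='gs://archs4/temp')
>>> # requirements.txt lists only the extra packages the pipeline code imports,
>>> # each pinned to the version installed in the notebook.
>>> options.view_as(SetupOptions).requirements_file = 'requirements.txt'
>>> p = beam.Pipeline(options=options)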
>>>
>>> Many thanks,
>>> Eila
>>>
>>>
>>> ======================================================
>>>
>>> DataflowRuntimeException                Traceback (most recent call last)
>>> <ipython-input-151-1e5aeb8b7d9b> in <module>()
>>> ----> 1 p.run().wait_until_finish()
>>>
>>> /usr/local/envs/py2env/lib/python2.7/site-packages/apache_beam/runners/dataflow/dataflow_runner.pyc in wait_until_finish(self, duration)
>>>     776         raise DataflowRuntimeException(
>>>     777             'Dataflow pipeline failed. State: %s, Error:\n%s' %
>>> --> 778             (self.state, getattr(self._runner, 'last_error_msg', None)), self)
>>>     779     return self.state
>>>     780
>>>
>>> DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
>>> Traceback (most recent call last):
>>>   File 
>>> "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", 
>>> line 581, in do_work
>>>     work_executor.execute()
>>>   File 
>>> "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 
>>> 166, in execute
>>>     op.start()
>>>   File "dataflow_worker/operations.py", line 283, in 
>>> dataflow_worker.operations.DoOperation.start 
>>> (dataflow_worker/operations.c:10680)
>>>     def start(self):
>>>   File "dataflow_worker/operations.py", line 284, in 
>>> dataflow_worker.operations.DoOperation.start 
>>> (dataflow_worker/operations.c:10574)
>>>     with self.scoped_start_state:
>>>   File "dataflow_worker/operations.py", line 289, in 
>>> dataflow_worker.operations.DoOperation.start 
>>> (dataflow_worker/operations.c:9775)
>>>     pickler.loads(self.spec.serialized_fn))
>>>   File 
>>> "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", 
>>> line 225, in loads
>>>     return dill.loads(s)
>>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 277, in 
>>> loads
>>>     return load(file)
>>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 266, in 
>>> load
>>>     obj = pik.load()
>>>   File "/usr/lib/python2.7/pickle.py", line 858, in load
>>>     dispatch[key](self)
>>>   File "/usr/lib/python2.7/pickle.py", line 1090, in load_global
>>>     klass = self.find_class(module, name)
>>>   File "/usr/local/lib/python2.7/dist-packages/dill/dill.py", line 423, in 
>>> find_class
>>>     return StockUnpickler.find_class(self, module, name)
>>>   File "/usr/lib/python2.7/pickle.py", line 1124, in find_class
>>>     __import__(module)
>>> ImportError: No module named indexes.base
>>>
>>> ======================================================
>>>
>>> --
>>> Eila
>>> www.orielresearch.org
>>> https://www.meetup.com/Deep-Learning-In-Production/
>>>
>>>
>>>
>>
>>
>> --
>> Eila
>> www.orielresearch.org
>> https://www.meetup.com/Deep-Learning-In-Production/
>>
>>
>>
>


-- 
Eila
www.orielresearch.org
https://www.meetup.com/Deep-Learning-In-Production/
absl-py==0.2.0
alembic==0.8.10
apache-airflow==1.9.0
asn1crypto==0.24.0
astor==0.6.2
avro==1.8.2
backports-abc==0.5
backports.functools-lru-cache==1.5
backports.shutil-get-terminal-size==1.0.0
backports.ssl-match-hostname==3.5.0.1
backports.weakref==1.0.post1
beautifulsoup4==4.6.0
bleach==2.1.2
blinker==1.4
bokeh==0.12.15
brewer2mpl==1.4.1
bs4==0.0.1
cachetools==2.0.1
certifi==2018.4.16
cffi==1.11.5
chardet==3.0.4
click==6.7
cloudpickle==0.5.2
configparser==3.5.0
crcmod==1.7
croniter==0.3.20
cryptography==2.2.2
cycler==0.10.0
cytoolz==0.9.0.1
dask==0.17.1
datalab==1.1.2
decorator==4.3.0
dill==0.2.7.1
distributed==1.21.8
docutils==0.14
entrypoints==0.2.3
enum34==1.1.6
fastcache==1.0.2
Flask==0.11.1
Flask-Admin==1.4.1
Flask-Cache==0.13.1
Flask-Login==0.2.11
flask-swagger==0.2.13
Flask-WTF==0.14
funcsigs==1.0.0
functools32==3.2.3.post2
future==0.16.0
futures==3.2.0
gapic-google-cloud-datastore-v1==0.15.3
gapic-google-cloud-error-reporting-v1beta1==0.15.3
gapic-google-cloud-logging-v2==0.91.3
gast==0.2.0
ggplot==0.6.8
gitdb2==2.0.3
GitPython==2.1.9
google-api-core==0.1.4
google-api-python-client==1.6.2
google-apitools==0.5.10
google-auth==1.4.1
google-auth-httplib2==0.0.3
google-auth-oauthlib==0.1.0
google-cloud==0.32.0
google-cloud-bigquery==0.28.0
google-cloud-bigquery-datatransfer==0.1.1
google-cloud-bigtable==0.28.1
google-cloud-container==0.1.1
google-cloud-core==0.28.1
google-cloud-dataflow==2.0.0
google-cloud-datastore==1.4.0
google-cloud-dns==0.28.0
google-cloud-error-reporting==0.28.0
google-cloud-firestore==0.28.0
google-cloud-language==1.0.2
google-cloud-logging==1.4.0
google-cloud-monitoring==0.28.1
google-cloud-pubsub==0.30.1
google-cloud-resource-manager==0.28.1
google-cloud-runtimeconfig==0.28.1
google-cloud-spanner==0.29.0
google-cloud-speech==0.30.0
google-cloud-storage==1.6.0
google-cloud-trace==0.17.0
google-cloud-translate==1.3.1
google-cloud-videointelligence==1.0.1
google-cloud-vision==0.29.0
google-gax==0.15.16
google-resumable-media==0.3.1
googleapis-common-protos==1.5.3
googledatastore==7.0.1
grpc-google-iam-v1==0.11.4
grpcio==1.11.0
gunicorn==19.8.1
h5py==2.7.1
heapdict==1.0.0
html5lib==1.0.1
httplib2==0.11.3
idna==2.6
imageio==2.3.0
ipaddress==1.0.22
ipykernel==4.5.2
ipython==5.6.0
ipython-genutils==0.2.0
ipywidgets==6.0.0
itsdangerous==0.24
Jinja2==2.8
jsonschema==2.6.0
jupyter-client==5.2.3
jupyter-core==4.4.0
lime==0.1.1.23
locket==0.2.0
lockfile==0.12.2
lxml==3.8.0
Mako==1.0.7
Markdown==2.6.11
MarkupSafe==1.0
matplotlib==2.1.2
mistune==0.8.3
mltoolbox-datalab-classification-and-regression==1.0.1
mltoolbox-datalab-image-classification==0.2
mock==2.0.0
msgpack-python==0.5.6
nbconvert==5.3.1
nbformat==4.4.0
networkx==2.1
nltk==3.2.1
notebook==5.4.1
numpy==1.14.0
oauth2client==2.2.0
oauthlib==2.0.7
olefile==0.45.1
ordereddict==1.1
packaging==17.1
pandas==0.22.0
pandas-gbq==0.3.0
pandas-profiling==1.4.1
pandocfilters==1.4.2
partd==0.3.8
pathlib2==2.3.2
patsy==0.5.0
pbr==4.0.2
pexpect==4.5.0
pickleshare==0.7.4
Pillow==3.4.1
plotly==1.12.5
ply==3.8
prompt-toolkit==1.0.15
proto-google-cloud-datastore-v1==0.90.4
proto-google-cloud-error-reporting-v1beta1==0.15.3
proto-google-cloud-logging-v2==0.91.3
protobuf==3.5.2
psutil==4.3.0
ptyprocess==0.5.2
pyasn1==0.4.2
pyasn1-modules==0.2.1
pycparser==2.18
Pygments==2.1.3
PyJWT==1.6.1
pyOpenSSL==17.5.0
pyparsing==2.2.0
PySocks==1.6.8
python-daemon==2.1.2
python-dateutil==2.5.0
python-editor==1.0.3
python-nvd3==0.14.2
python-slugify==1.1.4
python-snappy==0.5.1
pytz==2016.7
PyWavelets==0.5.2
PyYAML==3.12
pyzmq==16.0.2
requests==2.18.4
requests-oauthlib==0.8.0
rsa==3.4.2
scandir==1.7
scikit-image==0.13.0
scikit-learn==0.19.1
scipy==1.0.0
seaborn==0.7.0
Send2Trash==1.5.0
setproctitle==1.1.10
simplegeneric==0.8.1
simplejson==3.14.0
singledispatch==3.4.0.3
six==1.10.0
smmap2==2.0.3
sortedcontainers==1.5.10
SQLAlchemy==1.2.7
statsmodels==0.8.0
subprocess32==3.2.7
sympy==0.7.6.1
tabulate==0.7.7
tblib==1.3.2
tensorboard==1.8.0
tensorflow==1.8.0
termcolor==1.1.0
terminado==0.8.1
testpath==0.3.1
thrift==0.11.0
toolz==0.9.0
tornado==4.5.1
traitlets==4.3.2
Unidecode==1.0.22
uritemplate==3.0.0
urllib3==1.22
wcwidth==0.1.7
webencodings==0.5.1
Werkzeug==0.14.1
widgetsnbextension==3.2.1
WTForms==2.1
xgboost==0.6a2
zict==0.1.3
zope.deprecation==4.3.0
