Regarding scheduling the DAG with trigger_dag command

2016-11-10 Thread twinkle sachdeva
Hi,

After reading the Airflow tips-and-tricks blog post, I was trying my hand
at the trigger_dag command.

It only creates a DAG run with status "running"; no task actually starts
executing. With backfill, by contrast, the DAG is scheduled there and then.

How can I force the tasks of the DAG to start via the trigger_dag command?
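
For concreteness, the command I am running is essentially this (the dag id
is a placeholder):

airflow trigger_dag my_dag_id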

Regards,
Twinkle


Re: Logging to postgres as super user

2016-11-07 Thread twinkle sachdeva
Hi,

I used the following code to do the bulk upload into Postgres (it does not
have any superuser issue):

def bulk_load(self, table, tmp_file):
    """
    Loads a tab-delimited file into a database table.
    """
    conn = self.get_conn()
    cur = conn.cursor()
    # COPY ... FROM STDIN streams the file over the client connection,
    # so it does not need superuser rights on the server.
    sql = "COPY " + table + " FROM STDIN"
    with open(tmp_file) as f:
        cur.copy_expert(sql, f)
    conn.commit()
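
For completeness, a hypothetical usage sketch (the connection id, table
name and file path are placeholders), assuming the method above replaces
the stock PostgresHook.bulk_load:

from airflow.hooks import PostgresHook

# Placeholder connection id; the file is expected to be tab-delimited.
hook = PostgresHook(postgres_conn_id='postgres_default')
hook.bulk_load('my_table', '/tmp/data.tsv')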


Regards,

Twinkle

On Thu, Nov 3, 2016 at 9:10 PM, Laura Lorenz wrote:

> You'll need to call your task with a postgres connection that authenticates
> as a user that already has the privileges (e.g. superuser privileges) it
> needs. You can change the connection details for a database connection
> either airflow-wide by changing the default connection through the UI
> <https://airflow.incubator.apache.org/configuration.html#connections> or
> with an environment variable, pass a non-default connection through to
> the specific task's operator
> <https://airflow.incubator.apache.org/code.html#airflow.operators.PostgresOperator>,
> or, if you instantiate the hook yourself, use the hook's get_connection method
> <https://github.com/apache/incubator-airflow/blob/master/airflow/hooks/base_hook.py#L59>.
> You'd need to already have the credentials to an appropriately provisioned
> postgres user; otherwise you'll need to create or modify a user with the
> appropriate privileges on your postgres server directly in the first place.
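
To illustrate the non-default connection route described above, a minimal
sketch (the 'postgres_admin' connection id and the DAG are hypothetical):

from datetime import datetime
from airflow import DAG
from airflow.operators import PostgresOperator

dag = DAG('postgres_conn_example', start_date=datetime(2016, 11, 1))

# The task authenticates with whatever credentials are stored under the
# hypothetical 'postgres_admin' connection id instead of the default.
run_as_admin = PostgresOperator(
    task_id='run_as_admin',
    postgres_conn_id='postgres_admin',
    sql="SELECT 1",
    dag=dag)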


Re: Airflow state change diagram

2016-11-02 Thread twinkle sachdeva
Thanks Gerard for sharing it.

Regards,
Twinkle

On Mon, Oct 31, 2016 at 2:35 AM, Gerard Toonstra wrote:

> I was looking at trying to fix AIRFLOW-137 (max_active_runs not respected),
> but quickly noticed that the code that does all the scheduling is rather
> complex with state updates going on across multiple source files in
> multiple threads, etc.
>
> It's then best to find a suitable way to visualize all this complexity, so
> I built this state change diagram:
>
> https://docs.google.com/spreadsheets/d/1vVvOwfDSacTC_YzwUkOMyykP6LiipCeoW_V70PuFrN4/edit?usp=sharing
>
> The state changes represent a potential execution path where the state for
> a task instance will be updated to that value. Backfill is not considered
> in this diagram. States for dagruns/jobs/dags are also not considered.
>
> Could be useful for someone else.
>
> Rgds,
>
> Gerard
>


Logging to postgres as super user

2016-11-02 Thread twinkle sachdeva
Hi,

I was trying to do a bulk upload into Postgres using its COPY command.

While doing so, I received this message:

airflow/hooks/postgres_hook.py", line 58, in bulk_load

cur.execute("COPY %s FROM '%s'" % (table, tmp_file))

psycopg2.ProgrammingError: must be superuser to COPY to or from a file

HINT:  Anyone can COPY to stdout or from stdin. psql's \copy command also
works for anyone.

While browsing, I learned that I can use the ALTER USER command to assign
superuser status to the current user.

While trying to execute that, I am receiving the following error:

cur.execute("alter user smrt_repl_rt superuser;")

psycopg2.ProgrammingError: must be superuser to alter superusers


Is there a way I can connect to the database as a superuser via some other
arguments?
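
Following the HINT, a possible stdin-based sketch with plain psycopg2 (the
DSN, table and file path are placeholders); COPY ... FROM STDIN streams the
file over the client connection, so no superuser is needed:

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder DSN
cur = conn.cursor()
with open('/tmp/data.tsv') as f:
    cur.copy_expert("COPY my_table FROM STDIN", f)
conn.commit()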


Regards,

Twinkle


Re: retry handler not getting called

2016-10-27 Thread twinkle sachdeva
Please ignore.

It is being called.



retry handler not getting called

2016-10-27 Thread twinkle sachdeva
Hi,

I am working with a HiveToMySql transfer operation, which is not able to
connect and gives the following logs:

HiveServer2Error: Failed after retrying 3 times

[2016-10-27 01:54:46,524] {models.py:1298} INFO - Marking task as
UP_FOR_RETRY


As per the code in models.py
(https://github.com/apache/incubator-airflow/blob/master/airflow/models.py#L1347),
it should be calling the retry handler (we are using version 1.7.1.3), but
that is not happening.


Here is the relevant code snippet:


def my_retry_handler(context):
    print "my_retry_handler called"
    slack_retry_notification = SlackAPIPostOperator(
        task_id='Slack_Failure_Notification',
        token="XX",
        channel='@yy',
        text=":skull: - {time} - {dag} attempt has failed".format(
            dag='Some Dag',
            time=datetime.now().strftime("%Y-%m-%d %H:%M:%S")),
        owner='Retry Handler')
    return slack_retry_notification.execute


hiveToMySql = HiveToMySqlTransfer(sql="jdbsd",
                                  hiveserver2_conn_id='hiveserver2_default',
                                  # ... other arguments elided ...
                                  on_retry_callback=my_retry_handler)


Is there something I am missing?
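
For comparison, a sketch of a callback that runs the notification inside
the handler (token and channel are placeholders); note that the version
above returns slack_retry_notification.execute without ever invoking it:

from datetime import datetime
from airflow.operators import SlackAPIPostOperator

def my_retry_handler(context):
    # Build the notification and execute it here, rather than handing
    # back the unbound .execute method.
    slack_retry_notification = SlackAPIPostOperator(
        task_id='Slack_Failure_Notification',
        token="XX",        # placeholder token
        channel='@yy',     # placeholder channel
        text="retry at %s" % datetime.now().strftime("%Y-%m-%d %H:%M:%S"))
    slack_retry_notification.execute(context=context)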



Regards,

Twinkle


Re: Regarding hive server2

2016-10-27 Thread twinkle sachdeva
Hi Maxime,

Before this setting, I was getting the following exception:

  File "/home/xxx/.pyenv/versions/2.7.12/lib/python2.7/site-packages/pyhs2/cloudera/thrift_sasl.py", line 66, in open
    message=("Could not start SASL: %s" % self.sasl.getError()))
thrift.transport.TTransport.TTransportException: Could not start SASL:
Error in sasl_client_start (-4) SASL(-4): no mechanism available: No worthy
mechs found


After using the NOSASL setting, I am getting the following exception:

  File "/home/xxx/.pyenv/versions/2.7.12/lib/python2.7/site-packages/pyhs2/TCLIService/TCLIService.py", line 154, in OpenSession
    return self.recv_OpenSession()
  File "/home/xxx/.pyenv/versions/2.7.12/lib/python2.7/site-packages/pyhs2/TCLIService/TCLIService.py", line 165, in recv_OpenSession
    (fname, mtype, rseqid) = self._iprot.readMessageBegin()
  File "/home/xxx/.pyenv/versions/2.7.12/lib/python2.7/site-packages/thrift/protocol/TBinaryProtocol.py", line 140, in readMessageBegin
    name = self.trans.readAll(sz)
  File "/home/xxx/.pyenv/versions/2.7.12/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 58, in readAll
    chunk = self.read(sz - have)
  File "/home/xxx/.pyenv/versions/2.7.12/lib/python2.7/site-packages/thrift/transport/TTransport.py", line 159, in read
    self.__rbuf = StringIO(self.__trans.read(max(sz, self.__rbuf_size)))
  File "/home/xxx/.pyenv/versions/2.7.12/lib/python2.7/site-packages/thrift/transport/TSocket.py", line 118, in read
    message='TSocket read 0 bytes')
thrift.transport.TTransport.TTransportException: TSocket read 0 bytes.


I did check whether the hive server is running.

I get the same issue if I try to connect using a simple Python program as
well.
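
The simple standalone test is essentially this sketch (host and port are
placeholders; authMechanism mirrors the Connection's extra param):

import pyhs2

conn = pyhs2.connect(host='hive-host', port=10000, authMechanism='NOSASL')
cur = conn.cursor()
cur.execute('show databases')
print cur.fetchall()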

On Wed, Oct 26, 2016 at 9:17 PM, Maxime Beauchemin <
maximebeauche...@gmail.com> wrote:

> From memory, I think this is related to having the wrong authentication
> method:
> https://github.com/apache/incubator-airflow/blob/master/airflow/hooks/hive_hooks.py#L578
>
> You may want to try NOSASL. To do that I think you have to put something
> like `{ "authMechanism": "NOSASL" }` in your Connection's extra params.
>


Regarding hive server2

2016-10-26 Thread twinkle sachdeva
Hi,

I am trying to use the HiveToMySqlTransfer operator, but I am not able to
read any data; I get the following error:

TSocket.py", line 120, in read

message='TSocket read 0 bytes')

thrift.transport.TTransport.TTransportException: TSocket read 0 bytes

It seems to happen due to some mismatch in the Thrift protocol
specification.

Please advise on what can be done.


Regards,

Twinkle


Regarding running of the task in airflow

2016-10-21 Thread twinkle sachdeva
Hi,

I have scheduled a job at 3:00 a.m. daily.

When I go to the detailed view of the DAG, I see execution instances from
the start date up to yesterday only, even if I am viewing it at 11:00 a.m.

Any pointers?

P.S.: In the pitfalls document (
https://medium.com/handy-tech/airflow-tips-tricks-and-pitfalls-9ba53fba14eb#.x4jwbmw0b
), it is mentioned that DAG execution happens at the end of the schedule
period and not at its start. Unfortunately, I am not able to relate that to
the issue I am facing.
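
As an illustration of that pitfall, a hypothetical daily DAG (names and
dates are placeholders):

from datetime import datetime, timedelta
from airflow import DAG

# The run stamped with execution_date 2016-10-20 03:00 is only created
# once that schedule period has ended, i.e. after 2016-10-21 03:00;
# this is why today's instance is not visible yet in the morning.
dag = DAG('daily_3am_example',
          start_date=datetime(2016, 10, 1, 3, 0),
          schedule_interval=timedelta(days=1))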

Regards,
Twinkle


Re: Regarding force running the whole graph from UI

2016-10-21 Thread twinkle sachdeva
@Shin: Nice find.

@Alex: Thanks for sharing the thread. I think it will be a nice
enhancement, especially for teams switching from a technical stack where
this has been a feature.

On Fri, Oct 21, 2016 at 4:05 PM,  wrote:

> Nope, that’s all Airflow can do right now per UI. We had a discussion
> about it in July:
>
> https://mail-archives.apache.org/mod_mbox/incubator-airflow-dev/201607.mbox/browser
>
>
>
> cheers,
>
> --alex
>
>
>
> --
> B: mapredit.blogspot.com


Re: Regarding force running the whole graph from UI

2016-10-21 Thread twinkle sachdeva
It is not triggering the rest of the graph;
it leads to execution of the first task only.

Are there some other flags or layout changes which need to be done?


On Fri, Oct 21, 2016 at 3:45 PM,  wrote:

> In Airflow's main window, click on the DAG and you see the graph view.
> Clicking on one of the tasks should open a window with the option to
> "Run" that task. When you run the first task, the whole DAG will be
> executed (given no errors occur).
>
> I hope that’s what you’re looking for 😊
>
> Cheers,
>
> --alex
>
> --
> B: mapredit.blogspot.com


Re: Regarding force running the whole graph from UI

2016-10-21 Thread twinkle sachdeva
Hi Alex,

We are using the Celery Executor.
I am not able to figure out how I can do it from the UI. Is it possible?

Command line is awesome.

Regards,
Twinkle


On Fri, Oct 21, 2016 at 3:36 PM,  wrote:

> Hey Twinkle,
>
> Triggering DAGs per UI works only by using the CeleryExecutor in
> airflow.cfg, and probably with the mesos one, too. Both execute tasks
> remotely.
>
> P.S.: Personally I favor the LocalExecutor and trigger DAGs per CLI.
>
> Cheers,
>
> --alex
>
> --
> B: mapredit.blogspot.com


Re: Regarding force running the whole graph from UI

2016-10-21 Thread twinkle sachdeva
Hi Alex,

I mean the complete DAG.
In technical terms, being able to run the 'trigger_dag' command from the UI.

Regards,
Twinkle


On Fri, Oct 21, 2016 at 2:47 PM,  wrote:

> Hey,
>
> Something like this:
> https://pythonhosted.org/airflow/cli.html ?
>
> What do you mean by the whole graph? The complete DAG, or a task from a
> specific DAG?
>
> --alex
>
> --
> B: https://mapredit.blogspot.com


Regarding force running the whole graph from UI

2016-10-20 Thread twinkle sachdeva
Hi,

Is there a way by which we can force-run the whole graph from the Airflow UI?

Also, is there any documentation available regarding all the options
available in the pop-up dialog for running the graph?

Thanks & Regards,
Twinkle


Re: Regarding installation issue

2016-10-03 Thread twinkle sachdeva
Thanks Bolke. I will upgrade to Python 2.7 and try.

Regards,
Twinkle

On Oct 3, 2016 6:28 PM, "Bolke de Bruin"  wrote:

> We don’t support Python 2.6.
>
> Please upgrade to at least 2.7.
>
> Bolke
>


Regarding installation issue

2016-10-02 Thread twinkle sachdeva
Hi,

I have been trying to install Airflow on one of the VMs (Python version
2.6.6 and pip version 7.1.0).

I am getting the following stack trace:

creating build/docs

copying docs/index.txt -> build/docs

Converting docs/index.txt -> build/docs/index.html

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-build-pCrt4S/markdown/setup.py", line 270, in <module>
    'Topic :: Text Processing :: Markup :: HTML'
  File "/usr/lib64/python2.6/distutils/core.py", line 152, in setup
    dist.run_commands()
  File "/usr/lib64/python2.6/distutils/dist.py", line 975, in run_commands
    self.run_command(cmd)
  File "/usr/lib64/python2.6/distutils/dist.py", line 995, in run_command
    cmd_obj.run()
  File "/usr/lib/python2.6/site-packages/setuptools/command/install.py", line 53, in run
    return _install.run(self)
  File "/usr/lib64/python2.6/distutils/command/install.py", line 577, in run
    self.run_command('build')
  File "/usr/lib64/python2.6/distutils/cmd.py", line 333, in run_command
    self.distribution.run_command(command)
  File "/usr/lib64/python2.6/distutils/dist.py", line 995, in run_command
    cmd_obj.run()
  File "/usr/lib64/python2.6/distutils/command/build.py", line 134, in run
    self.run_command(cmd_name)
  File "/usr/lib64/python2.6/distutils/cmd.py", line 333, in run_command
    self.distribution.run_command(command)
  File "/usr/lib64/python2.6/distutils/dist.py", line 995, in run_command
    cmd_obj.run()
  File "/tmp/pip-build-pCrt4S/markdown/setup.py", line 184, in run
    out = template % self._get_context(src, outfile)
  File "/tmp/pip-build-pCrt4S/markdown/setup.py", line 116, in _get_context
    c['body'] = self.md.convert(src)
  File "build/lib/markdown/__init__.py", line 375, in convert
    newRoot = treeprocessor.run(root)
  File "build/lib/markdown/extensions/toc.py", line 229, in run
    for el in doc.iter():
AttributeError: iter





Command "/usr/bin/python -c "import setuptools,
tokenize;__file__='/tmp/pip-build-pCrt4S/markdown/setup.py';exec(compile(getattr(tokenize,
'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))"
install --record /tmp/pip-RYL54E-record/install-record.txt
--single-version-externally-managed --compile" failed with error code 1 in
/tmp/pip-build-pCrt4S/markdown



Please help.


Regards,

Twinkle