Re: DataNode not recognising available disk space

2014-09-17 Thread Charles Robertson
Hi Yusaku,

Thank you for your reply, unfortunately that's not the problem - the new
slave node has a single drive, so is using the default data directory path.
I'll post this to the Ambari list.

Regards,
Charles

On 16 September 2014 23:12, Yusaku Sako yus...@hortonworks.com wrote:

 Charles,

 If the newly added slave node has an extra disk that the other nodes
 don't have, then you will have to do the following in the Ambari Web
 UI so that dfs.data.dir is updated to include this extra drive for
 that node:
 * Go to Services > HDFS > Configs
 * Click on Manage Config Groups link
 * Create a new HDFS Configuration Group (name it appropriately), add
 the slave host in question to that group, and Save.
 * In the list of configs for HDFS, you will see DataNode directories
 with the current directories set up for DataNodes for all other hosts.
 * Click on the + next to DataNode directories and list out all the
 drives for this new DataNode.
 * Save the configuration changes.
 * Restart the DataNode.
 * Ambari (and HDFS) should start reflecting the added disk space.
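
 For reference, a quick sanity check on that host after the restart (a
 sketch, not an Ambari feature - it just reads the Hadoop client configs
 Ambari pushes out; dfs.datanode.data.dir is the Hadoop 2.x property name):

   # which directories did the DataNode configs end up with?
   hdfs getconf -confKey dfs.datanode.data.dir

   # does the reported cluster capacity now include the extra drive?
   # (run as the hdfs superuser)
   hdfs dfsadmin -report | grep "Configured Capacity"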

 I hope this solves your problem.

 FYI, there's an Ambari-specific mailing list at u...@ambari.apache.org
 that you can subscribe to by sending an email to
 user-subscr...@ambari.apache.org.

 Thanks,
 Yusaku


 On Tue, Sep 16, 2014 at 5:28 AM, Charles Robertson
 charles.robert...@gmail.com wrote:
  Hi all,
 
  I've added a new slave node to my cluster with a (single) larger disk
  size (100 GB) than on the other nodes. However, Ambari is reporting a
  total of 8.6 GB of disk space. lsblk correctly reports the disk size.
 
  Does anyone know why this might be? As I understand things, you need to
  tell HDFS how much space *not* to use, and it will use what it needs from
  the rest.
 
  (BTW this is not the same data node in my other question on this list,
  Cannot start DataNode after adding new volume.)
 
  Thanks,
  Charles




Cannot start DataNode after adding new volume

2014-09-16 Thread Charles Robertson
Hi all,

I am running out of space on a data node, so I added a new volume to the
host, mounted it, and made sure the permissions were set correctly. Then I
updated the 'DataNode Directories' property in Ambari to include the new path
(comma-separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I restarted
the components with stale configs for that host, but the DataNode wouldn't
come back up, reporting 'connection refused'. When I remove the new data
directory path from the property and restart, it starts fine.
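
In shell terms the preparation above amounts to roughly the following (a
sketch - the mount point is illustrative and the hdfs:hadoop ownership is
the one that comes up later in this thread):

  # new volume mounted at /data; create the DataNode directory on it
  mkdir -p /data/hdfs
  chown -R hdfs:hadoop /data/hdfs
  chmod 750 /data/hdfs        # or 700, to match the existing data directory

  # then set 'DataNode Directories' in Ambari to:
  #   /hadoop/hdfs/data,/data/hdfs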

What am I doing wrong?

Thanks,
Charles


Re: Cannot start DataNode after adding new volume

2014-09-16 Thread Charles Robertson
Hi Susheel,

Thanks for the reply. I'm not entirely sure what you mean.

When I created the new directory on the new volume I simply created an
empty directory. I see from the existing data node directory that it has a
sub-directory called current containing a file called VERSION.

Your advice is to create the 'current' sub-directory and copy the VERSION
file across to it without changes? I see it contains various GUIDs, so I'm
worried about it clashing with the VERSION file in the other data directory.

Thanks,
Charles

On 16 September 2014 10:57, Susheel Kumar Gadalay skgada...@gmail.com
wrote:

 Is it something to do with the current/VERSION file in the data node
 directory?

 Just copy it from the existing directory and start the DataNode.

 On 9/16/14, Charles Robertson charles.robert...@gmail.com wrote:
  Hi all,
 
  I am running out of space on a data node, so added a new volume to the
  host, mounted it and made sure the permissions were set OK. Then I
 updated
  the 'DataNode Directories' property in Ambari to include the new path
  (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I restarted
  the components with stale configs for that host, but the DataNode
 wouldn't
  come back up, reporting 'connection refused'. When I remove the new data
  directory path from the property and restart, it starts fine.
 
  What am I doing wrong?
 
  Thanks,
  Charles
 



Re: Cannot start DataNode after adding new volume

2014-09-16 Thread Charles Robertson
Hi Susheel,

Tried that - same result. DataNode still not starting.

Thanks,
Charles

On 16 September 2014 11:49, Susheel Kumar Gadalay skgada...@gmail.com
wrote:

 The VERSION file has to be the same across all the data node directories.

 So I suggested copying it as it is, using an OS command, and then starting
 the DataNode.
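
 In concrete terms, a sketch of that copy using the paths from this thread
 (as noted above, this alone did not get the DataNode started here):

   mkdir -p /data/hdfs/current
   cp /hadoop/hdfs/data/current/VERSION /data/hdfs/current/
   chown -R hdfs:hadoop /data/hdfs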

 On 9/16/14, Charles Robertson charles.robert...@gmail.com wrote:
  Hi Susheel,
 
  Thanks for the reply. I'm not entirely sure what you mean.
 
  When I created the new directory on the new volume I simply created an
  empty directory. I see from the existing data node directory that it has
 a
  sub-directory called current containing a file called VERSION.
 
  Your advice is to create the 'current' sub-directory and copy the VERSION
  file across to it without changes? I see it has various guids, and so I'm
  worried about it clashing with the VERSION file in the other data
  directory.
 
  Thanks,
  Charles
 
  On 16 September 2014 10:57, Susheel Kumar Gadalay skgada...@gmail.com
  wrote:
 
  Is it something to do current/VERSION file in data node directory.
 
  Just copy from the existing directory and start.
 
  On 9/16/14, Charles Robertson charles.robert...@gmail.com wrote:
   Hi all,
  
   I am running out of space on a data node, so added a new volume to the
   host, mounted it and made sure the permissions were set OK. Then I
  updated
   the 'DataNode Directories' property in Ambari to include the new path
   (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I
   restarted
   the components with stale configs for that host, but the DataNode
  wouldn't
   come back up, reporting 'connection refused'. When I remove the new
   data
   directory path from the property and restart, it starts fine.
  
   What am I doing wrong?
  
   Thanks,
   Charles
  
 
 



DataNode not recognising available disk space

2014-09-16 Thread Charles Robertson
Hi all,

I've added a new slave node to my cluster with a (single) larger disk size
(100 GB) than on the other nodes. However, Ambari is reporting a total of
8.6 GB of disk space. lsblk correctly reports the disk size.

Does anyone know why this might be? As I understand things, you need to tell
HDFS how much space *not* to use, and it will use what it needs from the rest.
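
The "space *not* to use" setting is dfs.datanode.du.reserved (bytes per
volume that HDFS leaves free). A quick way to compare what is configured
with what the DataNodes actually report (a sketch, run as the hdfs user):

  # capacity and remaining space HDFS reports per DataNode
  hdfs dfsadmin -report

  # reserved space per volume, in bytes
  hdfs getconf -confKey dfs.datanode.du.reserved

  # directories the DataNode is actually using
  hdfs getconf -confKey dfs.datanode.data.dir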

(BTW this is not the same data node in my other question on this list,
Cannot start DataNode after adding new volume.)

Thanks,
Charles


Re: Cannot start DataNode after adding new volume

2014-09-16 Thread Charles Robertson
I've found this in the logs:

2014-09-16 11:00:31,287 INFO  datanode.DataNode
(SignalLogger.java:register(91)) - registered UNIX signal handlers for
[TERM, HUP, INT]
2014-09-16 11:00:31,521 WARN  common.Util (Util.java:stringAsURI(56)) -
Path /hadoop/hdfs/data should be specified as a URI in configuration files.
Please update hdfs configuration.
2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
Path  should be specified as a URI in configuration files. Please update
hdfs configuration.
2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
Path /data/hdfs should be specified as a URI in configuration files. Please
update hdfs configuration.
2014-09-16 11:00:32,277 WARN  datanode.DataNode
(DataNode.java:checkStorageLocations(1941)) - Invalid dfs.datanode.data.dir
/usr/lib/hadoop :
EPERM: Operation not permitted
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:226)
at
org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:629)
at
org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
at
org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:126)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:142)
at
org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1896)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1938)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1920)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1812)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1859)
at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2035)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2059)

/hadoop/hdfs/data is the original default value. /data/hdfs is the path
I have added. All the documentation says it can be a comma-delimited list
of paths, but this log is complaining that it's not a URI? When it's
/hadoop/hdfs/data on its own, it starts fine...?
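
(For what it's worth, the URI messages are only warnings - a plain
comma-delimited list is still accepted - and they disappear if the entries
are written as file:// URIs. A sketch of both points, using the paths above;
the log path is the usual HDP location and may differ on other installs:)

  # URI form of the same two directories:
  #   file:///hadoop/hdfs/data,file:///data/hdfs

  # the line that actually stops startup names the rejected directory:
  grep "Invalid dfs.datanode.data.dir" /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log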

Regards,
Charles

On 16 September 2014 12:08, Charles Robertson charles.robert...@gmail.com
wrote:

 Hi Susheel,

 Tried that - same result. DataNode still not starting.

 Thanks,
 Charles

 On 16 September 2014 11:49, Susheel Kumar Gadalay skgada...@gmail.com
 wrote:

 The VERSION file has to be same across all the data nodes directories.

 So I suggested to copy it as it is using OS command and start data node.

 On 9/16/14, Charles Robertson charles.robert...@gmail.com wrote:
  Hi Susheel,
 
  Thanks for the reply. I'm not entirely sure what you mean.
 
  When I created the new directory on the new volume I simply created an
  empty directory. I see from the existing data node directory that it
 has a
  sub-directory called current containing a file called VERSION.
 
  Your advice is to create the 'current' sub-directory and copy the
 VERSION
  file across to it without changes? I see it has various guids, and so
 I'm
  worried about it clashing with the VERSION file in the other data
  directory.
 
  Thanks,
  Charles
 
  On 16 September 2014 10:57, Susheel Kumar Gadalay skgada...@gmail.com
  wrote:
 
  Is it something to do current/VERSION file in data node directory.
 
  Just copy from the existing directory and start.
 
  On 9/16/14, Charles Robertson charles.robert...@gmail.com wrote:
   Hi all,
  
   I am running out of space on a data node, so added a new volume to
 the
   host, mounted it and made sure the permissions were set OK. Then I
  updated
   the 'DataNode Directories' property in Ambari to include the new path
   (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I
   restarted
   the components with stale configs for that host, but the DataNode
  wouldn't
   come back up, reporting 'connection refused'. When I remove the new
   data
   directory path from the property and restart, it starts fine.
  
   What am I doing wrong?
  
   Thanks,
   Charles
  
 
 





Re: Cannot start DataNode after adding new volume

2014-09-16 Thread Charles Robertson
Hi Samir,

That was it - I changed ownership of the /usr/lib/hadoop dir to hdfs:hadoop
and tried again and the DataNode has started successfully.
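
(For anyone hitting the same error, the fix described above is a one-liner;
the ownership shown is the one that worked here:)

  chown hdfs:hadoop /usr/lib/hadoop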

Thank you!
Charles

On 16 September 2014 13:47, Samir Ahmic ahmic.sa...@gmail.com wrote:

 Hi Charles,

 From the log it looks like the DataNode process doesn't have permission to
 write to the /usr/lib/hadoop dir. Can you check the permissions on
 /usr/lib/hadoop for the user under which the DataNode process is started?
 (Probably the hdfs user, but I'm not sure.)
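
 A couple of quick checks for that (a sketch, assuming the DataNode runs as
 the hdfs user, as guessed above):

   # who owns the directory, and what are its permissions?
   ls -ld /usr/lib/hadoop

   # can the hdfs user actually write there?
   sudo -u hdfs test -w /usr/lib/hadoop && echo writable || echo "not writable"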

 Cheers
 Samir

 On Tue, Sep 16, 2014 at 2:40 PM, Charles Robertson 
 charles.robert...@gmail.com wrote:

 I've found this in the logs:

 2014-09-16 11:00:31,287 INFO  datanode.DataNode
 (SignalLogger.java:register(91)) - registered UNIX signal handlers for
 [TERM, HUP, INT]
 2014-09-16 11:00:31,521 WARN  common.Util (Util.java:stringAsURI(56)) -
 Path /hadoop/hdfs/data should be specified as a URI in configuration files.
 Please update hdfs configuration.
 2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
 Path  should be specified as a URI in configuration files. Please update
 hdfs configuration.
 2014-09-16 11:00:31,523 WARN  common.Util (Util.java:stringAsURI(56)) -
 Path /data/hdfs should be specified as a URI in configuration files. Please
 update hdfs configuration.
 2014-09-16 11:00:32,277 WARN  datanode.DataNode
 (DataNode.java:checkStorageLocations(1941)) - Invalid dfs.datanode.data.dir
 /usr/lib/hadoop :
 EPERM: Operation not permitted
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
 at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:226)
 at
 org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:629)
 at
 org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:467)
 at
 org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:126)
 at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:142)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:1896)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:1938)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1920)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1812)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1859)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2035)
 at
 org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2059)

 /hadoop/hdfs/data is the original default value. /data/hdfs is the path
 I have added. All the documentation says it can be a comma-delimited list
 of paths, but this log is complaining that it's not a URI? When it's
 /hadoop/hdfs/data on its own, it starts fine...?

 Regards,
 Charles

 On 16 September 2014 12:08, Charles Robertson 
 charles.robert...@gmail.com wrote:

 Hi Susheel,

 Tried that - same result. DataNode still not starting.

 Thanks,
 Charles

 On 16 September 2014 11:49, Susheel Kumar Gadalay skgada...@gmail.com
 wrote:

 The VERSION file has to be same across all the data nodes directories.

 So I suggested to copy it as it is using OS command and start data node.

 On 9/16/14, Charles Robertson charles.robert...@gmail.com wrote:
  Hi Susheel,
 
  Thanks for the reply. I'm not entirely sure what you mean.
 
  When I created the new directory on the new volume I simply created an
  empty directory. I see from the existing data node directory that it
 has a
  sub-directory called current containing a file called VERSION.
 
  Your advice is to create the 'current' sub-directory and copy the
 VERSION
  file across to it without changes? I see it has various guids, and so
 I'm
  worried about it clashing with the VERSION file in the other data
  directory.
 
  Thanks,
  Charles
 
  On 16 September 2014 10:57, Susheel Kumar Gadalay 
 skgada...@gmail.com
  wrote:
 
  Is it something to do current/VERSION file in data node directory.
 
  Just copy from the existing directory and start.
 
  On 9/16/14, Charles Robertson charles.robert...@gmail.com wrote:
   Hi all,
  
   I am running out of space on a data node, so added a new volume to
 the
   host, mounted it and made sure the permissions were set OK. Then I
  updated
   the 'DataNode Directories' property in Ambari to include the new
 path
   (comma separated, i.e. '/hadoop/hdfs/data,/data/hdfs'). Next I
   restarted
   the components with stale configs for that host, but the DataNode
  wouldn't
   come back up, reporting 'connection refused'. When I remove the new
   data
   directory path from the property and restart, it starts fine.
  
   What am I doing wrong?
  
   Thanks,
   Charles
  
 
 







Change master node with Ambari?

2014-09-11 Thread Charles Robertson
I'm playing around with a 2-node cluster running Hortonworks Data Platform
2.1, running on AWS EC2 instances. Today I discovered that for some reason
my master node had decided to no longer recognise my private key, so I
could no longer SSH in to the host to work on the project.

In this case it's not really a show-stopper, as I have all the important
stuff saved elsewhere and I am rebuilding the Ambari server node.

My question is, is it possible to change the master node and so preserve
the data in HDFS in this situation?

I could probably prevent this from happening again by having a secondary
authorised key for the host, but I would expect there should be some
mechanism to change the master node. Is there one? I couldn't find a way of
doing it through Ambari, so perhaps it's a more manual process?
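
(The secondary-key idea above is easy to set up in advance - a sketch, with
the key file name purely illustrative:)

  # append a fallback public key to the host's authorized_keys
  cat backup_key.pub >> ~/.ssh/authorized_keys
  chmod 600 ~/.ssh/authorized_keys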

Regards,
Charles


Re: Regular expressions in fs paths?

2014-09-10 Thread Charles Robertson
Hi Georgi,

Thanks for your reply. Won't hadoop fs -ls /tmp/myfiles* return all files
that begin with 'myfiles' in the tmp directory? What I don't understand is
how I can specify a pattern that excludes files ending in '.tmp'. I have
tried using the normal regular expression syntax for this ^(.tmp) but it
tries to match it literally.

Regards,
Charles

On 10 September 2014 13:07, Georgi Ivanov iva...@vesseltracker.com wrote:

  Yes, you can:
 hadoop fs -ls /tmp/myfiles*

 I would recommend first using -ls in order to verify that you are selecting
 the right files.

 #Mahesh: do you need some help doing this?



 On 10.09.2014 13:46, Mahesh Khandewal wrote:

 I want to unsubscribe from this mailing list

 On Wed, Sep 10, 2014 at 4:42 PM, Charles Robertson 
 charles.robert...@gmail.com wrote:

 Hi all,

  Is it possible to use regular expressions in fs commands? Specifically,
 I want to use the copy (-cp) and move (-mv) commands on all files in a
 directory that match a pattern (the pattern being all files that do not end
 in '.tmp').

  Can this be done?

  Thanks,
 Charles






Re: Regular expressions in fs paths?

2014-09-10 Thread Charles Robertson
I solved this in the end by using a shell script (initiated by an Oozie
shell action) to run grep and loop through the results - I didn't have to use
the -v option, as the -e option gives you access to a fuller range of regular
expression functionality.
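
A minimal sketch of that kind of wrapper script (the source and target
paths are illustrative, and for brevity it filters with a simple grep -v
rather than the exact pattern the real script used):

  #!/bin/sh
  # move every file under /tmp/myfiles that does not end in .tmp
  for f in $(hadoop fs -ls /tmp/myfiles | awk '{print $NF}' | grep '^/' | grep -v '\.tmp$'); do
    hadoop fs -mv "$f" /data/incoming/
  done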

Thanks for your help (again!) Rich.

Charles

On 10 September 2014 16:50, Rich Haase rdha...@gmail.com wrote:

 HDFS doesn't support the full range of glob matching you will find in
 Linux.  If you want to exclude all files from a directory listing that meet
 a certain criterion, try doing your listing and using grep -v to exclude the
 matching records.



Re: Map job not finishing

2014-09-06 Thread Charles Robertson
Hi Rich,

Default setup, so presumably one. I opted to add a node rather than change
the number of task trackers and it now runs successfully.

Thank you!
Charles


On 5 September 2014 16:44, Rich Haase rdha...@gmail.com wrote:

 How many tasktrackers do you have set up for your single-node cluster?
  Oozie runs each action as a Java program on an arbitrary cluster node, so
 running a workflow requires a minimum of two tasktrackers.
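
 (On a YARN-based cluster like HDP 2.1 the equivalent check is whether there
 is capacity for both the Oozie launcher and the job it launches - a sketch
 of the commands one might use:)

   # how many NodeManagers are live, and how much capacity they expose
   yarn node -list
   # applications stuck waiting for resources
   yarn application -list -appStates ACCEPTED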


 On Fri, Sep 5, 2014 at 7:33 AM, Charles Robertson 
 charles.robert...@gmail.com wrote:

 Hi all,

 I'm using oozie to run a hive script, but the map job is not completing.
 The tracking page shows its progress as 100%, and there's no warnings or
 errors in the logs, it's just sitting there with a state of 'RUNNING'.

 As best I can make out from the logs, the last statement in the hive
 script has been successfully parsed and it tries to start the command,
 saying launching job 1 of 3. That job is sitting there in the ACCEPTED
 state, but doing nothing.

 This is on a single-node cluster running Hortonworks Data Platform 2.1.
 Can anyone suggest what might be the cause, or where else to look for
 diagnostic information?

 Thanks,
 Charles




 --
 *Kernighan's Law*
 Debugging is twice as hard as writing the code in the first place.
 Therefore, if you write the code as cleverly as possible, you are, by
 definition, not smart enough to debug it.



Map job not finishing

2014-09-05 Thread Charles Robertson
Hi all,

I'm using Oozie to run a Hive script, but the map job is not completing.
The tracking page shows its progress as 100%, and there are no warnings or
errors in the logs; it's just sitting there with a state of 'RUNNING'.

As best I can make out from the logs, the last statement in the Hive script
has been successfully parsed and it tries to start the command, saying
"launching job 1 of 3". That job is sitting there in the ACCEPTED state,
but doing nothing.

This is on a single-node cluster running Hortonworks Data Platform 2.1. Can
anyone suggest what might be the cause, or where else to look for
diagnostic information?

Thanks,
Charles


WebHdfs config problem

2014-08-22 Thread Charles Robertson
Hi all,

I've installed HDP 2.1 on CentOS 6.5, but I'm having a problem with
WebHDFS. When I try to use the file browser or design an oozie workflow in
Hue, I get a WebHdfs error. Attached is the error for the filebrowser.

It appears to be some kind of permissions error, but I have HDFS security
turned off, and WebHDFS is enabled.

I've followed all the Hue setup instructions I can find and made sure all
the properties are set correctly.
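
(A quick way to test WebHDFS independently of Hue is to hit it with curl
from the Hue host - a sketch; the host, port, user and path below are the
ones that appear in the error attached underneath, which shows Hue trying
localhost:50070:)

  curl -i "http://localhost:50070/webhdfs/v1/user/admin?op=GETFILESTATUS&user.name=hue&doas=admin"
  # if this connection is refused, point Hue's webhdfs_url (hue.ini) at the
  # NameNode's actual hostname instead of localhost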

Can anyone shed some light?

Thanks,
Charles
WebHdfsException at /filebrowser/
HTTPConnectionPool(host='localhost', port=50070): Max retries exceeded with 
url: /webhdfs/v1/user/admin?op=GETFILESTATUS&user.name=hue&doas=admin (Caused 
by class 'socket.error': [Errno 111] Connection refused)
Request Method: GET
Request URL:http://[MyIP]:8000/filebrowser/
Django Version: 1.2.3
Exception Type: WebHdfsException
Exception Value:
HTTPConnectionPool(host='localhost', port=50070): Max retries exceeded with 
url: /webhdfs/v1/user/admin?op=GETFILESTATUS&user.name=hue&doas=admin (Caused 
by class 'socket.error': [Errno 111] Connection refused)
Exception Location: 
/usr/lib/hue/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py in _stats, line 209
Python Executable:  /usr/bin/python2.6
Python Version: 2.6.6
Python Path:
['/usr/lib/hue/build/env/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg',
 '/usr/lib/hue/build/env/lib/python2.6/site-packages/pip-0.6.3-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/Babel-0.9.6-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/BabelDjango-0.2.2-py2.6.egg',
 '/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/Mako-0.7.2-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/Markdown-2.0.3-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/MarkupSafe-0.9.3-py2.6-linux-x86_64.egg',
 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/MySQL_python-1.2.3c1-py2.6-linux-x86_64.egg',
 '/usr/lib/hue/build/env/lib/python2.6/site-packages/Paste-1.7.2-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/PyYAML-3.09-py2.6-linux-x86_64.egg',
 '/usr/lib/hue/build/env/lib/python2.6/site-packages/Pygments-1.3.1-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/South-0.7-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/Spawning-0.9.6-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/avro-1.5.0-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/configobj-4.6.0-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/django_auth_ldap-1.0.7-py2.6.egg',
 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/django_extensions-0.5-py2.6.egg',
 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/django_nose-0.5-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/elementtree-1.2.6_20050316-py2.6.egg',
 '/usr/lib/hue/build/env/lib/python2.6/site-packages/enum-0.4.4-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/eventlet-0.9.14-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/greenlet-0.3.1-py2.6-linux-x86_64.egg',
 '/usr/lib/hue/build/env/lib/python2.6/site-packages/happybase-0.6-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/kerberos-1.1.1-py2.6-linux-x86_64.egg',
 '/usr/lib/hue/build/env/lib/python2.6/site-packages/lockfile-0.8-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/lxml-3.3.5-py2.6-linux-x86_64.egg',
 '/usr/lib/hue/build/env/lib/python2.6/site-packages/moxy-1.0.0-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/pam-0.1.3-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/pyOpenSSL-0.13-py2.6-linux-x86_64.egg',
 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/pycrypto-2.6-py2.6-linux-x86_64.egg',
 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/pysqlite-2.5.5-py2.6-linux-x86_64.egg',
 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/python_daemon-1.5.1-py2.6.egg',
 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/python_ldap-2.3.13-py2.6-linux-x86_64.egg',
 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/pytidylib-0.2.1-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/requests-2.2.1-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/requests_kerberos-0.4-py2.6.egg',
 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/sasl-0.1.1-py2.6-linux-x86_64.egg',
 '/usr/lib/hue/build/env/lib/python2.6/site-packages/sh-1.08-py2.6.egg', 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/simplejson-2.0.9-py2.6-linux-x86_64.egg',
 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/threadframe-0.2-py2.6-linux-x86_64.egg',
 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/thrift-0.9.0-py2.6-linux-x86_64.egg',
 
'/usr/lib/hue/build/env/lib/python2.6/site-packages/urllib2_kerberos-0.1.6-py2.6.egg',
 '/usr/lib/hue/build/env/lib/python2.6/site-packages/xlrd-0.9.0-py2.6.egg',