RE: Unsubscribe

2015-04-09 Thread Liaw, Huat (MTO)
What do you want to unsubscribe from?

From: Rajeev Yadav [mailto:rajeya...@gmail.com]
Sent: April 9, 2015 1:02 PM
To: user@hadoop.apache.org
Subject: Unsubscribe

Unsubscribe

--
Warm Regards,
Rajeev Yadav



Unsubscribe

2015-04-09 Thread Rajeev Yadav
Unsubscribe

-- 
Warm Regards,
Rajeev Yadav


Unsubscribe

2015-04-09 Thread Ram



Not able to run more than one map task

2015-04-09 Thread Amit Kumar
Hi All,
We recently started working on Hadoop. We have set up Hadoop in
pseudo-distributed mode along with Oozie.
Every developer has set it up on his laptop. The problem is that we are not
able to run more than one map task concurrently on our laptops. The resource
manager is not allowing more than one task on our machines.
My task completes if I submit it without Oozie. Oozie requires one map
task for its own functioning, and the actual task that Oozie submits does
not start.
Here is my configuration:
-- Hadoop setup in pseudo-distributed mode
-- Hadoop version: 2.6
-- Oozie version: 4.0.1
Regards,
Amit
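
For anyone hitting the same wall: a likely cause in pseudo-distributed mode
is that the Oozie launcher occupies one YARN container, and with default
memory settings the single NodeManager often has room for only that one
container, so the map task Oozie submits never gets scheduled. A minimal
yarn-site.xml sketch that leaves room for at least two small containers; the
property names are standard Hadoop 2.x settings, but the values are
illustrative and assume a laptop with roughly 8 GB of RAM:

  <!-- yarn-site.xml: let the NodeManager hand out more than one container -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>  <!-- total memory YARN may allocate on this node -->
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>   <!-- smallest container the scheduler will grant -->
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>  <!-- cap a single container well below the total -->
  </property>

With 4096 MB available and map tasks requesting 1024 MB each
(mapreduce.map.memory.mb in mapred-site.xml), the Oozie launcher and the
actual job can run side by side.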
  

Re: Unable to load file from local to HDFS cluster

2015-04-09 Thread sandeep vura
Hi Yanghaogn,

Sure. We were unable to load the file from the local filesystem to HDFS. It
was failing with a DFSOutputStream connection-refused exception, which means
packets were not travelling properly between the namenode and the datanodes.
In addition, whenever we started the cluster, our datanodes did not start
properly and threw connection-closed exceptions.

Our Hadoop web UI was also opening very slowly, and SSH connections were slow
as well. We finally changed our network ports, re-checked the performance of
the cluster, and it now works well.

The issue was a faulty network port on the namenode server.

Regards,
Sandeep.v
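
For readers with similar symptoms, a few quick checks help separate a network
fault like this one from a Hadoop misconfiguration. The addresses and the
datanode transfer port 50010 are taken from this thread; the interface name
eth0 is an assumption:

  # Can this machine reach each datanode's data-transfer port (50010)?
  for dn in 192.168.2.81 192.168.2.82 192.168.2.83 192.168.2.84 192.168.2.85; do
    nc -z -w 2 "$dn" 50010 && echo "$dn OK" || echo "$dn UNREACHABLE"
  done

  # Look for a downgraded or flapping link (assumes the interface is eth0)
  ethtool eth0 | grep -E 'Speed|Duplex|Link detected'

  # Dropped-packet and error counters for the interface
  ip -s link show eth0

If the nc probes fail intermittently and for a different datanode on each
run, as described below, the problem is beneath Hadoop: cabling, ports, or
the switch.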


On Thu, Apr 9, 2015 at 12:30 PM, 杨浩  wrote:

> Root cause: a network-related issue?
> Can you tell us about it in more detail? Thank you
>
> 2015-04-09 13:51 GMT+08:00 sandeep vura :
>
>> Our issue has been resolved.
>>
>> Root cause: Network related issue.
>>
>> Thanks to everyone who spent time replying to my questions.
>>
>> Regards,
>> Sandeep.v
>>
>> On Thu, Apr 9, 2015 at 10:45 AM, sandeep vura 
>> wrote:
>>
>>> Can anyone give solution for my issue?
>>>
>>> On Thu, Apr 9, 2015 at 12:48 AM, sandeep vura 
>>> wrote:
>>>
 Exactly, but every time it picks one at random. Our datanodes are
 192.168.2.81, 192.168.2.82, 192.168.2.83, 192.168.2.84, 192.168.2.85

 Namenode: 192.168.2.80

 If I restart the cluster, the next time it will show 192.168.2.81:50010
 connection closed

 On Thu, Apr 9, 2015 at 12:28 AM, Liaw, Huat (MTO) wrote:
>  You cannot start 192.168.2.84:50010 .... closed by ((192.168.2.x
> -datanode))
>
>
>
> *From:* sandeep vura [mailto:sandeepv...@gmail.com]
> *Sent:* April 8, 2015 2:39 PM
>
> *To:* user@hadoop.apache.org
> *Subject:* Re: Unable to load file from local to HDFS cluster
>
>
>
> We have been using this setup for a very long time. We were able to run all
> the jobs successfully, but something suddenly went wrong with the namenode.
>
>
>
> On Thu, Apr 9, 2015 at 12:06 AM, sandeep vura 
> wrote:
>
> I have also noticed another issue when starting the hadoop cluster with the
> start-all.sh command.
>
>
>
> The namenode and datanode daemons are starting, but sometimes one of the
> datanodes drops the connection and shows the message connection closed by
> ((192.168.2.x -datanode)). Every time the hadoop cluster is restarted, the
> affected datanode keeps changing.
>
>
>
> For example: the 1st time I start the hadoop cluster - 192.168.2.1 -
> connection closed
>
> The 2nd time I start the hadoop cluster - 192.168.2.2 - connection closed.
> At this point 192.168.2.1 starts successfully without any errors.
>
>
>
> I couldn't figure out the exact issue. Does it relate to the network or to
> the Hadoop configuration?
>
>
>
>
>
>
>
> On Wed, Apr 8, 2015 at 11:54 PM, Liaw, Huat (MTO) <
> huat.l...@ontario.ca> wrote:
>
> hadoop fs -put <localsrc> <dst>   Copies from the local filesystem to
> HDFS
>
>
>
> *From:* sandeep vura [mailto:sandeepv...@gmail.com]
> *Sent:* April 8, 2015 2:24 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Unable to load file from local to HDFS cluster
>
>
>
> Sorry Liaw, I tried the same command but it didn't resolve the issue.
>
>
>
> Regards,
>
> Sandeep.V
>
>
>
> On Wed, Apr 8, 2015 at 11:37 PM, Liaw, Huat (MTO) <
> huat.l...@ontario.ca> wrote:
>
> Should be hadoop dfs -put
>
>
>
> *From:* sandeep vura [mailto:sandeepv...@gmail.com]
> *Sent:* April 8, 2015 1:53 PM
> *To:* user@hadoop.apache.org
> *Subject:* Unable to load file from local to HDFS cluster
>
>
>
> Hi,
>
>
>
> When loading a file from the local filesystem to the HDFS cluster using the
> command below
>
>
>
> hadoop fs -put sales.txt /sales_dept.
>
>
>
> I am getting the following exception. Please let me know how to resolve
> this issue as soon as possible. Attached are the logs displayed on the
> namenode.
>
>
>
> Regards,
>
> Sandeep.v
>
>
>
>
>
>
>


>>>
>>
>
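
A side note on the command confusion above: "hadoop fs -put" and
"hadoop dfs -put" both copy a local file into HDFS, but "hadoop dfs" is
deprecated in Hadoop 2.x in favour of "hdfs dfs". A short sketch using the
file name from the thread, assuming the target directory may not exist yet:

  # create the target directory first, then copy the local file into HDFS
  hdfs dfs -mkdir -p /sales_dept
  hdfs dfs -put sales.txt /sales_dept

Neither form would have helped here, though, since the root cause turned out
to be a network fault rather than command syntax.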


RE: cassandra + tableau

2015-04-09 Thread Mich Talebzadeh
Tableau provides a generic ODBC driver connection. There is one available in
the latest version of Tableau (8.2?) for Hive. You can try the ODBC driver
connection, or reach out to Progress Software (https://www.progress.com/) to
see if they have such a driver.

 

They adapted/customised one for us specifically for Oracle TimesTen. Remember
that Tableau is ODBC 2.3 compliant.

 

HTH

 

 

Mich Talebzadeh

 

http://talebzadehmich.wordpress.com

 

Author of the book "A Practitioner’s Guide to Upgrading to Sybase ASE 15",
ISBN 978-0-9563693-0-7.

Co-author of "Sybase Transact SQL Guidelines Best Practices", ISBN
978-0-9759693-0-4.

Publications due shortly:

Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and 
Coherence Cache

Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume one 
out shortly

 

NOTE: The information in this email is proprietary and confidential. This
message is for the designated recipient only; if you are not the intended
recipient, you should destroy it immediately. Any information in this message
shall not be understood as given or endorsed by Peridale Ltd, its subsidiaries 
or their employees, unless expressly so stated. It is the responsibility of the 
recipient to ensure that this email is virus free, therefore neither Peridale 
Ltd, its subsidiaries nor their employees accept any responsibility.

 

From: siva kumar [mailto:siva165...@gmail.com] 
Sent: 09 April 2015 11:13
To: user@hadoop.apache.org
Subject: cassandra + tableau

 

Hi Folks,

  Can anyone suggest an open-source connector for connecting
Tableau to a Cassandra database?

 

 

Thanks,

sivakumar.c



RE: cassandra + tableau

2015-04-09 Thread Ravi Shankar
Look at Facebook’s Presto.

 

Best regards,

Nair
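
For context, what makes Presto relevant here: its Cassandra connector can
read Cassandra tables and expose them over Presto's JDBC/ODBC interfaces,
which Tableau can consume. A minimal sketch of the catalog file in a Presto
installation; the contact points below are placeholders:

  # etc/catalog/cassandra.properties - registers Cassandra as a Presto catalog
  connector.name=cassandra
  # replace with the addresses of your Cassandra nodes
  cassandra.contact-points=cassandra-node1,cassandra-node2
  # default CQL native-protocol port
  cassandra.native-protocol-port=9042

Tables then show up in Presto as cassandra.<keyspace>.<table>.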

 

From: siva kumar [mailto:siva165...@gmail.com] 
Sent: Thursday, April 9, 2015 6:13 AM
To: user@hadoop.apache.org
Subject: cassandra + tableau

 

Hi Folks,

  Can anyone suggest an open-source connector for connecting
Tableau to a Cassandra database?

 

 

Thanks,

sivakumar.c



cassandra + tableau

2015-04-09 Thread siva kumar
Hi Folks,
  Can anyone suggest an open-source connector for connecting
Tableau to a Cassandra database?


Thanks,
sivakumar.c


Re: Unable to load file from local to HDFS cluster

2015-04-09 Thread 杨浩
Root cause: a network-related issue?
Can you tell us about it in more detail? Thank you

2015-04-09 13:51 GMT+08:00 sandeep vura :

> Our issue has been resolved.
>
> Root cause: Network related issue.
>
> Thanks to everyone who spent time replying to my questions.
>
> Regards,
> Sandeep.v