Hi Amjad,

Is there any reason you cannot upgrade to Hadoop 2.x? Hadoop 2.x has made many 
improvements over the 1.x line, and the two are source compatible, so your MR 
jobs will be unaffected as long as you recompile them against 2.x.
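
If you do use Maven, the recompile is usually just a matter of bumping one 
dependency; something like the following (the version number here is 
illustrative, pick whichever 2.x release matches your cluster):

```xml
<!-- Illustrative: pulls in the Hadoop 2.x client classes your jobs compile against -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.4.0</version>
</dependency>
```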

The code we pointed you at assumes that all of the Hadoop 2.x classes are 
present on your classpath. If you are not using Maven or some other build 
system and would like to add the jars manually, you will probably have a tough 
time resolving conflicts, so I would advise against it.
If you still want to add the jars manually, my best guess would be to look under
<YOUR_HADOOP_INSTALLATION_DIR>/libexec/share/hadoop/
On a 2.x install, running "hadoop classpath" will print the exact classpath the 
hadoop scripts use, which can help you check what you are missing.
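
For reference, wiring up the bolt from the storm-hdfs README looks roughly 
like the sketch below; the fs URL, output path, delimiter, and rotation 
settings are all placeholders you would replace with your own values:

```java
// Sketch adapted from the storm-hdfs README; all concrete values are placeholders.
import org.apache.storm.hdfs.bolt.HdfsBolt;
import org.apache.storm.hdfs.bolt.format.DefaultFileNameFormat;
import org.apache.storm.hdfs.bolt.format.DelimitedRecordFormat;
import org.apache.storm.hdfs.bolt.format.FileNameFormat;
import org.apache.storm.hdfs.bolt.format.RecordFormat;
import org.apache.storm.hdfs.bolt.rotation.FileRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy;
import org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy.Units;
import org.apache.storm.hdfs.bolt.sync.CountSyncPolicy;
import org.apache.storm.hdfs.bolt.sync.SyncPolicy;

public class HdfsBoltExample {
    public static HdfsBolt makeBolt() {
        // Serialize tuple fields with "|" as the delimiter.
        RecordFormat format = new DelimitedRecordFormat().withFieldDelimiter("|");
        // Sync the filesystem after every 1000 tuples.
        SyncPolicy syncPolicy = new CountSyncPolicy(1000);
        // Rotate output files once they reach 5 MB.
        FileRotationPolicy rotationPolicy = new FileSizeRotationPolicy(5.0f, Units.MB);
        // Write files under /foo/ on HDFS.
        FileNameFormat fileNameFormat = new DefaultFileNameFormat().withPath("/foo/");
        return new HdfsBolt()
                .withFsUrl("hdfs://localhost:8020")
                .withFileNameFormat(fileNameFormat)
                .withRecordFormat(format)
                .withRotationPolicy(rotationPolicy)
                .withSyncPolicy(syncPolicy);
    }
}
```

Attach it to your topology with TopologyBuilder.setBolt as you would any other 
bolt; with storm-hdfs and hadoop-client on your build path, the rest of the 
jars come in transitively.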

Thanks
Parth
On Jul 18, 2014, at 10:56 AM, amjad khan <amjadkhan987...@gmail.com> wrote:

> Thanks for your reply, Taylor. I'm using Hadoop 1.0.2. Can you suggest an 
> alternative way to connect to Hadoop?
> 
> 
> 
> On Fri, Jul 18, 2014 at 8:45 AM, P. Taylor Goetz <ptgo...@gmail.com> wrote:
> What version of Hadoop are you using? Storm-hdfs requires Hadoop 2.x.
> 
> - Taylor
> 
> On Jul 18, 2014, at 6:07 AM, amjad khan <amjadkhan987...@gmail.com> wrote:
> 
>> Thanks for your help, Parth.
>> 
>> When I try to run the topology that writes data to HDFS, it throws the 
>> exception Class Not Found: 
>> org.apache.hadoop.client.hdfs.HDFSDataOutputStream$SyncFlags
>> Can anyone tell me which jars are needed to run code that writes data to 
>> HDFS? Please list all the required jars.
>> 
>> 
>> On Wed, Jul 16, 2014 at 10:46 AM, Parth Brahmbhatt 
>> <pbrahmbh...@hortonworks.com> wrote:
>> You can use 
>> 
>> https://github.com/ptgoetz/storm-hdfs
>> 
>> It supports writing to HDFS from both Storm bolts and Trident states. 
>> Thanks
>> Parth
>> 
>> On Jul 16, 2014, at 10:41 AM, amjad khan <amjadkhan987...@gmail.com> wrote:
>> 
>>> Can anyone provide code for a bolt that writes its data to HDFS? Kindly 
>>> tell me the jars required to run that bolt.
>>> 
>>> 
>>> On Mon, Jul 14, 2014 at 2:33 PM, Max Evers <mcev...@gmail.com> wrote:
>>> Can you expand on your use case? What is the query selecting on? Is the 
>>> column you are querying on indexed? Do you really need to scan the entire 
>>> 20 GB every 20 ms?
>>> 
>>> On Jul 14, 2014 6:39 AM, "amjad khan" <amjadkhan987...@gmail.com> wrote:
>>> I built a Storm topology whose spout fetches data from MySQL with a 
>>> select query. The query is fired every 30 ms, but because the table is 
>>> larger than 20 GB the query takes more than 10 seconds to execute, so 
>>> this approach is not working. What are the possible alternatives in this 
>>> situation? Kindly reply as soon as possible.
>>> 
>>> Thanks, 
>>> 
>> 
>> 
>> CONFIDENTIALITY NOTICE
>> NOTICE: This message is intended for the use of the individual or entity to 
>> which it is addressed and may contain information that is confidential, 
>> privileged and exempt from disclosure under applicable law. If the reader of 
>> this message is not the intended recipient, you are hereby notified that any 
>> printing, copying, dissemination, distribution, disclosure or forwarding of 
>> this communication is strictly prohibited. If you have received this 
>> communication in error, please contact the sender immediately and delete it 
>> from your system. Thank You.
>> 
> 
> 


