your help !!
--
Abhishek...
should
be edited.
It will be a great help if I get any guidance or similar code to study for
it.
--
Abhishek Gupta
PhD Scholar
IIITM Gwalior
Hi Raju,
This could be an issue with the delimiters specified in your CSV file. Do
you find any extra newline characters? Please check for line breaks.
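For example, a rough sketch (the file name data.csv and the comma delimiter
are placeholders; adjust to your file):

  # Flag rows whose field count differs from the header row's
  awk -F',' 'NR==1 {n=NF} NF!=n {print NR ": " NF " fields"}' data.csv

  # Make hidden carriage returns (^M) and line ends ($) visible (GNU cat)
  cat -A data.csv | head

A row that was split by a stray line break will show up with too few fields.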
I hope this gives some clue.
Regards,
Abhishek Singh
23singhabhis...@gmail.com
On Tue, May 3, 2016 at 2:23 AM, Raju Rathi wrote:
> He
Hi Supreet,
Sorry to digress a bit, but are you looking for something similar to the
timeout command in GNU coreutils?
Check this link below:
http://unix.stackexchange.com/questions/43340/how-to-introduce-timeout-for-shell-scripting
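For instance, a minimal sketch (long_job.sh is a hypothetical script):

  # Kill the job if it runs longer than 30 seconds; timeout exits 124 on expiry
  timeout 30s ./long_job.sh
  if [ $? -eq 124 ]; then
      echo "long_job.sh timed out"
  fi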
Thanks,
Abhishek Singh
On Thu, Apr 14, 2016 at 2:40 AM, Supreeth wrote:
>
Hi Niranjan,
Glad to hear your issue was resolved. Actually, upgrading was a wise
decision.
Thanks,
Abhishek
On Monday 02 November 2015 03:59 PM, Niranjan Subramanian wrote:
Hi Abhishek,
Thanks for the pointer letting me know there is an issue going from
2.2.0 to 2.5.1; I just upgraded to
Timeout or set higher socket timeouts in the Hadoop config.
This is a configuration issue.
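For reference, a sketch of the knobs I mean (values are in milliseconds; the
defaults quoted are from the Hadoop 2.x line, so double-check your release):

  # See whether the timeouts are set at all (no output means defaults apply)
  grep -B1 -A1 'socket' "$HADOOP_CONF_DIR/hdfs-site.xml"

  # Keys to raise in hdfs-site.xml:
  #   dfs.client.socket-timeout           (default 60000)
  #   dfs.datanode.socket.write.timeout   (default 480000)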
Looking forward to hearing from you.
Thanks,
Abhishek Singh
On Sunday 01 November 2015 02:52 PM, Niranjan Subramanian wrote:
Bumping this. Any idea? What's wrong here?
On Fri, Oct 30, 2015 at 9:50 PM,
Hi,
My name is Abhishek Guhathakurta; I am a Hadoop engineer at Dunnhumby.
I was setting up a five-node development cluster. After finishing the
installation and starting the cluster, I am stuck with Hive Metastore
Canary Health BAD.
Could you help me solve this issue?
And also I have
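A first diagnostic sketch for the canary failure above (the log path is an
assumption based on a CDH-style layout; adjust to your install):

  # Is the metastore actually listening on its default port (9083)?
  netstat -tlnp | grep 9083

  # Tail the metastore log for the underlying exception
  tail -n 50 /var/log/hive/*.log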
Thank you Partho, Nagaraj, Sandesh, and Kishore for your insights.
On Wed, Sep 9, 2015 at 7:25 AM, Nagaraj Chandrashekar <
nchandrashe...@innominds.com> wrote:
> Hello Abhishek,
>
> I think you may find this white paper useful. This document talks about
> offloading Te
connectors which
you are describing, but how about replacing an ETL tool?
Any links would be very welcome.
Thanks once again.
Abhishek
On Tue, Sep 8, 2015 at 9:28 AM, Krishna Kishore Bonagiri <
write2kish...@gmail.com> wrote:
> Abhishek,
>
> Are you looking to load your data into
I appreciate your creativity, Publius, but it is "Proof of Concept", as
mentioned by Marco. :-)
Thanks!
Regards,
Abhishek Singh
On Jan 4, 2015 1:55 AM, "Marco Shaw" wrote:
> Proof Of Concept
>
>
>
> On Jan 3, 2015, at 4:12 PM, Publius wrote:
>
> Wha
has enough
Stack Overflow content, and later choose your own problem domain (healthcare,
demographics, geography, music, web logs, etc.).
This is how I am trying to teach myself.
Hope it gives you a rough picture.
Regards,
Abhishek Singh
On Fri, Jan 2, 2015 at 11:21 PM, Gotomy PC wrote:
> Hi,
>
!
Regards,
Abhishek Singh
On Dec 28, 2014 3:52 AM, "Anil Jagtap" wrote:
> Dear All,
>
> Just wanted to know if there is a way to copy multiple files using hadoop
> fs -put.
>
> Instead of specifying individual names, I'd provide wildcards and the respective
> files s
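For what it is worth, the local shell expands the glob before hadoop fs -put
sees it, so this already works; a sketch with made-up paths:

  # -put accepts multiple sources when the last argument is an HDFS directory
  hadoop fs -put part-*.txt /user/anil/input/

  # Same idea for a whole directory of logs
  hadoop fs -put /data/logs/*.log /user/anil/logs/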
Hello,
Please give your insights on the above error (Error: Could not find or
load main class org.apache.hadoop.fs.FsShell). It was working fine, but
now hadoop fs -ls / is throwing this error.
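A quick diagnostic sketch (FsShell lives in hadoop-common, so a broken
classpath or environment is the usual suspect):

  # Check that the hadoop-common jar is on the generated classpath
  hadoop classpath | tr ':' '\n' | grep hadoop-common

  # Verify the environment still points at the installation
  echo "$HADOOP_HOME"
  echo "$HADOOP_CONF_DIR"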
Thanks in advance.
Regards,
Abhishek Singh
me if this isn't the right place to ask.
Thanks in Advance,
Abhishek Singh
On Wed, Dec 3, 2014 at 11:52 AM, adarsh deshratnam <
adarsh.deshrat...@gmail.com> wrote:
> Hi Eric,
> Welcome to the Hadoop user list. It's an active list.
>
>
> Thanks,
> Adarsh D
>
>
shown below:
$ hive --service metastore &
$ hive --service hiveserver &
-Abhishek
On Wed, May 21, 2014 at 4:15 PM, Mohammad Tariq wrote:
> Could you please show me your code?
>
> Warm regards,
> Mohammad Tariq
> cloudfront.blogspot.com <http://cloudfront.b
Hi,
While copying a file from HDFS, the file permissions are getting changed; my
assumption was that permissions would be retained during the copy. Is this
behavior correct?
[abhishek@int019 ~]$ hdfs dfs -ls
drwxr-xr-x - abhishek abhishek 0 2013-06-06 10:14 in-dir1
[abhishek@int019
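If your release supports it, the -p flag preserves status on copy; a sketch
(check 'hdfs dfs -help cp' first, since older fs shells lack -p):

  # -p keeps timestamps, ownership and permissions
  hdfs dfs -cp -p /user/abhishek/in-dir1 /user/abhishek/in-dir1.bak
  hdfs dfs -get -p /user/abhishek/in-dir1 /tmp/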
The blog indicates Trevni is giving way to Parquet, and there will be no need
for Trevni any more. Let us know if that is an incorrect interpretation.
- Original Message -
From: "Dmitriy Ryaboy"
To: "pig-u...@hadoop.apache.org"
Sent: Wednesday, March 13, 2013 10:25:04 AM
Subject: R
start
with, but I may be wrong.
-abhishek
From: jeba earnest <jebaearn...@yahoo.com>
Reply-To: user@hadoop.apache.org, jeba earnest <jebaearn...@yahoo.com>
Date: Wednesday,
You will incur the cost of MapReduce across all nodes in your cluster anyway.
I am not sure you will get enough of a speed advantage.
HBase may help you get close to what you are looking for, but that won't be
MapReduce.
Thanks,
Abhishek
From: jamal sasha [mailto:jamalsha...@gmail.com]
Sent: Monday
Hi Mohammed,
Thanks for sharing, but he already sold it.
Regards
Abhi
Sent from my iPhone
On Oct 18, 2012, at 9:03 PM, Mohamed Riadh Trad wrote:
> Hope it helps,
>
> Bests
>
> On Oct 19, 2012, at 02:53, Abhishek wrote:
>
>> Hi all,
>>
>> Sorry for
Hi all,
Sorry for asking this here. Does anyone have an extra pass for the Strata
conference? I would like to buy one.
Please email me offline if you have an extra pass.
Regards
Abhi
Check this out:
http://www.symantec.com/connect/articles/getting-hang-iops-v13#a12
Maybe this helps. I think their RAID configuration or striping is contributing
to it. Just my guess!
Thanks,
Abhishek
From: Jitendra Kumar Singh [mailto:jksingh26...@gmail.com]
Sent: Thursday, October 18, 2012
A little better than plain scraping: use lynx.
At least you don't have to parse HTML.
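A one-line sketch (the URL is a placeholder):

  # Render the page to plain text; -nolist drops the numbered link footnotes
  lynx -dump -nolist http://example.com/page > page.txt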
Thanks,
Abhishek
-Original Message-
From: Patai Sangbutsarakum [mailto:silvianhad...@gmail.com]
Sent: Thursday, October 18, 2012 2:47 PM
To: user@hadoop.apache.org
Subject: i am about to s
; the SAN. Could you please clarify?
Any way you can share your cluster size?
Thanks
Abhishek
Sent from my iPad with iMstakes
On Oct 18, 2012, at 7:41, "Tom Deutsch" <tdeut...@us.ibm.com> wrote:
Agreed Luca, we do this to support existing customers that have requested
Thanks
Abhishek
Sent from my iPad with iMstakes
On Oct 18, 2012, at 6:59, "Michael Segel" <michael_se...@hotmail.com> wrote:
I haven't played with a NetApp box, but the way it has been explained to me is
that your SAN appears as if it's direct-attached storage.
Its
fault-tolerance when its controller(s) fail.
Thanks,
Abhishek
From: Mohamed Riadh Trad [mailto:mohamed.t...@inria.fr]
Sent: Wednesday, October 17, 2012 6:37 AM
To: user@hadoop.apache.org
Subject: Re: HDFS using SAN
Back up your data!
On Oct 17, 2012, at 15:25, Kevin O'dell wrote:
Yo
Tom
Do you mean you are using GPFS instead of HDFS? Also, if you can share, are you
deploying it as a DAS setup or over a SAN?
Thanks,
Abhishek
From: Tom Deutsch [mailto:tdeut...@us.ibm.com]
Sent: Wednesday, October 17, 2012 6:31 AM
To: user
Subject: Re: HDFS using SAN
And of course IBM has
mappers.
So what I am gathering is: although storing data over a SAN is possible for a
Hadoop installation, map-shuffle-reduce may not be the best way to process data
in that environment. Is this conclusion correct?
The 3-way replication and RAID suggestions are great.
Thanks,
Abhishek
From: lo
a SAN for data storage in HDFS, I would love to receive your views.
Thanks,
Abhishek
work.
I am interested to know if anyone else has tried an alternative method to port
Weka algorithms to Hadoop.
Thanks!
With Regards,
Abhishek S
On Oct 16, 2012, at 7:16 PM, Rajesh Nikam wrote:
> Hi,
>
> I was looking for logistic regression algorithms on hadoop.
> mahout is one goo
Hi Vinutha,
Your namenode is not formatted.
Did you try this:
hadoop namenode -format
What are the permissions on your
dfs.name.dir directory?
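A sketch of what to check (the directory path is whatever your property
points at):

  # Locate dfs.name.dir, then inspect the directory it names
  grep -A1 'dfs.name.dir' "$HADOOP_CONF_DIR/hdfs-site.xml"
  ls -ld /path/from/that/property   # must be writable by the namenode user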
Regards
Abhi
Sent from my iPhone
On Oct 12, 2012, at 3:30 AM, Vinutha Magal Shreenath
wrote:
> Hello,
>
> I'm just starting out with Hadoop.
>
Hi Yogesh,
Hope this helps; a consolidated sketch follows the steps below.
To remove nodes from the cluster:
1. Add the network addresses of the nodes to be decommissioned to the
exclude file. Do not update the include file at this point.
2. Update the namenode with the new set of permitted datanodes with
this command:
% hadoop dfsadmin -r
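Picking up where the excerpt cuts off, a consolidated sketch (host name and
exclude-file path are examples; dfs.hosts.exclude must already name that file):

  # 1. Add the node(s) to decommission to the exclude file
  echo "datanode03.example.com" >> /etc/hadoop/conf/dfs.exclude

  # 2. Tell the namenode to re-read its host lists; decommissioning begins
  hadoop dfsadmin -refreshNodes

  # 3. Wait until the nodes report Decommissioned before shutting them down
  hadoop dfsadmin -report | grep -B2 'Decommission Status'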
When the input to the mapper is a key/value pair, the key is the byte
offset of the current line within the file.
So maybe we can check whether that byte offset meets your criterion before
doing the map work.
Thank you!
With Regards,
Abhishek S
On Mon, Sep 10, 2012 at 5:04 PM, Michael Segel wrote
which is halted :) A
detailed answer will be highly appreciated.
Thank you!
With Regards,
Abhishek S
Hi Anand,
What are the permissions on the dfs.name.dir directory set in hdfs-site.xml?
Regards
Abhishek
Sent from my iPhone
On Aug 9, 2012, at 8:41 AM, anand sharma wrote:
> yea Tariq! It's a fresh installation, I'm doing it for the first time; hope
> someone will know the error code a