Hello All,
I have installed the Quest Data Connector for Oracle, but it throws an error
while importing data using Sqoop.
I am able to import the same data from Oracle when I disable the Quest Data
Connector. I have copied the debug logs.
I didn't find any blog post about resolving this issue.
Please let me know.
Yes, there can be loops in the graph.
On Fri, Jun 26, 2015 at 9:09 AM, Harshit Mathur mathursh...@gmail.com
wrote:
Are there loops in your graph?
On Thu, Jun 25, 2015 at 10:39 PM, Ravikant Dindokar
ravikant.i...@gmail.com wrote:
Hi Hadoop user,
I have a file containing one line for each
A text file with tab-separated values.
On Fri, Jun 26, 2015 at 7:35 PM, Krishna Kalyan krishnakaly...@gmail.com
wrote:
What is the file format?
Thanks,
Krishna
On 26 Jun 2015 7:33 pm, Ravikant Dindokar ravikant.i...@gmail.com
wrote:
Hi Hadoop User,
I am processing a 23 GB file on 21 nodes.
You need to show the driver class as well. Are you using TextInputFormat?
Are you aware that this standard input format takes as its value the
complete line (up to the newline separator)? The key in that case is the
byte offset in the file, and definitely not the number you assume it will be.
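A minimal mapper sketch illustrating the point (the class name, output types and
the tab-split parsing are illustrative, assuming the new org.apache.hadoop.mapreduce
API with TextInputFormat):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// With TextInputFormat, 'key' is the byte offset of the line within the file,
// not a line number; the entire line arrives in 'value'.
public class TabLineMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split("\t"); // tab-separated values
        // key.get() is unique per line within a file but NOT consecutive,
        // so identifiers must be parsed from 'fields', not taken from 'key'.
        if (fields.length >= 2) {
            context.write(new Text(fields[0]), new Text(fields[1]));
        }
    }
}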
Hi,
I have this map class that accepts input files with a key of
LongWritable and a value of Text.
The input file is in [1]. Here we can see that it contains a key as a
Long (I think) and bytes as the value.
[2] is my map class. The goal of the map class is to read the
input data,
Hi Hadoop User,
I am processing a 23 GB file on 21 nodes.
I have tried both options:
mapreduce.job.reduces=50
mapred.tasktracker.reduce.tasks.maximum=5
in mapred-site.xml,
but still only one reducer is running.
Is there any configuration setting still to be corrected?
Thanks
Ravikant
What is the file format?
Thanks,
Krishna
On 26 Jun 2015 7:33 pm, Ravikant Dindokar ravikant.i...@gmail.com wrote:
Hi Hadoop User,
I am processing a 23 GB file on 21 nodes.
I have tried both options:
mapreduce.job.reduces=50
mapred.tasktracker.reduce.tasks.maximum=5
in
Hi All,
The region servers are failing to start after Kerberos is enabled, with the
below error.
Hadoop 2.6.0
HBase 0.98.4
2015-06-24 15:58:48,884 DEBUG [RS_OPEN_META-mdcthdpdas06lp:60020-0]
regionserver.HRegion: Registered coprocessor service: region=hbase:meta,,1
service=AuthenticationService
Hi,
Is there some kind of security aspect to a container in terms of local
filesystem access?
Is it possible, for example, to chroot containers so they won't be able to
read/write anywhere on the local FS except their own home dir?
Thanks,
Daniel
Can you post the complete stack trace for 'Failed to get FileSystem instance'?
What's the permission for /apps/hbase/staging?
Looking at the commit log of SecureBulkLoadEndpoint.java, there have been a lot
of bug fixes since 0.98.4.
Please consider upgrading HBase.
Cheers
On Fri, Jun 26, 2015 at
Set the value directly in code. Given a JobConf instance job, call
job.setNumReduceTasks(100);
This worked for me.
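For reference, a minimal old-API driver sketch (the identity mapper/reducer stand
in for the real classes; the job name and argument handling are illustrative):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class ReducerCountDriver {
    public static void main(String[] args) throws Exception {
        JobConf job = new JobConf(ReducerCountDriver.class);
        job.setJobName("reducer-count-demo");
        job.setMapperClass(IdentityMapper.class);   // stand-in for the real mapper
        job.setReducerClass(IdentityReducer.class); // stand-in for the real reducer
        job.setOutputKeyClass(LongWritable.class);  // TextInputFormat key type
        job.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Setting the reducer count on the job takes precedence over
        // mapreduce.job.reduces from mapred-site.xml.
        job.setNumReduceTasks(100);
        JobClient.runJob(job);
    }
}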
Thanks
On Fri, Jun 26, 2015 at 9:07 PM, Ravikant Dindokar ravikant.i...@gmail.com
wrote:
A text file with tab-separated values.
On Fri, Jun 26, 2015 at 7:35 PM, Krishna Kalyan
The problem can be thought of as assigning a line number to each line. Is there
any built-in functionality in Hadoop which can do this?
On Fri, Jun 26, 2015 at 1:11 PM, Ravikant Dindokar ravikant.i...@gmail.com
wrote:
Yes, there can be loops in the graph.
On Fri, Jun 26, 2015 at 9:09 AM, Harshit
Hello,
I'm running Hadoop 2.6 and I have encountered a problem with the
ResourceManager. After a restart, the ResourceManager refuses to start with the
following error:
2015-06-26 08:54:10,342 INFO attempt.RMAppAttemptImpl
(RMAppAttemptImpl.java:recover(796)) - Recovering attempt:
I see two issues here which go somewhat against the architecture and idea of
M/R (or distributed and parallel programming models).
1- The map and reduce tasks are supposed to be shared-nothing and
independent tasks. If you add a functionality like this where you need to make
sure that some data is