Thanks,
I made the changes and everything works fine!! Many thanks!!
Now I am having problems converting BSONWritable to BSONObject and vice versa.
Is there an automatic way to do it, or should I write a parser myself?
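In case it helps: in the mongo-hadoop connector, BSONWritable is a thin wrapper around a BSONObject, so no custom parser should be needed. A rough sketch (assuming com.mongodb.hadoop.io.BSONWritable from the connector; the getDoc() accessor is from memory, so check the javadoc for your version):

```java
// Sketch only: assumes the mongo-hadoop connector is on the classpath.
import org.bson.BSONObject;
import org.bson.BasicBSONObject;
import com.mongodb.hadoop.io.BSONWritable;

public class BsonConversionSketch {
    public static void main(String[] args) {
        BSONObject doc = new BasicBSONObject("price", 42.0);

        // BSONObject -> BSONWritable: wrap it.
        BSONWritable writable = new BSONWritable(doc);

        // BSONWritable -> BSONObject: unwrap it.
        BSONObject back = writable.getDoc();
    }
}
```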
And regarding the tests on Windows, does anyone have experience?
Thanks again!!
Best regards,
Hi Expert,
Below are my steps. Is this a Hadoop bug, or did I miss anything? Thanks!
Step:
[A] Upgrade
1. Install Hadoop 2.2.0 cluster
2. Stop Hadoop services
3. Replace 2.2.0 binaries with 2.4.1 binaries
4. Start datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
5. Start namenode wit
I observed that in a YARN cluster, you set these properties:
yarn.resourcemanager.hostname.rm-id1
yarn.resourcemanager.hostname.rm-id2
not yarn.resourcemanager.hostname.
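For reference, a minimal yarn-site.xml fragment for ResourceManager HA, using the thread's rm-id1/rm-id2 as the logical RM ids (hostnames below are invented placeholders):

```xml
<!-- Sketch of the relevant yarn-site.xml properties for RM HA. -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm-id1,rm-id2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm-id1</name>
  <value>rm1.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm-id2</name>
  <value>rm2.example.com</value>
</property>
```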
On 9/17/14, Matt Narrell wrote:
> How do I configure the “yarn.resourcemanager.hostname” property when in an
> HA configuration?
>
Dear All,
Recently I have observed that whenever a Hadoop archive job is running, all
HBase inserts (puts) slow down, from an average of 0.001 to 0.023, that is,
23x slower.
I wonder if there is a way to slow down the archive process, such as
restricting the read/write buffer?
One of the restriction already implem
I don't see any hs_err file in the current directory, so I don't think
that is the case.
On Wed, Sep 17, 2014 at 2:21 PM, S.L wrote:
>
> I am not sure; I am running a sequence of MRv1 jobs using a bash script,
> and this seems to happen on the 4th iteration consistently. How do I confirm
> this possi
I am not sure; I am running a sequence of MRv1 jobs using a bash script, and
this seems to happen on the 4th iteration consistently. How do I confirm this
possibility?
On Wed, Sep 17, 2014 at 1:34 PM, Vinod Kumar Vavilapalli wrote:
> Is it possible that the client JVM is somehow getting killed while t
Is it possible that the client JVM is somehow getting killed while the YARN
application finishes as usual on the cluster in the background?
+Vinod
On Wed, Sep 17, 2014 at 9:29 AM, S.L wrote:
>
>Hi All,
>
> I am running a MRV1 job on Hadoop YARN 2.3.0 cluster , the problem is when
> I submit
Hi All,
I am running an MRv1 job on a Hadoop YARN 2.3.0 cluster. The problem is that
when I submit this job to YARN, the application running in YARN is marked as
complete even though the console reports it as only 58% complete. I have
confirmed that it is also not printing the log statements that it is supposed
to print.
How do I configure the “yarn.resourcemanager.hostname” property when in an HA
configuration?
It seems that this property will configure how the UI knits together the
node/application/timeline/etc. UIs into a seamless experience. The issue I
have come across is that this property seems to only accept
You are using String as the output key. java.lang.String is not Writable;
change it to Text, just like you did for the Mapper.
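The underlying reason is that Hadoop serializes keys and values itself, so they must implement Writable's write/readFields methods, which Text provides and String lacks. Here is a self-contained sketch of that contract; MiniWritable and MiniText are made-up stand-ins for the real org.apache.hadoop.io types:

```java
import java.io.*;

// Minimal stand-in for Hadoop's Writable contract (the real interface is
// org.apache.hadoop.io.Writable). Keys and values must know how to
// serialize themselves; java.lang.String has no such methods, which is
// why the framework rejects it and org.apache.hadoop.io.Text works.
interface MiniWritable {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}

// A toy Text-like wrapper: serializes its contents as UTF-8.
class MiniText implements MiniWritable {
    private String value = "";
    MiniText() {}
    MiniText(String v) { value = v; }
    public void write(DataOutput out) throws IOException { out.writeUTF(value); }
    public void readFields(DataInput in) throws IOException { value = in.readUTF(); }
    public String toString() { return value; }
}

public class WritableDemo {
    public static void main(String[] args) throws IOException {
        MiniText original = new MiniText("average_price");

        // Serialize to bytes, as the framework does between map and reduce.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        // Deserialize into a fresh instance.
        MiniText restored = new MiniText();
        restored.readFields(
            new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        System.out.println(restored); // prints "average_price"
    }
}
```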
Regards,
Shahab
On Wed, Sep 17, 2014 at 10:43 AM, Blanca Hernandez <
blanca.hernan...@willhaben.at> wrote:
> Thanks for answering:
>
>
>
> hadoop jar /tmp/hadoop-test.jar
> at.w
Thanks for answering:
hadoop jar /tmp/hadoop-test.jar at.willhaben.hadoop.AveragePriceCalculationJob
In the AveragePriceCalculationJob I have my configuration:
private static class AveragePriceCalculationJob extends MongoTool {
private AveragePriceCalculationJob(AveragePriceNode current
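For context, mongo-hadoop drivers of this shape are usually launched through ToolRunner. A rough, hypothetical sketch (class, URI, and method names are from memory and illustrative only; check the MongoConfig javadoc for your connector version):

```java
// Hypothetical sketch: assumes the mongo-hadoop connector (MongoTool,
// MongoConfig) on the classpath; URIs and class names are invented.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.util.ToolRunner;
import com.mongodb.hadoop.util.MongoConfig;
import com.mongodb.hadoop.util.MongoTool;

public class AveragePriceDriver extends MongoTool {
    public AveragePriceDriver() {
        Configuration conf = new Configuration();
        MongoConfig config = new MongoConfig(conf);
        setConf(conf);
        config.setInputURI("mongodb://localhost:27017/test.prices");
        config.setOutputURI("mongodb://localhost:27017/test.averages");
        config.setMapper(AveragePriceMapper.class);
        config.setMapperOutputKey(Text.class); // Text, not String
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new AveragePriceDriver(), args));
    }
}
```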
Can you provide the driver code for this job?
Regards,
Shahab
On Wed, Sep 17, 2014 at 10:28 AM, Blanca Hernandez <
blanca.hernan...@willhaben.at> wrote:
> Hi again, I changed the String objects with org.apache.hadoop.io.Text
> objects (why is String not accepted?), and now I get another excepti
Hi again, I changed the String objects to org.apache.hadoop.io.Text objects
(why is String not accepted?), and now I get another exception, so I don't
really know if I solved something or broke something:
java.lang.Exception: java.lang.NullPointerException
at
org.apache.hadoop.mapr
Hi!
I am getting a CCE (ClassCastException) and don't really understand why...
Here is my mapper:
public class AveragePriceMapper extends Mapper{
@Override
public void map(final String key, final BSONObject val, final Context
context) throws IOException, InterruptedException {
String id = "result_of
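For comparison, the usual fix for this kind of ClassCastException is to use Hadoop's Writable types in both the generics and the map signature. A rough sketch (the output value type and field name here are guesses, not taken from the thread):

```java
// Sketch only: assumes hadoop and mongo-hadoop on the classpath.
import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.bson.BSONObject;

public class AveragePriceMapper
        extends Mapper<Object, BSONObject, Text, Text> {
    @Override
    public void map(final Object key, final BSONObject val, final Context context)
            throws IOException, InterruptedException {
        // Build the output key from the document, then emit Writable types.
        // "someField" is a hypothetical field name.
        Text id = new Text(String.valueOf(val.get("someField")));
        context.write(id, new Text(val.toString()));
    }
}
```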
Hello there,
I’m wondering if anyone knows how to move tables from HBase 0.90 to HBase
0.98. I did an export on HBase 0.90 and an import into HBase 0.98; however,
it throws an exception like:
java.lang.Exception: java.io.IOException: keyvalues=NONE read 2 bytes, should
read 143121
at
org.apache.hado
VirtualBox is known for causing instabilities in the host kernel (or at
least, it used to). You might be better off asking for support there:
https://www.virtualbox.org/wiki/Bugtracker
- André
On Wed, Sep 17, 2014 at 4:25 AM, Li Li wrote:
> hi all,
> I know it's not a problem related to had
Hi all,
Can someone clarify the typical use cases for HDFS and HBase? Since Hive
provides SQL-like queries, does that mean Hive (bundled with HDFS) can
replace HBase just for querying? Also, if I want to do visualization and
data mining, which kind of access pattern is better, HDFS or HBase?
Thank you.
Ch
Hi Yusaku,
Thank you for your reply; unfortunately, that's not the problem. The new
slave node has a single drive, so it is using the default data directory path.
I'll post this to the Ambari list.
Regards,
Charles
On 16 September 2014 23:12, Yusaku Sako wrote:
> Charles,
>
> If the newly added s