CUSTID12345    column=CUSTOMER_INFO:NAME, timestamp=1365600052104, value=Omkar Joshi
CUSTID614      column=CUSTOMER_INFO:NAME, timestamp=1365601350972, value=Prachi Shah
2 row(s) in 0.8760 seconds
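For reference, the same cell can be fetched from a Java client. A minimal sketch against the 0.94-era HTable API (the table name CUSTOMERS is taken from later in this thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class GetCustomerName {
    public static void main(String[] args) throws Exception {
        // Picks up hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "CUSTOMERS");
        try {
            Get get = new Get(Bytes.toBytes("CUSTID12345"));
            get.addColumn(Bytes.toBytes("CUSTOMER_INFO"), Bytes.toBytes("NAME"));
            Result result = table.get(get);
            System.out.println(Bytes.toString(
                    result.getValue(Bytes.toBytes("CUSTOMER_INFO"), Bytes.toBytes("NAME"))));
        } finally {
            table.close();
        }
    }
}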
4. The hbase-site.xml has the following configuration:
(HMaster.java:1944)
Regards,
Omkar Joshi
Technology Office
Phone : 022-67954364
-----Original Message-----
From: Omkar Joshi [mailto:omkar.jo...@lntinfotech.com]
Sent: Monday, April 15, 2013 9:12 AM
To: user@hbase.apache.org
Subject: Unable to connect from windows desktop to HBase
Hi,
I'm trying to connect from my Windows desktop to HBase but am unable to do so.
Hi Azuryy,
Thanks for the reply!
It solved the issue, but I'm not clear on why this is required; I also had to
include Google's protobuf-java-2.4.1.jar, whose purpose is unclear to me.
Regards,
Omkar Joshi
-----Original Message-----
From: Azuryy Yu [mailto:azury...@gmail.com]
Hi Azuryy,
Thanks for the detailed replies!
Regards,
Omkar Joshi
Technology Office
-----Original Message-----
From: Azuryy Yu [mailto:azury...@gmail.com]
Sent: Monday, April 15, 2013 4:11 PM
To: user@hbase.apache.org
Subject: Re: Unable to connect from windows desktop to HBase
you need to put the HBase jar and its dependent jars (including protobuf-java-2.4.1.jar) on your client classpath.
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:133)
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:201)
... 27 more
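A frequent cause of this kind of failure is the client not knowing where the cluster's ZooKeeper quorum lives. A minimal connectivity sketch that sets it programmatically (the host cldx-1139-1033 is taken from elsewhere in this thread; the port is the default and may differ):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class RemoteClientCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Assumed values - replace with the actual quorum host(s) and client port.
        conf.set("hbase.zookeeper.quorum", "cldx-1139-1033");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        // Instantiating HTable forces a round trip to ZooKeeper and the region server.
        HTable table = new HTable(conf, "CUSTOMERS");
        System.out.println("Connected; CUSTOMERS is reachable.");
        table.close();
    }
}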
I disabled and dropped CUSTOMERS and recreated it, but the issue keeps recurring.
Please guide me.
Regards,
Omkar Joshi
Hi Ted,
There was a space after the address (now feeling like a jackass :( ).
I have another issue but will post it in a new thread.
Thanks a lot for the help!
Regards,
Omkar Joshi
-----Original Message-----
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Tuesday, April 16, 2013 11:08 AM
To: user@hbase.apache.org
h the data -
I'm not sure if this is required!
Regards,
Omkar Joshi
hadoop jar ${HBASE_HOME}/hbase-0.94.6.1.jar completebulkload hdfs://cldx-1139-1033:9000/hbase/storefileoutput PRODUCTS
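The same step can be driven from Java via LoadIncrementalHFiles, which is the class the completebulkload command invokes. A sketch using the paths from the command above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadProducts {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
        HTable table = new HTable(conf, "PRODUCTS");
        // Moves the prepared store files into the PRODUCTS regions.
        loader.doBulkLoad(new Path("hdfs://cldx-1139-1033:9000/hbase/storefileoutput"), table);
        table.close();
    }
}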
Thanks for the help!
Regards,
Omkar Joshi
-----Original Message-----
From: Anoop Sam John [mailto:anoo...@huawei.com]
Sent: Tuesday, April 16, 2013 12:26 PM
To: user@hbase.apache.org
Is there a way wherein a text file can be loaded directly from the local file
system onto HBase?
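If pushing the file to HDFS first is not an option, one workaround is a plain client-side loader that streams the local file and writes Puts. A sketch, assuming a tab-delimited file and a PRODUCTS table with a (hypothetical) PRODUCT_INFO family:

import java.io.BufferedReader;
import java.io.FileReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class LocalFileLoader {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "PRODUCTS");
        table.setAutoFlush(false); // batch the puts client-side for throughput
        BufferedReader reader = new BufferedReader(new FileReader(args[0]));
        String line;
        while ((line = reader.readLine()) != null) {
            // Hypothetical layout: rowkey <TAB> name
            String[] fields = line.split("\t");
            Put put = new Put(Bytes.toBytes(fields[0]));
            put.add(Bytes.toBytes("PRODUCT_INFO"), Bytes.toBytes("NAME"),
                    Bytes.toBytes(fields[1]));
            table.put(put);
        }
        reader.close();
        table.flushCommits();
        table.close();
    }
}

This keeps everything on the client, so it is much slower than a bulk load, but it avoids staging the file in HDFS.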
Regards,
Omkar Joshi
Yeah, DFS space is a constraint.
I'll check the options you specified.
Regards,
Omkar Joshi
-----Original Message-----
From: Suraj Varma [mailto:svarma...@gmail.com]
Sent: Wednesday, April 17, 2013 2:07 PM
To: user@hbase.apache.org
Subject: Re: Loading text files from local file system
nt is " + rowCount);
return rowCount;
}
For CUSTOMERS, the response time is acceptable, but for PRODUCTS it is timing
out (even on the shell: 1000851 row(s) in 258.9220 seconds).
What needs to be done to get a response quickly? Is there an approach other
than AggregationClient?
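For reference, the AggregationClient call under discussion looks roughly like this in 0.94. A sketch; it assumes the AggregateImplementation coprocessor is loaded on the region servers (e.g. via hbase.coprocessor.region.classes), and the PRODUCT_INFO family name is hypothetical:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;
import org.apache.hadoop.hbase.util.Bytes;

public class ProductRowCount {
    public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        AggregationClient aggregationClient = new AggregationClient(conf);
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("PRODUCT_INFO")); // restrict the scan to one family
        long rowCount = aggregationClient.rowCount(Bytes.toBytes("PRODUCTS"),
                new LongColumnInterpreter(), scan);
        System.out.println("The row count is " + rowCount);
    }
}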
tions on each occasion you use this website. We will never supply you with
substitute goods. Our VAT registration number is 875 5055 01.;
If I don't use any filter, the row that I'm trying to fetch is returned along
with the 1000s of others, but as soon as I apply the filter, it is not returned.
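The filter being described is presumably something like a SingleColumnValueFilter. A sketch with placeholder family, qualifier, and match value:

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class NameFilterExample {
    public static Scan buildScan() {
        SingleColumnValueFilter filter = new SingleColumnValueFilter(
                Bytes.toBytes("PRODUCT_INFO"),       // hypothetical family
                Bytes.toBytes("NAME"),               // hypothetical qualifier
                CompareOp.EQUAL,
                Bytes.toBytes("some product name")); // placeholder value
        // Without this, rows that lack the column pass the filter unchecked.
        filter.setFilterIfMissing(true);
        Scan scan = new Scan();
        scan.setFilter(filter);
        return scan;
    }
}

A common surprise with this filter is that rows missing the column are returned by default, hence the setFilterIfMissing(true) call.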
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
Regards,
Omkar Joshi
-----Original Message-----
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Wednesday, April 17, 2013 6:46 PM
To: user@hbase.apache.org
Cc: user@hbase.apache.org
more than 6 minutes now :(
What shall I do to speed up the execution to milliseconds (or at least a couple
of seconds)?
Regards,
Omkar Joshi
-----Original Message-----
From: Vedad Kirlic [mailto:kirl...@gmail.com]
Sent: Thursday, April 18, 2013 12:22 AM
To: user@hbase.apache.org
Subject: Re: Speeding up the row count
... (Unknown Source)
Regards,
Omkar Joshi
-----Original Message-----
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Friday, April 19, 2013 3:00 PM
To: user@hbase.apache.org
Cc: user@hbase.apache.org
Subject: Re: Speeding up the row count
Since there is only one region in your table, using aggregation coprocessor wouldn't speed up the count: the scan still runs on a single region server.
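One way to regain parallelism is to recreate the table pre-split into several regions, so the count runs on each region server in parallel. A sketch with placeholder table, family, and split keys (choose real splits from the row-key distribution):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = new HTableDescriptor("PRODUCTS_SPLIT"); // hypothetical name
        desc.addFamily(new HColumnDescriptor("PRODUCT_INFO"));          // hypothetical family
        // Placeholder split keys producing five regions.
        byte[][] splits = new byte[][] {
                Bytes.toBytes("P2000000"), Bytes.toBytes("P4000000"),
                Bytes.toBytes("P6000000"), Bytes.toBytes("P8000000") };
        admin.createTable(desc, splits);
        admin.close();
    }
}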
Hi,
There was a small issue with the data (the delimiters were messed up); the
filters seem to work correctly now.
I'm now working on Hive+HBase integration; Phoenix will be taken up later.
Regards,
Omkar Joshi
-----Original Message-----
From: Ian Varley [mailto:ivar...@salesforce.com]
Sent: Wedn
Closed socket connection for client /172.25.37.135:54432 which had sessionid 0x140e4e97c9d0003
2013-09-03 23:16:30,001 INFO org.apache.zookeeper.server.ZooKeeperServer:
Expiring session 0x140e4e97c9d0003, timeout of 4ms exceeded
What is causing the connection closure? Do I need to increase the time-out
somewhere?
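If the sessions are simply expiring too fast, the client-side session timeout can be raised. A sketch; the value is a placeholder, and it only takes effect if it falls within the ZooKeeper server's min/max session-timeout bounds:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SessionTimeoutConfig {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Placeholder: 2 minutes, in milliseconds.
        conf.setInt("zookeeper.session.timeout", 120000);
        System.out.println("zookeeper.session.timeout = "
                + conf.get("zookeeper.session.timeout"));
    }
}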
Regards,
cluttered, hence providing the thread on an external site) is here:
http://stackoverflow.com/questions/18587512/hbase-distributed-mode
Regards,
Omkar Joshi
HBase 0.94.11, which has the zookeeper-3.4.5 jar in its lib directory.
Regards,
Omkar Joshi
Technology Office
Phone : 022-67954364
-----Original Message-----
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Tuesday, September 03, 2013 7:53 PM
To: user@hbase.apache.org
Subject: Re: Sqoop import to HBase failing
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Regards,
Omkar Joshi
-----Original Message-----
From: jdcry...@gmail.com [mailto:jdcry...@gmail.com] On Behalf Of Jean-Daniel Cryans
Sent: Tuesday, September 03, 2013 11:48
        Calendar afterJob = Calendar.getInstance();
        System.out.println("Job Time ended SentimentCalculation "
                + afterJob.getTime());
        return 0;
    }
}
Regards,
Omkar Joshi
1. Can those tables be kept in-memory and accessed from there and not from the
disk?
2. Is the from-memory or from-disk read transparent to the client? In simple
words, do I need to change the HTable access code in my reducer class? If yes,
what are the changes?
Regards,
Omkar Joshi
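For what it's worth, pinning a column family in memory is a schema-level flag, and reads stay fully transparent to the client (the flag only raises the family's block-cache priority), so the HTable code in the reducer would not change. A sketch of setting it, with the table and family names taken from earlier in this thread:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class InMemoryFamily {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        byte[] tableName = Bytes.toBytes("CUSTOMERS");
        HColumnDescriptor family = new HColumnDescriptor("CUSTOMER_INFO");
        family.setInMemory(true); // favor this family's blocks in the cache
        admin.disableTable(tableName);
        admin.modifyColumn(tableName, family);
        admin.enableTable(tableName);
        admin.close();
    }
}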
Hi JM,
Yes, I have DistributedCache on my mind too, but I'm not sure those tables will
stay read-only in the future. Besides, I want to check whether, at their
current size, they can be kept in-memory in HBase.
Regards,
Omkar Joshi
-----Original Message-----
From: Jean-Marc Spaggiari [mailto:jean