Beware, HTablePool is not totally thread-safe either: https://issues.apache.org/jira/browse/HBASE-6651.
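One way to sidestep both issues is to give each thread its own HTable built from a shared Configuration, for example through a ThreadLocal, so the write method needs no synchronized at all. A minimal sketch against the 0.92-era client API (the class name and the "rank" table name below are only placeholders, not from the thread):

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;

// Sketch: one HTable per thread, all sharing a single Configuration, so no
// HTable (or HTablePool) instance is ever touched by more than one thread.
public class RankTableWriter
{
    private static final Configuration CONF = HBaseConfiguration.create();

    private static final ThreadLocal<HTable> RANK_TABLE = new ThreadLocal<HTable>()
    {
        @Override
        protected HTable initialValue()
        {
            try
            {
                // "rank" is a placeholder table name.
                return new HTable(CONF, "rank");
            }
            catch (IOException e)
            {
                throw new RuntimeException(e);
            }
        }
    };

    // No "synchronized" needed: each thread writes through its own HTable.
    public void flush(List<Put> puts) throws IOException
    {
        RANK_TABLE.get().put(puts);
    }
}

Each thread then pays for its own write buffer, but nothing has to be locked on the client side.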
On Mon, Feb 4, 2013 at 9:42 PM, Haijia Zhou <leons...@gmail.com> wrote:
> Hi, Bing,
>
> Not sure about your scenario, but the HTable class is not thread safe for
> either reads or writes.
> If you are writing to or reading from a table from multiple threads, you
> can consider using HTablePool.
>
> Hope it helps
>
> HJ
>
>
> On Mon, Feb 4, 2013 at 3:32 PM, Bing Li <lbl...@gmail.com> wrote:
>
> > Dear Ted and Harsh,
> >
> > I am sorry, I didn't keep the exceptions. They occurred many days ago.
> > My current version is 0.92.
> >
> > Now "synchronized" is removed. Is that correct?
> >
> > I will test whether such exceptions are still raised and let you know.
> >
> > Thanks!
> >
> > Best wishes,
> > Bing
> >
> >
> > On Tue, Feb 5, 2013 at 4:25 AM, Ted Yu <yuzhih...@gmail.com> wrote:
> > > Bing:
> > > Use pastebin.com instead of attaching the exception report.
> > >
> > > What version of HBase are you using?
> > >
> > > Thanks
> > >
> > >
> > > On Mon, Feb 4, 2013 at 12:21 PM, Harsh J <ha...@cloudera.com> wrote:
> > >>
> > >> What exceptions do you actually receive - can you send them here?
> > >> Knowing that is key to addressing your issue.
> > >>
> > >> On Tue, Feb 5, 2013 at 1:50 AM, Bing Li <lbl...@gmail.com> wrote:
> > >> > Dear all,
> > >> >
> > >> > When writing data into HBase, I sometimes get exceptions. I guess
> > >> > they might be caused by concurrent writes, but I am not sure.
> > >> >
> > >> > My question is whether it is necessary to put "synchronized" on the
> > >> > writing methods. The sample code is below.
> > >> >
> > >> > I think the synchronized keyword must lower write performance, and
> > >> > concurrent writing is sometimes needed in my system.
> > >> >
> > >> > Thanks so much!
> > >> >
> > >> > Best wishes,
> > >> > Bing
> > >> >
> > >> > public synchronized void AddDomainNodeRanks(String domainKey,
> > >> >     int timingScale, Map<String, Double> nodeRankMap)
> > >> > {
> > >> >     List<Put> puts = new ArrayList<Put>();
> > >> >     Put domainKeyPut;
> > >> >     Put timingScalePut;
> > >> >     Put nodeKeyPut;
> > >> >     Put rankPut;
> > >> >
> > >> >     byte[] domainNodeRankRowKey;
> > >> >
> > >> >     for (Map.Entry<String, Double> nodeRankEntry : nodeRankMap.entrySet())
> > >> >     {
> > >> >         domainNodeRankRowKey = Bytes.toBytes(RankStructure.DOMAIN_NODE_RANK_ROW +
> > >> >             Tools.GetAHash(domainKey + timingScale + nodeRankEntry.getKey()));
> > >> >
> > >> >         domainKeyPut = new Put(domainNodeRankRowKey);
> > >> >         domainKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> > >> >             RankStructure.DOMAIN_NODE_RANK_DOMAIN_KEY_COLUMN,
> > >> >             Bytes.toBytes(domainKey));
> > >> >         puts.add(domainKeyPut);
> > >> >
> > >> >         timingScalePut = new Put(domainNodeRankRowKey);
> > >> >         timingScalePut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> > >> >             RankStructure.DOMAIN_NODE_RANK_TIMING_SCALE_COLUMN,
> > >> >             Bytes.toBytes(timingScale));
> > >> >         puts.add(timingScalePut);
> > >> >
> > >> >         nodeKeyPut = new Put(domainNodeRankRowKey);
> > >> >         nodeKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> > >> >             RankStructure.DOMAIN_NODE_RANK_NODE_KEY_COLUMN,
> > >> >             Bytes.toBytes(nodeRankEntry.getKey()));
> > >> >         puts.add(nodeKeyPut);
> > >> >
> > >> >         rankPut = new Put(domainNodeRankRowKey);
> > >> >         rankPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> > >> >             RankStructure.DOMAIN_NODE_RANK_RANKS_COLUMN,
> > >> >             Bytes.toBytes(nodeRankEntry.getValue()));
> > >> >         puts.add(rankPut);
> > >> >     }
> > >> >
> > >> >     try
> > >> >     {
> > >> >         this.rankTable.put(puts);
> > >> >     }
> > >> >     catch (IOException e)
> > >> >     {
> > >> >         e.printStackTrace();
> > >> >     }
> > >> > }
> > >>
> > >>
> > >>
> > >> --
> > >> Harsh J
> >
>

--
Adrien Mogenet
06.59.16.64.22
http://www.mogenet.me
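A side note on the quoted code: the four Put objects built for each map entry all target the same row key, so they can be collapsed into a single Put carrying the four columns, which shrinks the batch the client ships to the region server. A sketch of just that loop, reusing the RankStructure and Tools names from the original posting:

for (Map.Entry<String, Double> nodeRankEntry : nodeRankMap.entrySet())
{
    byte[] rowKey = Bytes.toBytes(RankStructure.DOMAIN_NODE_RANK_ROW +
        Tools.GetAHash(domainKey + timingScale + nodeRankEntry.getKey()));

    // One Put per row; add() is called once per column of that row.
    Put put = new Put(rowKey);
    put.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
        RankStructure.DOMAIN_NODE_RANK_DOMAIN_KEY_COLUMN, Bytes.toBytes(domainKey));
    put.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
        RankStructure.DOMAIN_NODE_RANK_TIMING_SCALE_COLUMN, Bytes.toBytes(timingScale));
    put.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
        RankStructure.DOMAIN_NODE_RANK_NODE_KEY_COLUMN, Bytes.toBytes(nodeRankEntry.getKey()));
    put.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
        RankStructure.DOMAIN_NODE_RANK_RANKS_COLUMN, Bytes.toBytes(nodeRankEntry.getValue()));
    puts.add(put);
}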