Hi. So far so good: after raising the file descriptor limit and changing dfs.datanode.socket.write.timeout and dfs.datanode.max.xcievers, my cluster runs stably.
Thank you and best regards.

P.S. Regarding the missing delete-multiple-columns functionality, I filed a JIRA: https://issues.apache.org/jira/browse/HBASE-961

On Sun, Oct 26, 2008 at 12:58 AM, Michael Stack <[EMAIL PROTECTED]> wrote:
> Slava Gorelik wrote:
>> Hi. Haven't tried them yet; I'll try tomorrow morning. In general the
>> cluster is working well; the problems begin when I try to add 10M rows,
>> after 1.2M it happened.
>
> Anything else running besides the regionserver or datanodes that would
> suck resources? When datanodes begin to slow, we begin to see the issue
> Jean-Adrien's configurations address. Are you uploading using MapReduce?
> Are TTs running on the same nodes as the datanode and regionserver? How
> are you doing the upload? Describe what your uploader looks like (sorry
> if you've already done this).
>
>> I already changed the limit of file descriptors.
>
> Good.
>
>> I'll try to change the properties:
>>
>> <property>
>>   <name>dfs.datanode.socket.write.timeout</name>
>>   <value>0</value>
>> </property>
>>
>> <property>
>>   <name>dfs.datanode.max.xcievers</name>
>>   <value>1023</value>
>> </property>
>
> Yeah, try it.
>
>> And I'll let you know. Are there any other prescriptions? Did I miss
>> something?
>>
>> BTW, off topic, but I sent an e-mail to the list recently and I can't
>> see it: is it possible to delete multiple columns in any way by regex,
>> for example column_name_*?
>
> Not that I know of. If it's not in the API, it should be. Mind filing a
> JIRA?
>
> Thanks Slava.
> St.Ack
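Until something like HBASE-961 lands, the usual workaround for a pattern delete is client-side: fetch the row's column names, match them against a java.util.regex pattern, and issue one delete per match. A minimal sketch of the matching step is below; the column names and the pattern are made-up examples, and the actual per-column delete call through the HBase client is left as a comment since the API details depend on your HBase version.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class ColumnPatternDelete {

    // Given all column names for a row, return those matching the regex.
    static List<String> matchColumns(List<String> columns, String regex) {
        Pattern p = Pattern.compile(regex);
        List<String> matches = new ArrayList<>();
        for (String c : columns) {
            if (p.matcher(c).matches()) {
                matches.add(c);
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        // Hypothetical column names; in a real client you would obtain
        // these from a get/scan of the row in question.
        List<String> columns = List.of(
                "info:column_name_1", "info:column_name_2", "info:other");

        // Select everything whose qualifier starts with "column_name_".
        List<String> toDelete = matchColumns(columns, ".*:column_name_.*");

        for (String c : toDelete) {
            // Here you would issue a per-column delete via the HBase
            // client API for each matched column.
            System.out.println("would delete: " + c);
        }
    }
}
```

Note this needs a read before the deletes, so it is not atomic: columns written between the scan and the delete can be missed, which is part of why a server-side pattern delete would be nicer.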
