split to a small size and balanced across all nodes, so our Spark job can run quickly.
Tks, qihuang.zheng
Original Message
From: Robert Coli <rc...@eventbrite.com>
To: user@cassandra.apache.org
Sent: Friday, November 13, 2015 04:04
Subject: Re: Data.db too large and after sstableloader still large
We took a snapshot and found some Data.db files are too large:
[qihuang.zheng@spark047219 5]$ find . -type f -size +800M -print0 | xargs -0 ls -lh
-rw-r--r--. 2 qihuang.zheng users 1.5G 10月 28 14:49
./forseti/velocity/forseti-velocity-jb-103631-Data.db
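One way to break such a file up before loading it anywhere is the sstablesplit tool shipped with Cassandra. This is a hedged sketch, assuming the node can be stopped first and that a 256 MB target size suits your workload:

```shell
# Run only while Cassandra is down on this node: sstablesplit rewrites live files.
# -s is the target size per output SSTable in MB (256 is an arbitrary choice here);
# --no-snapshot skips the automatic pre-split snapshot.
sstablesplit --no-snapshot -s 256 ./forseti/velocity/forseti-velocity-jb-103631-Data.db
```

Smaller SSTables also stream and balance better across nodes, which matches the goal of spreading work across Spark tasks.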
And after running sstableloader to the new cluster, one node has
Original snapshot files:
[qihuang.zheng@spark047219 226_1105]$ ll 2/forseti/velocity/ -h | grep Data
-rw-r--r--. 1 qihuang.zheng users 158M 10月 28 15:03 forseti-velocity-jb-102486-Data.db
-rw-r--r--. 1 qihuang.zheng users 161M 10月 28 16:28 forseti-velocity-jb-103911-Data.db
-rw-r--r--. 1
kill -9 `cat /var/run/datastax-agent/datastax-agent.pid`
sudo rm -rf /var/lib/datastax-agent
sudo rm -rf /usr/share/datastax-agent
qihuang.zheng
Original Message
From: Kai Wang <dep...@gmail.com>
To: user <u...@cassandra.apache.org>
Sent: Thursday, November 5, 2015 04:39
Subject: Can't save Opscenter Dashboard
Hi,
Today
-name-test-cluster-configured-name
The node can start but has this warning:
WARN 16:41:35,824 ClusterName mismatch from /192.168.47.216 cluster_1!=cluster2
Doing it this way also causes some problems with nodetool status:
nodetool status at DC1 nodes:
[qihuang.zheng@spark047219 ~]$ /usr/install/cassandra/bin/nodet
Some of our nodes have a Load that is too large, but some are normal.
[qihuang.zheng@cass047221 forseti]$ /usr/install/cassandra/bin/nodetool status
-- Address Load Tokens Owns Host ID Rack
UN 192.168.47.221 2.66 TB 256 8.7% 87e100ed-85c4-44cb-9d9f-2d602d016038 RAC1
appen.
I also tried using getLongOption, but this exception still happens.
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md
At first I wanted to file an issue on the spark-cassandra-connector project, but there is no issue tracker there, so I ask here.
Tks, qihuang.zheng
原始邮件
发
I use nodetool cfstats to see the table's status, and find Compacted partition maximum bytes: 190G.
Is there any way to find this largest wide partition row?
[qihuang.zheng@cass047202 cassandra]$ nodetool cfstats forseti.velocity
Keyspace: forseti
Read Count: 10470099
Read Latency: 1.3186399419909973
1034430957
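cfstats only reports the maximum compacted partition size, not which partition key it belongs to. As a first step, the per-table histogram at least shows how skewed the distribution is; this is a sketch for the 2.0 syntax (newer releases also offer nodetool toppartitions to sample hot keys):

```shell
# Partition size and cell count percentiles for one table (keyspace, then table).
nodetool cfhistograms forseti velocity
```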
I don’t know when the decommission will finish, or whether something is wrong inside. Just 400G of data taking 3 days (and still unfinished) seems abnormal.
Tks, qihuang.zheng
, it’s really faster than just using the Java driver API.
But we may meet some problems in the production env, as our Spark nodes are deployed totally differently from the Cassandra nodes.
qihuang.zheng
Original Message
From: DuyHai Doan <doanduy...@gmail.com>
To: user <u...@cassandra.apache.org>
Sent: Thursday, October 22, 2015 19:50
Subject: Re: C* Table
(FallthroughRetryPolicy.INSTANCE);
return statement;
}
So that you can set ConsistencyLevel differently for read and write.
Tks, qihuang.zheng
Original Message
From: Ajay Garg <ajaygargn...@gmail.com>
To: user <u...@cassandra.apache.org>
Sent: Tuesday, October 27, 2015 02:17
Subject: Can consistency-levels be different for "read"
I just want to know when StatusLogger output will appear. I found this question:
http://qnalist.com/questions/4783598/help-on-statuslogger-output
but no one replied to it. It seems this question was asked in Mar 2014 and no one noticed it. I just pull it out and hope someone can answer it.
TKS.
qihuang.zheng
Original Message
From:
StatusLogger, I couldn’t figure out what was happening inside C* at that moment.
Which status messages should I care about in StatusLogger’s printed output?
Is the Pending count important? Or should I care about Memtable ops,data
qihuang.zheng
Original Message
From: qihuang.zheng <qihuang.zh...@fraudmetrix.cn>
To: user
All Time Blocked
INFO [ScheduledTasks:1] 2015-10-21 21:09:42,725 StatusLogger.java (line 70)
ReadStage 7 7 644911 0 0
qihuang.zheng
.
3. The Survivor space's object age counter is always 1, as objects with counter=2 are promoted to Old and then disappear from the survivor space.
Please tell me if this is right.
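That observation is consistent with the JVM settings shipped in the default cassandra-env.sh, assuming stock settings are in use:

```shell
# Default cassandra-env.sh fragment: objects that survive one young GC are
# promoted to Old on the next, so survivor ages rarely show values above 1.
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
```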
TKS.
qihuang.zheng
tables, it’s too slow.
After running 20 min, an exception like NoHostAvailableException happened; of course the data didn’t finish syncing.
And our production env has nearly 25 billion rows, which is unacceptable for this case. Are there other ways?
Thanks Regards,
qihuang.zheng
Original Message
From: Jeff Jirsa <jeff.ji
an’t fit our situation because our data is too large
(10 nodes; one node has 400G of data).
I also tried the Java API, querying the origin table and then inserting into 3 different split tables, but it seems too slow.
Any solution for quick data migration?
TKS!!
PS: Cass version: 2.0.15
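Given the snapshot plus sstableloader workflow used earlier in the thread, one hedged migration path on 2.0 is to stream snapshot files directly; the tag, path, and target address below are placeholders:

```shell
# 1. Snapshot the source keyspace (the tag name is arbitrary).
nodetool snapshot -t migrate forseti
# 2. Point sstableloader at a directory laid out as <keyspace>/<table>/ and
#    give it a contact point (-d) in the target cluster.
sstableloader -d 192.168.47.221 /path/to/snapshot/forseti/velocity/
```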
Thanks Regards,
qihuang.zheng