Re: Repair hanges on 1.1.4
Hi Aaron, Thank you for your feedback. I have also installed DataStax OPS center and its nothing shows progress of repair. Previously every repair progress also shown on OPS center and once it 100%, reapir also completed on nodes. but now reapir is in progress on node but OPS center nothing shows. Secondly please find netstats and compactionstats results as under; # /opt/apache-cassandra-1.1.4/bin/nodetool -h localhost netstats Mode: NORMAL Not sending any streams. Not receiving any streams. Pool NameActive Pending Completed Commandsn/a 05327870 Responses n/a 0 163271943 # /opt/apache-cassandra-1.1.4/bin/nodetool -h localhost compactionstats pending tasks: 0 Active compaction remaining time :n/a Regards, Adeel Akbar Quoting aaron morton : The errors from Hints are not concerned with repair. Increasing the rpc_timeout may help with those. If it's logging about 0 hints you may be seeing this https://issues.apache.org/jira/browse/CASSANDRA-5068 How did repair hang ? Check for progress with nodetool compactionstats and nodetool netstats. Cheers - Aaron Morton Freelance Cassandra Consultant New Zealand @aaronmorton http://www.thelastpickle.com On 13/04/2013, at 3:01 AM, Alexis Rodríguez wrote: Adeel, It may be a problem in the remote node, could you check the system.log? Also you might want to check the rpc_timeout_in_ms in both nodes, maybe an increase in this parameter helps. On Fri, Apr 12, 2013 at 9:17 AM, wrote: Hi, I have started repair on newly added node with -pr and this nodes exist on another data center. I have 5MB internet connection and configured setstreamthroughput 1. 
After some time repair goes hang and following meesage found in logs; # /opt/apache-cassandra-1.1.4/bin/nodetool -h localhost ring Address DC RackStatus State Load Effective-Ownership Token 169417178424467235000914166253263322299 10.0.0.3DC1 RAC1Up Normal 93.26 GB 66.67% 0 10.0.0.4DC1 RAC1Up Normal 89.1 GB 66.67% 56713727820156410577229101238628035242 10.0.0.15 DC1 RAC1Up Normal 72.87 GB 66.67% 113427455640312821154458202477256070484 10.40.1.103 DC2 RAC1Up Normal 48.59 GB 100.00% 169417178424467235000914166253263322299 INFO [HintedHandoff:1] 2013-04-12 17:05:49,411 HintedHandOffManager.java (line 372) Timed out replaying hints to /10.40.1.103; aborting further deliveries INFO [HintedHandoff:1] 2013-04-12 17:05:49,411 HintedHandOffManager.java (line 390) Finished hinted handoff of 0 rows to endpoint /10.40.1.103 Why we getting this message and how I prevent repair from this error. Regards, Adeel Akbar
Repair hanges on 1.1.4
Hi, I have started repair on newly added node with -pr and this nodes exist on another data center. I have 5MB internet connection and configured setstreamthroughput 1. After some time repair goes hang and following meesage found in logs; # /opt/apache-cassandra-1.1.4/bin/nodetool -h localhost ring Address DC RackStatus State Load Effective-Ownership Token 169417178424467235000914166253263322299 10.0.0.3DC1 RAC1Up Normal 93.26 GB 66.67% 0 10.0.0.4DC1 RAC1Up Normal 89.1 GB 66.67% 56713727820156410577229101238628035242 10.0.0.15 DC1 RAC1Up Normal 72.87 GB 66.67% 113427455640312821154458202477256070484 10.40.1.103 DC2 RAC1Up Normal 48.59 GB 100.00% 169417178424467235000914166253263322299 INFO [HintedHandoff:1] 2013-04-12 17:05:49,411 HintedHandOffManager.java (line 372) Timed out replaying hints to /10.40.1.103; aborting further deliveries INFO [HintedHandoff:1] 2013-04-12 17:05:49,411 HintedHandOffManager.java (line 390) Finished hinted handoff of 0 rows to endpoint /10.40.1.103 Why we getting this message and how I prevent repair from this error. Regards, Adeel Akbar
Re: Cassandra services down frequently [Version 1.1.4]
Thank you Aaron and Bryan for your advice. I have changed following parameters and now Cassandra running absolutely fine. Please review below setting and advice am I right or right direction. cassandra-env.sh #JVM_OPTS="$JVM_OPTS -ea" MAX_HEAP_SIZE="6G" HEAP_NEWSIZE="500M" cassandra.yaml # do not persist caches to disk key_cache_save_period: 0 row_cache_save_period: 0 key_cache_size_in_mb: 512 row_cache_size_in_mb: 14336 row_cache_provider: SerializingCacheProvider I have a querry, if Cassandra is using JVM for all operations then why we need change above parameters separately in cassandra.yaml. Thanks & Regards Adeel Akbar Quoting aaron morton : We can see from below that you've tweaked and disabled many of the memory "safety valve" and other memory related settings. Agree. Also you are running with JVM heap size of 3.81GB which is non default. For a 16GB node I would expect 8GB. Try restoring the yaml values to the defaults and allowing the cassandra-env.sh file to determine the memory size. Cheers - Aaron Morton Freelance Cassandra Consultant New Zealand @aaronmorton http://www.thelastpickle.com On 5/04/2013, at 12:36 PM, Bryan Talbot wrote: On Thu, Apr 4, 2013 at 1:27 AM, wrote: After some time (1 hour / 2 hour) cassandra shut services on one or two nodes with follwoing errors; Wonder what the workload and schema is like ... We can see from below that you've tweaked and disabled many of the memory "safety valve" and other memory related settings. Those could be causing issues too. hinted_handoff_throttle_delay_in_ms: 0 flush_largest_memtables_at: 1.0 reduce_cache_sizes_at: 1.0 reduce_cache_capacity_to: 0.6 rpc_keepalive: true rpc_server_type: sync rpc_min_threads: 16 rpc_max_threads: 2147483647 in_memory_compaction_limit_in_mb: 256 compaction_throughput_mb_per_sec: 16 rpc_timeout_in_ms: 15000 dynamic_snitch_badness_threshold: 0.0
Cassandra services down frequently [Version 1.1.4]
irectories: - /u/cassandra/data commitlog_directory: /var/log/cassandra/commitlog key_cache_size_in_mb: key_cache_save_period: 14400 row_cache_size_in_mb: 0 row_cache_save_period: 0 row_cache_provider: SerializingCacheProvider saved_caches_directory: /var/log/cassandra/saved_caches commitlog_sync: periodic commitlog_sync_period_in_ms: 1 commitlog_segment_size_in_mb: 32 seed_provider: # Ex: ",," - seeds: "10.0.0.3,10.0.0.4" flush_largest_memtables_at: 1.0 reduce_cache_sizes_at: 1.0 reduce_cache_capacity_to: 0.6 concurrent_reads: 8 concurrent_writes: 32 memtable_flush_queue_size: 4 trickle_fsync: false trickle_fsync_interval_in_kb: 10240 storage_port: 7000 ssl_storage_port: 7001 listen_address: 10.0.0.3 rpc_address: 10.0.0.3 rpc_port: 9160 rpc_keepalive: true rpc_server_type: sync rpc_min_threads: 16 rpc_max_threads: 2147483647 thrift_framed_transport_size_in_mb: 15 thrift_max_message_length_in_mb: 16 incremental_backups: false snapshot_before_compaction: false auto_snapshot: true column_index_size_in_kb: 64 in_memory_compaction_limit_in_mb: 256 multithreaded_compaction: false compaction_throughput_mb_per_sec: 16 compaction_preheat_key_cache: true rpc_timeout_in_ms: 15000 phi_convict_threshold: 8 endpoint_snitch: org.apache.cassandra.locator.PropertyFileSnitch dynamic_snitch_update_interval_in_ms: 100 dynamic_snitch_reset_interval_in_ms: 60 dynamic_snitch_badness_threshold: 0.0 request_scheduler: org.apache.cassandra.scheduler.NoScheduler index_interval: 128 encryption_options: internode_encryption: none keystore: conf/.keystore keystore_password: cassandra truststore: conf/.truststore truststore_password: cassandra Please help me to fix this issue permanently and smooth running of Cassandra nodes. Regards, Adeel Akbar
Multiple Data Center Clusters on Cassandra
Hi, I am running 3 nodes cassandra cluster with replica factor 2 in one DC. Now I need to run multiple data center clusters with cassandra and I have following queries; 1. I want to replicate whole data on another DC and after that both DC's nodes should have complete Data. In which topology is it possible ? 2. If I need backup, what's the command of cluster screen shot? 3. I will use internet connection with VPN facility for traffic and in case disconnection what will happen? Regards, Adeel
Re: Starting Cassandra
Hi, Please check java version with (java -version) command and install java 7 to resolve this issue. Regards, Adeel Akbar Quoting "Sloot, Hans-Peter" : Hello, Can someone help me out? I have installed Cassandra enterprise and followed the cookbook - Configured the cassandra.yaml file - Configured the cassandra-topoloy.properties file But when I try to start the cluster with 'service dse start' nothing starts. With cassandra -f I get: /usr/sbin/cassandra -f xss = -ea -javaagent:/lib/jamm-0.2.5.jar -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms495M -Xmx495M -Xmn100M -XX:+HeapDumpOnOutOfMemoryError -Xss180k Segmentation fault The command cassandra -v I get : xss = -ea -javaagent:/lib/jamm-0.2.5.jar -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms495M -Xmx495M -Xmn100M -XX:+HeapDumpOnOutOfMemoryError -Xss180k 1.1.6-dse-p1 Regards Hans-Peter Dit bericht is vertrouwelijk en kan geheime informatie bevatten enkel bestemd voor de geadresseerde. Indien dit bericht niet voor u is bestemd, verzoeken wij u dit onmiddellijk aan ons te melden en het bericht te vernietigen. Aangezien de integriteit van het bericht niet veilig gesteld is middels verzending via internet, kan Atos Nederland B.V. niet aansprakelijk worden gehouden voor de inhoud daarvan. Hoewel wij ons inspannen een virusvrij netwerk te hanteren, geven wij geen enkele garantie dat dit bericht virusvrij is, noch aanvaarden wij enige aansprakelijkheid voor de mogelijke aanwezigheid van een virus in dit bericht. Op al onze rechtsverhoudingen, aanbiedingen en overeenkomsten waaronder Atos Nederland B.V. goederen en/of diensten levert zijn met uitsluiting van alle andere voorwaarden de Leveringsvoorwaarden van Atos Nederland B.V. van toepassing. Deze worden u op aanvraag direct kosteloos toegezonden. This e-mail and the documents attached are confidential and intended solely for the addressee; it may also be privileged. 
If you receive this e-mail in error, please notify the sender immediately and destroy it. As its integrity cannot be secured on the Internet, the Atos Nederland B.V. group liability cannot be triggered for the message content. Although the sender endeavours to maintain a computer virus-free network, the sender does not warrant that this transmission is virus-free and will not be liable for any damages resulting from any virus transmitted. On all offers and agreements under which Atos Nederland B.V. supplies goods and/or services of whatever nature, the Terms of Delivery from Atos Nederland B.V. exclusively apply. The Terms of Delivery shall be promptly submitted to you on your request. Atos Nederland B.V. / Utrecht KvK Utrecht 30132762
Adding New Node to an Existing Cluster
Hi, I am using Cassandra Cluster 1.1.4 with two nodes alongwith REPLICA factor 2. I have added one new node in my existing Cassandra Cluster with instruction provided on http://www.datastax.com/docs/1.1/operations/cluster_management. Now Its only showing 285 MB data instead of 250GB +. Please let me know, can we required to execute any command to balance the data in all three nodes. # /opt/apache-cassandra-1.1.4/bin/nodetool -h localhost ring *Address DC RackStatus State Load Effective-Ownership Token * 111379633042792501120498209932601771854 XX.XX.XX.C DC1 RAC1Up Normal *286.58 MB * 55.76% 16631224681855479515247230241845664688 XX.XX.XX.B DC1 RAC1Up Normal 278.23 GB 88.55% 91902851206288351623775585543017122534 XX.XX.XX.A DC1 RAC1Up Normal 275.85 GB 55.69% 111379633042792501120498209932601771854 Regards, *Adeel Akbar*
Nodes not synced
Hi, I have setup 2 nodes cluster with Replica factor 2. I have restored snapshot of another cluster on Node A and restarted cassandra process. Node B stil not get any update/data from Node A. Do we need to execute any command to sync both nodes? # /opt/apache-cassandra-1.1.4/bin/nodetool -h localhost ring Address DC RackStatus State Load Effective-Ownership Token 91902851206288351623775585543017122534 XX.XX.XX.XXA 0 0 Up Normal *264.98 GB* 0.00% 59394911263811417432307015371109991999 XX.XX.XX.XXB 0 0 Up Normal *67.34 KB* 0.00% 91902851206288351623775585543017122534 -- Looking for your prompt response. *Adeel*
Data backup and restore
Dear All, I have Cassandra 1.1.4 cluster with 2 nodes. I need to take backup and restore on staging for testing purpose. I have taken snapshot with below mentioned command but It created snapshot on every Keyspace's column family. Is there any other way to take backup and restore quick. /opt/apache-cassandra-1.1.4/bin/nodetool -h localhost snapshot -t cassandra_bkup _*Snapshot directory:*_ /var/log/cassandra/data//
Cassandra 1.1.4 performance issue
user %nice %system %iowait %steal %idle 12.520.000.000.000.00 87.48 Device: rrqm/s wrqm/s r/s w/srMB/swMB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.000.00 0.00 0.00 sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.000.00 0.00 0.00 sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.000.00 0.00 0.00 sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.000.00 0.00 0.00 sdb1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.000.00 0.00 0.00 dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.000.00 0.00 0.00 dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.000.00 0.00 0.00 dm-2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.000.00 0.00 0.00 Device: rMB_nor/swMB_nor/srMB_dir/s wMB_dir/srMB_svr/swMB_svr/s ops/srops/swops/s Please help us to improve performance of Cassandra cluster as well as fix all issues. -- Thanks & Regards *Adeel**Akbar*
Re: Problem when starting Cassandra 1.1.5
Please upgrade the JAVA with 1.7.X then it will be working. Thanks & Regards *Adeel**Akbar* On 10/8/2012 1:36 PM, Thierry Templier wrote: Hello, I would want to upgrade Cassandra to version 1.1.5 but I have a problem when trying to start this version: $ ./cassandra -f xss = -ea -javaagent:./../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M -Xmn256M -XX:+HeapDumpOnOutOfMemoryError -Xss180k Segmentation fault Here is the Java version I used: $ java -version java version "1.6.0_24" OpenJDK Runtime Environment (IcedTea6 1.11.4) (6b24-1.11.4-1ubuntu0.10.04.1) OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode) Thanks very much for your help! Thierry
Issue after upgraded Cassandra 1.1.4
Hi, I have successfully upgraded 2 nodes ring Cassandra with latest version. After up-gradation, I found following exceptions. Please help me to resolve this issue. [2012-09-07 19:36:44,232] |TimedOutException() me.prettyprint.hector.api.exceptions.HTimedOutException: TimedOutException() at me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:35) at me.prettyprint.cassandra.service.KeyspaceServiceImpl$11.execute(KeyspaceServiceImpl.java:432) at me.prettyprint.cassandra.service.KeyspaceServiceImpl$11.execute(KeyspaceServiceImpl.java:416) at me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:103) at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258) at me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:131) at me.prettyprint.cassandra.service.KeyspaceServiceImpl.getSuperSlice(KeyspaceServiceImpl.java:436) at me.prettyprint.cassandra.model.thrift.ThriftSuperSliceQuery$1.doInKeyspace(ThriftSuperSliceQuery.java:61) at me.prettyprint.cassandra.model.thrift.ThriftSuperSliceQuery$1.doInKeyspace(ThriftSuperSliceQuery.java:57) at me.prettyprint.cassandra.model.KeyspaceOperationCallback.doInKeyspaceAndMeasure(KeyspaceOperationCallback.java:20) at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecute(ExecutingKeyspace.java:85) at me.prettyprint.cassandra.model.thrift.ThriftSuperSliceQuery.execute(ThriftSuperSliceQuery.java:56) at pringcore.database.dao.cassandra.PringerRecommendationDAOImp.getRecommendedToPringers(PringerRecommendationDAOImp.java:64) at pringcore.database.dao.cassandra.PringerRecommendationDAOImp.getRecommendations(PringerRecommendationDAOImp.java:75) at pringcore.database.Pringer.getRecommendations(Pringer.java:2109) at pringcore.database.Follower.getFollowerMessageOnStatus(Follower.java:459) at pringcore.commands.CoreFollowCommand.process(CoreFollowCommand.java:64) at 
smsprocessor.commands.FollowCommand.process(FollowCommand.java:34) at pringcore.processor.CoreCommandProcessor.process(CoreCommandProcessor.java:263) at smsprocessor.processor.CommandProcessor.run(CommandProcessor.java:164) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:722) Caused by: TimedOutException() at org.apache.cassandra.thrift.Cassandra$get_slice_result.read(Cassandra.java:7212) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) at org.apache.cassandra.thrift.Cassandra$Client.recv_get_slice(Cassandra.java:543) at org.apache.cassandra.thrift.Cassandra$Client.get_slice(Cassandra.java:527) at me.prettyprint.cassandra.service.KeyspaceServiceImpl$11.execute(KeyspaceServiceImpl.java:421) ... 24 more -- Thanks & Regards *Adeel**Akbar*
Cassandra implement in two different data-center
Dear All, I am going to implement Apache Cassandra in two different data-center with 2 nodes in each ring. I also need to set replica 2 factor in same data center. Over the data center data should be replicates between both data center rings. Please help me or provide any document which help to implement this model. -- Thanks & Regards *Adeel**Akbar*
Re: Cassandra upgrade 1.1.4 issue
I have upgraded jdk from 1.6_u14 to 1.7_u06 and now its working. Thanks & Regards *Adeel**Akbar* On 8/24/2012 8:50 PM, Eric Evans wrote: On Fri, Aug 24, 2012 at 5:00 AM, Adeel Akbar wrote: I have upgraded cassandra on ring and one node successfully upgraded first node. On second node I got following error. Please help me to resolve this issue. [root@X]# /u/cassandra/apache-cassandra-1.1.4/bin/cassandra -f xss = -ea -javaagent:/u/cassandra/apache-cassandra-1.1.4/bin/../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms502M -Xmx502M -Xmn100M -XX:+HeapDumpOnOutOfMemoryError -Xss128k Segmentation fault Segmentation faults can be caused by software bugs, or by faulty hardware. If it is a software bug, it's very unlikely to be a Cassandra bug (there should be nothing we could do to cause a JVM segfault). I would take a close look at what is different between these two hosts, starting with the version of JVM. If you have a core dump, that might provide some insight (and if you don't, it wouldn't hurt to get one). Cheers,
Cassandra upgrade 1.1.4 issue
Hi, I have upgraded cassandra on ring and one node successfully upgraded first node. On second node I got following error. Please help me to resolve this issue. [root@X]# /u/cassandra/apache-cassandra-1.1.4/bin/cassandra -f xss = -ea -javaagent:/u/cassandra/apache-cassandra-1.1.4/bin/../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms502M -Xmx502M -Xmn100M -XX:+HeapDumpOnOutOfMemoryError -Xss128k Segmentation fault -- Thanks & Regards *Adeel**Akbar*
Re: Cassandra 1.1.4 RPM required
Dear Aaron, Its required username and password which I have not. Can yo share direct link? Thanks & Regards *Adeel**Akbar* On 8/23/2012 3:02 PM, aaron morton wrote: See step 1 here http://wiki.apache.org/cassandra/GettingStarted Cheers - Aaron Morton Freelance Developer @aaronmorton http://www.thelastpickle.com On 23/08/2012, at 7:40 PM, Adeel Akbar <mailto:adeel.ak...@panasiangroup.com>> wrote: Hi, I would like to install Apache Cassandra 1.1.4 from RPM. Please share a link to download rpm for CentOS (x86_64) and (i386). -- Thanks & Regards *Adeel**Akbar*
Cassandra 1.1.4 RPM required
Hi, I would like to install Apache Cassandra 1.1.4 from RPM. Please share a link to download rpm for CentOS (x86_64) and (i386). -- Thanks & Regards *Adeel**Akbar*
Re: Connection issue in Cassandra
I used Cassandra 0.8.1 and pycasa 0.2. If I upgrade pycasa, then it have compatibility issue. please suggest Thanks & Regards *Adeel**Akbar* On 7/25/2012 10:13 PM, Tyler Hobbs wrote: That's a pretty old version of pycassa; it was release before 0.7.0 came out. I suggest upgrading. It's possible this was caused by an old bug, but in general, this indicates that you have more threads trying to use the ConnectionPool concurrently than there are connections. On Wed, Jul 25, 2012 at 3:30 AM, Adeel Akbar mailto:adeel.ak...@panasiangroup.com>> wrote: Hi, I have created 2 node cluster and use with application. My application unable to connect with database. Please find below logs; NoConnectionAvailable at / ConnectionPool limit of size 2 overflow 2 reached, unable to obtain connection after 30 seconds Request Method: GET Request URL:http://172.16.100.131/ Django Version: 1.4 Exception Type: NoConnectionAvailable Exception Value: ConnectionPool limit of size 2 overflow 2 reached, unable to obtain connection after 30 seconds Exception Location: /usr/local/lib/python2.6/site-packages/pycassa-1.0.8-py2.6.egg/pycassa/pool.py in get, line 738 Python Executable: /usr/local/bin/python Python Version: 2.6.4 Python Path: ['/var/www/bs_ping', '/usr/local/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg', '/usr/local/lib/python2.6/site-packages/amqplib-0.6.1-py2.6.egg', '/usr/local/lib/python2.6/site-packages/BeautifulSoup-3.1.0.1-py2.6.egg', '/usr/local/lib/python2.6/site-packages/python_dateutil-1.4.1-py2.6.egg', '/usr/local/lib/python2.6/site-packages/feedparser-4.1-py2.6.egg', '/usr/local/lib/python2.6/site-packages/python_twitter-0.6-py2.6.egg', '/usr/local/lib/python2.6/site-packages/simplejson-2.0.9-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/site-packages/txAMQP-0.3-py2.6.egg', '/usr/local/lib/python2.6/site-packages/Twisted-8.2.0-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/site-packages/zope.interface-3.5.2-py2.6-linux-i686.egg', 
'/usr/local/lib/python2.6/site-packages/UnicodeUtils-0.3.2-py2.6.egg', '/usr/local/lib/python2.6/site-packages/pytz-2009p-py2.6.egg', '/usr/local/lib/python2.6/site-packages/ScriptUtils-0.5.5-py2.6.egg', '/usr/local/lib/python2.6/site-packages/MySQL_python-1.2.3c1-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/site-packages/python_memcached-1.44-py2.6.egg', '/usr/local/lib/python2.6/site-packages/coverage-3.2b1-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/site-packages/flup-1.0.3.dev_20091027-py2.6.egg', '/usr/local/lib/python2.6/site-packages/oauth-1.0.1-py2.6.egg', '/usr/local/lib/python2.6/site-packages/pyOpenSSL-0.10-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/site-packages/pycassa-1.0.8-py2.6.egg', '/usr/local/lib/python2.6/site-packages/wadofstuff_django_serializers-1.1.0-py2.6.egg', '/usr/local/lib/python2.6/site-packages/jsonpickle-0.4.0-py2.6.egg', '/usr/local/lib/python2.6/site-packages/django_compressor-1.1.2-py2.6.egg', '/usr/local/lib/python2.6/site-packages/django_appconf-0.5-py2.6.egg', '/usr/local/lib/python26.zip', '/usr/local/lib/python2.6', '/usr/local/lib/python2.6/plat-linux2', '/usr/local/lib/python2.6/lib-tk', '/usr/local/lib/python2.6/lib-old', '/usr/local/lib/python2.6/lib-dynload', '/usr/local/lib/python2.6/site-packages', '/usr/local/lib/python2.6/site-packages/PIL', '/var/www/bs_ping/', '/var/www'] Server time:Wed, 25 Jul 2012 13:17:33 +0500 -- Thanks & Regards *Adeel**Akbar* -- Tyler Hobbs DataStax <http://datastax.com/>
Connection issue in Cassandra
Hi, I have created 2 node cluster and use with application. My application unable to connect with database. Please find below logs; NoConnectionAvailable at / ConnectionPool limit of size 2 overflow 2 reached, unable to obtain connection after 30 seconds Request Method: GET Request URL:http://172.16.100.131/ Django Version: 1.4 Exception Type: NoConnectionAvailable Exception Value: ConnectionPool limit of size 2 overflow 2 reached, unable to obtain connection after 30 seconds Exception Location: /usr/local/lib/python2.6/site-packages/pycassa-1.0.8-py2.6.egg/pycassa/pool.py in get, line 738 Python Executable: /usr/local/bin/python Python Version: 2.6.4 Python Path: ['/var/www/bs_ping', '/usr/local/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg', '/usr/local/lib/python2.6/site-packages/amqplib-0.6.1-py2.6.egg', '/usr/local/lib/python2.6/site-packages/BeautifulSoup-3.1.0.1-py2.6.egg', '/usr/local/lib/python2.6/site-packages/python_dateutil-1.4.1-py2.6.egg', '/usr/local/lib/python2.6/site-packages/feedparser-4.1-py2.6.egg', '/usr/local/lib/python2.6/site-packages/python_twitter-0.6-py2.6.egg', '/usr/local/lib/python2.6/site-packages/simplejson-2.0.9-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/site-packages/txAMQP-0.3-py2.6.egg', '/usr/local/lib/python2.6/site-packages/Twisted-8.2.0-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/site-packages/zope.interface-3.5.2-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/site-packages/UnicodeUtils-0.3.2-py2.6.egg', '/usr/local/lib/python2.6/site-packages/pytz-2009p-py2.6.egg', '/usr/local/lib/python2.6/site-packages/ScriptUtils-0.5.5-py2.6.egg', '/usr/local/lib/python2.6/site-packages/MySQL_python-1.2.3c1-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/site-packages/python_memcached-1.44-py2.6.egg', '/usr/local/lib/python2.6/site-packages/coverage-3.2b1-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/site-packages/flup-1.0.3.dev_20091027-py2.6.egg', '/usr/local/lib/python2.6/site-packages/oauth-1.0.1-py2.6.egg', 
'/usr/local/lib/python2.6/site-packages/pyOpenSSL-0.10-py2.6-linux-i686.egg', '/usr/local/lib/python2.6/site-packages/pycassa-1.0.8-py2.6.egg', '/usr/local/lib/python2.6/site-packages/wadofstuff_django_serializers-1.1.0-py2.6.egg', '/usr/local/lib/python2.6/site-packages/jsonpickle-0.4.0-py2.6.egg', '/usr/local/lib/python2.6/site-packages/django_compressor-1.1.2-py2.6.egg', '/usr/local/lib/python2.6/site-packages/django_appconf-0.5-py2.6.egg', '/usr/local/lib/python26.zip', '/usr/local/lib/python2.6', '/usr/local/lib/python2.6/plat-linux2', '/usr/local/lib/python2.6/lib-tk', '/usr/local/lib/python2.6/lib-old', '/usr/local/lib/python2.6/lib-dynload', '/usr/local/lib/python2.6/site-packages', '/usr/local/lib/python2.6/site-packages/PIL', '/var/www/bs_ping/', '/var/www'] Server time:Wed, 25 Jul 2012 13:17:33 +0500 -- Thanks & Regards *Adeel**Akbar*
Snapshot issue in Cassandra 0.8.1
Hi, I have created snapshot with following command; #./nodetool -h localhost snapshot cassandra_01_bkup but the problem is, the snapshot is created on snapshot folder with different name (like 1342269988711) and I have no idea that if I used this command in script then how I gzip snapshot with script. Please help me to resolve this issue. -- Thanks & Regards *Adeel**Akbar*
snapshot issue
Hi, I am trying to taking snapshot of my data but faced following error. Please help me to resolve this issue. [root@cassandra1 bin]# ./nodetool -h localhost snapshot 20120711 Exception in thread "main" java.io.IOError: java.io.IOException: Cannot run program "ln": java.io.IOException: error=12, Cannot allocate memory at org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:1660) at org.apache.cassandra.db.ColumnFamilyStore.snapshot(ColumnFamilyStore.java:1686) at org.apache.cassandra.db.Table.snapshot(Table.java:198) at org.apache.cassandra.service.StorageService.takeSnapshot(StorageService.java:1393) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:111) at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:45) at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:226) at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:251) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:857) at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:795) at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1450) at javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:90) at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1285) at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1383) at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:807) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:322) at sun.rmi.transport.Transport$1.run(Transport.java:177) at java.security.AccessController.doPrivileged(Native Method) at sun.rmi.transport.Transport.serviceCall(Transport.java:173) at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:553) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:808) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:667) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:636) Caused by: java.io.IOException: Cannot run program "ln": java.io.IOException: error=12, Cannot allocate memory at java.lang.ProcessBuilder.start(ProcessBuilder.java:475) at org.apache.cassandra.utils.CLibrary.createHardLinkWithExec(CLibrary.java:181) at org.apache.cassandra.utils.CLibrary.createHardLink(CLibrary.java:147) at org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:730) at org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:1653) ... 33 more Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory at java.lang.UNIXProcess.(UNIXProcess.java:164) at java.lang.ProcessImpl.start(ProcessImpl.java:81) at java.lang.ProcessBuilder.start(ProcessBuilder.java:468) ... 37 more -- Thanks & Regards *Adeel**Akbar*
Auto backup script
Hi, I have planned to backup of my cassandra 3 node ring database for security. Please let me know that what method i used for backup and if anyone have script, please share with me. -- Thanks & Regards *Adeel**Akbar*
Re: upgrade issue
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.constructJavaBean2ndStep(Constructor.java:240) ... 11 more null; Can't construct a java object for tag:yaml.org,2002:org.apache.cassandra.config.Config; exception=Cannot create property=commitlog_rotation_threshold_in_mb for JavaBean=org.apache.cassandra.config.Config@4dd36dfe; Unable to find property 'commitlog_rotation_threshold_in_mb' on class: org.apache.cassandra.config.Config Invalid yaml; unable to start server. See log for stacktrace. Thanks & Regards Adeel Akbar On 6/29/2012 6:16 PM, Viktor Jevdokimov wrote: Replace tabs with spaces in Cassandra.yaml Best regards / Pagarbiai Viktor Jevdokimov Senior Developer Email: viktor.jevdoki...@adform.com Phone: +370 5 212 3063, Fax +370 5 261 0453 J. Jasinskio 16C, LT-01112 Vilnius, Lithuania Follow us on Twitter: @adforminsider What is Adform: watch this short video Disclaimer: The information contained in this message and attachments is intended solely for the attention and use of the named addressee and may be confidential. If you are not the intended recipient, you are reminded that the information remains the property of the sender. You must not use, disclose, distribute, copy, print or rely on this e-mail. If you have received this message in error, please contact the sender immediately and irrevocably delete this message and any copies. 
From: Adeel Akbar [mailto:adeel.ak...@panasiangroup.com]
Sent: Friday, June 29, 2012 12:53
To: user@cassandra.apache.org
Subject: upgrade issue

Hi, I have upgraded Cassandra from 0.8.6 to 1.0.10 and found the following errors once I started the service;

INFO 05:11:50,948 Logging initialized
INFO 05:11:50,953 JVM vendor/version: OpenJDK 64-Bit Server VM/1.6.0_24
INFO 05:11:50,954 Heap size: 511705088/511705088
INFO 05:11:50,955 Classpath: /opt/apache-cassandra-1.0.10/bin/../conf:/opt/apache-cassandra-1.0.10/bin/../build/classes/main:/opt/apache-cassandra-1.0.10/bin/../build/classes/thrift:/opt/apache-cassandra-1.0.10/bin/../lib/antlr-3.2.jar:/opt/apache-cassandra-1.0.10/bin/../lib/apache-cassandra-1.0.10.jar:/opt/apache-cassandra-1.0.10/bin/../lib/apache-cassandra-clientutil-1.0.10.jar:/opt/apache-cassandra-1.0.10/bin/../lib/apache-cassandra-thrift-1.0.10.jar:/opt/apache-cassandra-1.0.10/bin/../lib/avro-1.4.0-fixes.jar:/opt/apache-cassandra-1.0.10/bin/../lib/avro-1.4.0-sources-fixes.jar:/opt/apache-cassandra-1.0.10/bin/../lib/commons-cli-1.1.jar:/opt/apache-cassandra-1.0.10/bin/../lib/commons-codec-1.2.jar:/opt/apache-cassandra-1.0.10/bin/../lib/commons-lang-2.4.jar:/opt/apache-cassandra-1.0.10/bin/../lib/compress-lzf-0.8.4.jar:/opt/apache-cassandra-1.0.10/bin/../lib/concurrentlinkedhashmap-lru-1.2.jar:/opt/apache-cassandra-1.0.10/bin/../lib/guava-r08.jar:/opt/apache-cassandra-1.0.10/bin/../lib/high-scale-lib-1.1.2.jar:/opt/apache-cassandra-1.0.10/bin/../lib/jackson-core-asl-1.4.0.jar:/opt/apache-cassandra-1.0.10/bin/../lib/jackson-mapper-asl-1.4.0.jar:/opt/apache-cassandra-1.0.10/bin/../lib/jamm-0.2.5.jar:/opt/apache-cassandra-1.0.10/bin/../lib/jline-0.9.94.jar:/opt/apache-cassandra-1.0.10/bin/../lib/json-simple-1.1.jar:/opt/apache-cassandra-1.0.10/bin/../lib/libthrift-0.6.jar:/opt/apache-cassandra-1.0.10/bin/../lib/log4j-1.2.16.jar:/opt/apache-cassandra-1.0.10/bin/../lib/servlet-api-2.5-20081211.jar:/opt/apache-cassandra-1.0.10/bin/../lib/slf4j-api-1.6.1.jar:/opt/apache-cassandra-1.0.10/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/apache-cassandra-1.0.10/bin/../lib/snakeyaml-1.6.jar:/opt/apache-cassandra-1.0.10/bin/../lib/snappy-java-1.0.4.1.jar
INFO 05:11:50,957 JNA not found. Native methods will be disabled.
INFO
upgrade issue
Hi, I have upgraded Cassandra from 0.8.6 to 1.0.10 and found the following errors once I started the service;

INFO 05:11:50,948 Logging initialized
INFO 05:11:50,953 JVM vendor/version: OpenJDK 64-Bit Server VM/1.6.0_24
INFO 05:11:50,954 Heap size: 511705088/511705088
INFO 05:11:50,955 Classpath: /opt/apache-cassandra-1.0.10/bin/../conf:/opt/apache-cassandra-1.0.10/bin/../build/classes/main:/opt/apache-cassandra-1.0.10/bin/../build/classes/thrift:/opt/apache-cassandra-1.0.10/bin/../lib/antlr-3.2.jar:/opt/apache-cassandra-1.0.10/bin/../lib/apache-cassandra-1.0.10.jar:/opt/apache-cassandra-1.0.10/bin/../lib/apache-cassandra-clientutil-1.0.10.jar:/opt/apache-cassandra-1.0.10/bin/../lib/apache-cassandra-thrift-1.0.10.jar:/opt/apache-cassandra-1.0.10/bin/../lib/avro-1.4.0-fixes.jar:/opt/apache-cassandra-1.0.10/bin/../lib/avro-1.4.0-sources-fixes.jar:/opt/apache-cassandra-1.0.10/bin/../lib/commons-cli-1.1.jar:/opt/apache-cassandra-1.0.10/bin/../lib/commons-codec-1.2.jar:/opt/apache-cassandra-1.0.10/bin/../lib/commons-lang-2.4.jar:/opt/apache-cassandra-1.0.10/bin/../lib/compress-lzf-0.8.4.jar:/opt/apache-cassandra-1.0.10/bin/../lib/concurrentlinkedhashmap-lru-1.2.jar:/opt/apache-cassandra-1.0.10/bin/../lib/guava-r08.jar:/opt/apache-cassandra-1.0.10/bin/../lib/high-scale-lib-1.1.2.jar:/opt/apache-cassandra-1.0.10/bin/../lib/jackson-core-asl-1.4.0.jar:/opt/apache-cassandra-1.0.10/bin/../lib/jackson-mapper-asl-1.4.0.jar:/opt/apache-cassandra-1.0.10/bin/../lib/jamm-0.2.5.jar:/opt/apache-cassandra-1.0.10/bin/../lib/jline-0.9.94.jar:/opt/apache-cassandra-1.0.10/bin/../lib/json-simple-1.1.jar:/opt/apache-cassandra-1.0.10/bin/../lib/libthrift-0.6.jar:/opt/apache-cassandra-1.0.10/bin/../lib/log4j-1.2.16.jar:/opt/apache-cassandra-1.0.10/bin/../lib/servlet-api-2.5-20081211.jar:/opt/apache-cassandra-1.0.10/bin/../lib/slf4j-api-1.6.1.jar:/opt/apache-cassandra-1.0.10/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/apache-cassandra-1.0.10/bin/../lib/snakeyaml-1.6.jar:/opt/apache-cassandra-1.0.10/bin/../lib/snappy-java-1.0.4.1.jar
INFO 05:11:50,957 JNA not found. Native methods will be disabled.
INFO 05:11:50,966 Loading settings from file:/opt/apache-cassandra-1.0.10/conf/cassandra.yaml
ERROR 05:11:51,057 Fatal configuration error
error while scanning for the next token
found character '\t' that cannot start any token
 in "<reader>", line 102, column 1:
    - seeds: "172.16.100.244,172. ...
    ^
	at org.yaml.snakeyaml.scanner.ScannerImpl.fetchMoreTokens(ScannerImpl.java:357)
	at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:180)
	at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingValue.produce(ParserImpl.java:591)
	at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:162)
	at org.yaml.snakeyaml.parser.ParserImpl.checkEvent(ParserImpl.java:147)
	at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:131)
	at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:229)
	at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:159)
	at org.yaml.snakeyaml.composer.Composer.composeSequenceNode(Composer.java:203)
	at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:157)
	at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:229)
	at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:159)
	at org.yaml.snakeyaml.composer.Composer.composeDocument(Composer.java:121)
	at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:104)
	at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:117)
	at org.yaml.snakeyaml.Loader.load(Loader.java:52)
	at org.yaml.snakeyaml.Yaml.load(Yaml.java:166)
	at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:131)
	at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:131)
	at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:356)
	at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:107)
while scanning for the next token; found character '\t' that cannot start any token
Invalid yaml; unable to start server. See log for stacktrace.

-- Thanks & Regards, Adeel Akbar
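The scanner error pinpoints the problem: a tab character at line 102, column 1, and YAML forbids tab indentation. This is exactly what Viktor's "replace tabs with spaces" advice addresses. A quick way to find and fix the offending characters, shown against a stand-in file (point the same commands at conf/cassandra.yaml):

```shell
# Stand-in config containing a tab-indented line, like the broken yaml.
printf 'seed_provider:\n\t- seeds: "172.16.100.244"\n' > cassandra.yaml.demo

# Locate every line containing a tab (prints line number and content).
grep -n "$(printf '\t')" cassandra.yaml.demo

# Replace each tab with four spaces, keeping a .bak copy of the original.
sed -i.bak "s/$(printf '\t')/    /g" cassandra.yaml.demo

# Confirm the file is tab-free before restarting the node.
if ! grep -q "$(printf '\t')" cassandra.yaml.demo; then
    echo "no tabs remain"
fi
```

The `-i.bak` form works with both GNU and BSD sed, so the same one-liner is usable on Linux and macOS.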
RE: Cassandra 1 node crashed in ring
Hi, I have done the same and the ring now shows three nodes. How do I remove the crashed node, and what happens to its data?

root@zerg:~/apache-cassandra-0.8.1/bin# ./nodetool -h XXX.XX.XXX.XX ring
Address      DC  Rack  Status  State   Load      Owns    Token
                                                         147906224866113468886003862620136792702
XX.XX.XX.XX  16  100   Up      Normal  17.37 MB  14.93%  3159755813495848170708142250209621026
XX.XX.XX.XX  16  100   Down    Normal  ?         23.56%  43237339313998282086051322460691860905
XX.XX.XX.XX  16  100   Up      Normal  15.21 KB  61.52%  147906224866113468886003862620136792702

Thanks & Regards, Adeel Akbar

-----Original Message-----
From: rohit bhatia [mailto:rohit2...@gmail.com]
Sent: Thursday, June 07, 2012 12:28 PM
To: user@cassandra.apache.org
Subject: Re: Cassandra 1 node crashed in ring

Restart Cassandra on the new node with autobootstrap set to true, the existing node in the cluster as a seed, and an appropriate token. You should not need to run nodetool repair, as autobootstrap takes care of it.

On Thu, Jun 7, 2012 at 12:22 PM, Adeel Akbar wrote:
> Hi,
>
> I am running 2 nodes of Cassandra 0.8.1 in a ring with replication factor 2.
> Last night one of the Cassandra servers crashed and now we are running
> on a single node. Please help me add a new node to the ring so that it
> gets all the updates/data that were on the crashed server.
>
> Thanks & Regards
>
> Adeel Akbar
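For the 0.8 ring above, the Down node can be dropped with `nodetool removetoken`, run against any live node; the dead node's ranges are then re-served by the surviving replicas, and its on-disk data is simply abandoned. A dry-run sketch (the token is the Down node's from the ring output; the `status`/`force` subcommands may depend on your exact 0.8.x build):

```shell
# Dry-run sketch: with DRY_RUN=1 commands are only printed.
DRY_RUN=1
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }

# Token of the node shown as Down in the ring output above.
DEAD_TOKEN=43237339313998282086051322460691860905

# Ask a live node to take over the dead node's range.
run nodetool -h localhost removetoken "$DEAD_TOKEN"

# If the removal stalls, check progress or force completion.
run nodetool -h localhost removetoken status
# run nodetool -h localhost removetoken force

# With RF=2 and one replica lost, a repair afterwards brings the
# surviving nodes back to full replication.
run nodetool -h localhost repair
```

Once the token is removed, `nodetool ring` should show only the live nodes, and the crashed machine can be wiped or rebuilt and bootstrapped fresh.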
Cassandra 1 node crashed in ring
Hi, I am running 2 nodes of Cassandra 0.8.1 in a ring with replication factor 2. Last night one of the Cassandra servers crashed and now we are running on a single node. Please help me with how to add a new node to the ring so that it gets all the updates/data that were on the crashed server. Thanks & Regards, Adeel Akbar
Cassandra Upgrade from 0.8.1
Dear guys, thank you so much for your reply. Currently I have two Cassandra nodes running in a ring, installed at /root/apache-cassandra-0.8.1. My questions are:

1. How do we upgrade step by step through versions (0.8.1 to 0.8.5, then 0.8.5 to 1.0.0, then to 1.1.0)?
2. Once I take a snapshot, does it contain the full data or only one node's data?
3. Once I download and untar apache-cassandra-0.8.5-bin.tar.gz, what do I do next? Do I move only a few folders from the previous version's directory to the new one, or all directories and files?
4. After moving the data, is any command required?

Thanks & Regards, Adeel Akbar
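A per-node dry-run sketch of one hop of the chain (0.8.1 to 0.8.5, the same shape repeats for the later hops). It touches questions 1 to 4: a snapshot covers only that node's local data, the data directories normally live outside the install directory and so do not move, and the new tarball mainly needs the old configuration carried over. The install paths follow the mail; everything else is an assumption to adapt:

```shell
# Dry-run sketch: with DRY_RUN=1 commands are only printed.
DRY_RUN=1
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }

OLD=/root/apache-cassandra-0.8.1
NEW=/root/apache-cassandra-0.8.5   # first hop; repeat for 1.0.x, 1.1.x

# 1. Flush memtables and stop accepting writes, then take a local
#    safety snapshot (per-node only, not the whole ring).
run nodetool -h localhost drain
run nodetool -h localhost snapshot

# 2. Stop the old daemon (pattern matches the Cassandra main class).
run pkill -f CassandraDaemon

# 3. Unpack the next release alongside the old one.
run tar xzf apache-cassandra-0.8.5-bin.tar.gz -C /root

# 4. Carry the old settings forward; re-check them against the new
#    defaults for renamed or removed options before starting.
run cp "$OLD/conf/cassandra.yaml" "$NEW/conf/"

# 5. Start the new version. If cassandra.yaml points at the same data
#    and commitlog directories (e.g. /var/lib/cassandra), no data move
#    and no further command is needed; the node upgrades its SSTables
#    as it runs.
run "$NEW/bin/cassandra"
```

Done one node at a time, the ring stays available throughout; verify each node rejoins with `nodetool ring` before upgrading the next.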
RPM of Cassandra 1.1.0
Hi, I need to install Apache Cassandra 1.1.0 from an RPM. Please provide me with a link to download the RPM for CentOS. Thanks & Regards, Adeel Akbar
Cassandra upgrade from 0.8.1 to 1.1.0
Hi, I am running a two-node Cassandra ring on apache-cassandra-0.8.1. I would now like to upgrade to the latest stable version, 1.1.0. Please help me with how to perform this upgrade. Thanks & Regards, Adeel Akbar