rows: ParseError - badly formed hexadecimal UUID
> string, given up without retries
>
>
>
> So, how do I import my CSV file and set the columns which do not have a
> UUID to null?
>
>
>
> -Tobias
>
--
<http://www.datastax.com/>
STEFANIA ALBORGHETTI
Software engineer | +852 6114 9265 | stefania.alborghe...@datastax.com
bexec/vendor/lib/python2.7/site-
> packages/cassandra/__init__.pyc'>
> Using connect timeout: 5 seconds
> Using 'utf-8' encoding
> Using ssl: False
> Connected to Test Cluster at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.4 | Native protocol v
On Apr 4, 2017 at 6:12 PM, Boris Babic wrote:
> Thanks Stefania, going from memory I don't think I noticed this on Windows,
> but I haven't got a machine handy to test it on at the moment.
>
> On Apr 4, 2017, at 19:44, Stefania Alborghetti wrote:
>
> I'
eyspace.data (id,varint)
> values(1,-9223372036854775808898989898)
> ;
>
> Had not observed this before on other OSes; is this something to do with the
> way the COPY FROM parser is interpreting varint for values <= -2^63?
>
> Thanks for any input
> Boris
>
>
>
>
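The boundary Boris is asking about can be checked in plain Python: CQL varint is arbitrary-precision, so any parser that coerces values into a signed 64-bit long will fail exactly for values outside [-2^63, 2^63-1]. A minimal sketch (the encoding helper is illustrative, not cqlsh's actual code):

```python
import struct

v = -9223372036854775808898989898  # the literal from the INSERT above

# Packing into a signed 64-bit long fails, since v < -2**63:
try:
    struct.pack('>q', v)
    packed_ok = True
except struct.error:
    packed_ok = False
print('fits in 64-bit long:', packed_ok)  # False

# A varint is variable-length two's-complement, so any width works:
width = v.bit_length() // 8 + 1
data = v.to_bytes(width, 'big', signed=True)
print('round-trips:', int.from_bytes(data, 'big', signed=True) == v)  # True
```

The value needs 93 bits, so it simply cannot be represented as a bigint, which is why a parser that treats varint like bigint would reject it.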
with multiple nodes cluster - it
> doesn't impact performance.
> However, I see that COPY FROM is CPU bound on my machines, so these
> steps should definitely improve performance.
>
>
> The question: what did I do wrong? Maybe some step is missing.
> How to check t
readed using of CQLSSTableWriter?
>> Maybe ready to use libraries like: https://github.com/spotify/hdfs2cass?
>>
>> 3. sstableloader is slow too. Assuming that I have a new, empty C* cluster,
>> how can I improve the upload speed? Maybe disable replication or some other
>
ATH.
>
> You might try "pip install cassandra-driver".
>
> Python: /opt/isv/python27/bin/python
>
> Error: can't decompress data; zlib not available
>
> ---
>
> What am I missing?
> -- Jacob Shadix
>
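The "can't decompress data; zlib not available" error suggests the custom Python at the path shown was built without the zlib module, which cqlsh needs to unpack its bundled driver. A quick check, valid for any CPython build (assumption: the failure is in the interpreter, not in cqlsh itself):

```python
# If this import fails, the interpreter was compiled without the zlib
# development headers present, and cqlsh cannot decompress its bundled driver.
try:
    import zlib
    have_zlib = True
    print('zlib', zlib.ZLIB_VERSION, 'available')
except ImportError:
    have_zlib = False
    print('zlib missing: rebuild Python with the zlib headers installed')

# Round-trip sanity check when zlib is present
if have_zlib:
    payload = b'cassandra-driver' * 10
    assert zlib.decompress(zlib.compress(payload)) == payload
```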
28a,f7ce3ac0-a66e-11e6-b58e-4e29450fd577,SA,2"
> > data.csv
>
>
> Table definitions: the first one is with a counter column, the second with
> an int column
>
>
> CREATE TABLE woc.table_test (
> object_id ascii,
> user_id timeuuid,
> counter_id ascii,
> count counter,
> PRIMARY KEY ((object_id, user_id), counter_id)
> );
>
> DROP TABLE woc.table_test;
>
> CREATE TABLE woc.table_test (
> object_id ascii,
> user_id timeuuid,
> counter_id ascii,
> count int,
> PRIMARY KEY ((object_id, user_id), counter_id)
> );
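Counters differ from regular columns in that they can only be incremented via UPDATE, never set with INSERT, which is what makes loading them from CSV awkward. A hedged sketch (statement built as text only for illustration; real code should use a prepared statement) turning the sample CSV row from earlier in the thread into a counter update against the first table definition:

```python
# Sample CSV row from the thread: object_id,user_id,counter_id,count
row = ['28a', 'f7ce3ac0-a66e-11e6-b58e-4e29450fd577', 'SA', '2']
object_id, user_id, counter_id, delta = row

# Counters are write-by-increment: UPDATE ... SET count = count + delta
stmt = (
    f"UPDATE woc.table_test SET count = count + {int(delta)} "
    f"WHERE object_id = '{object_id}' AND user_id = {user_id} "
    f"AND counter_id = '{counter_id}';"
)
print(stmt)
```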
>
echpyaasa .
>> wrote:
>>
>>> Can some one please tell me how to set TTL using COPY command?
>>
>>
>> It looks like you're using Cassandra 2.0. I don't think COPY supports
>> the TTL option until at least 2.1.
>>
>>
>> --
>> Tyler Hobbs
>> DataStax <http://datastax.com/>
>>
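For versions where COPY has no TTL option, the same effect can be had by attaching USING TTL to each insert. A sketch (table, columns, and TTL value are hypothetical) that turns CSV rows into TTL'd INSERT statements:

```python
import csv
import io

TTL = 86400  # hypothetical one-day TTL, in seconds
data = io.StringIO("1,alice\n2,bob\n")  # stand-in for the real CSV file

stmts = [
    f"INSERT INTO ks.users (id, name) VALUES ({r[0]}, '{r[1]}') USING TTL {TTL};"
    for r in csv.reader(data)
]
print(stmts[0])
```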
>
>
clustr/cassandra
>
> Read the README at the repo for more info.
>
> Kurt Greaves
> k...@instaclustr.com
> www.instaclustr.com
>
tLog.recoverFiles(CommitLog.java:187)
>>> [apache-cassandra-3.9.jar:3.9]
>>> at
>>> org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:167)
>>> [apache-cassandra-3.9.jar:3.9]
>>> at
>>> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:323)
>>> [apache-cassandra-3.9.jar:3.9]
>>> at
>>> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:601)
>>> [apache-cassandra-3.9.jar:3.9]
>>> at
>>> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:730)
>>> [apache-cassandra-3.9.jar:3.9]
>>>
>>>
>>> I then have to do 'sudo rm -rf /var/lib/cassandra/commitlog/*' which
>>> fixes the problem, but then I lose all of my data.
>>>
>>> It looks like it's saying there wasn't enough data to read the field
>>> 'board_id'; any ideas why that would be?
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>
assandra to rule them all ?
>>> >>>>
>>> >>>> What we are trying to achieve is to minimize the moving parts in our
>>> system.
>>> >>>>
>>> >>>> Any response would be really appreciated.
>>> >>>>
>>> >>>>
>>> >>>> Cheers
>>> >>>>
>>> >>>> --
>>> >>>> Welly Tambunan
>>> >>>> Triplelands
>>> >>>>
>>> >>>> http://weltam.wordpress.com <http://weltam.wordpress.com>
>>> >>>> http://www.triplelands.com <http://www.triplelands.com/blog/>
>>> >>>
>>> >>>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Welly Tambunan
>>> >> Triplelands
>>> >>
>>> >> http://weltam.wordpress.com <http://weltam.wordpress.com>
>>> >> http://www.triplelands.com <http://www.triplelands.com/blog/>
>>> >
>>> >
>>> This email is confidential and may be subject to privilege. If you are
>>> not the intended recipient, please do not copy or disclose its content but
>>> contact the sender immediately upon receipt.
>>>
>>
>>
>>
>> --
>> Welly Tambunan
>> Triplelands
>>
>> http://weltam.wordpress.com
>> http://www.triplelands.com <http://www.triplelands.com/blog/>
>>
>
>
>
> --
> Welly Tambunan
> Triplelands
>
> http://weltam.wordpress.com
> http://www.triplelands.com <http://www.triplelands.com/blog/>
>
sandra 2.1.11 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
>
> On Thu, Oct 20, 2016 at 10:33 AM, Stefania Alborghetti <
> stefania.alborghe...@datastax.com> wrote:
>
>> Have you already tried using unset values?
>>
>> http://www.datastax.com/dev/blog/datastax-jav
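The idea behind unset values can be mimicked in plain Python with a sentinel: any parameter left unset is simply dropped from the generated statement, so no null (and no tombstone) is written for that column. A sketch with hypothetical table and column names, not the driver's actual API:

```python
UNSET = object()  # sentinel standing in for the driver's unset marker

def build_insert(table, **cols):
    """Build an INSERT that omits any column whose value is UNSET."""
    bound = {k: v for k, v in cols.items() if v is not UNSET}
    names = ', '.join(bound)
    marks = ', '.join(['%s'] * len(bound))
    return f"INSERT INTO {table} ({names}) VALUES ({marks})", list(bound.values())

stmt, params = build_insert('ks.t', id=1, name='x', email=UNSET)
print(stmt)    # email never appears, so nothing is overwritten for it
print(params)  # [1, 'x']
```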
olumn4)
>> ) WITH COMPACT STORAGE AND
>> bloom_filter_fp_chance=0.01 AND
>> caching='{"keys":"ALL", "rows_per_partition":"NONE"}' AND
>> comment='' AND
>> dclocal_read_repair_chance=0.10 AND
>> gc_grace_seconds=432000 AND
>> read_repair_chance=0.00 AND
>> default_time_to_live=0 AND
>> speculative_retry='NONE' AND
>> memtable_flush_period_in_ms=0 AND
>> compaction={'class': 'SizeTieredCompactionStrategy'} AND
>> compression={'sstable_compression': 'LZ4Compressor'};
>>
>> --
>> Thanks,
>> Lijun Huang
>>
>>
>>
>
>
> --
> Best regards,
> Lijun Huang
>
;>>>>>>>>> '--connect-timeout', cqlsh
>>>>>>>>>>>>>> doesn't seem to have such a parameter.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 2016-03-18 17:29 GMT+08:0
; www.more4fi.de
>>>
>>> Registered office: Solingen | HRB 25917 | Amtsgericht Wuppertal
>>> Management board: Michael Hochgürtel . Mirko Novakovic . Rainer Vehns
>>> Supervisory board: Patric Fedlmeier (chairman) . Klaus Jäger . Jürgen
>>> Schütz
>>>
>>> This e-mail, including any attached files, contains confidential and/or
>>> legally protected information. If you are not the intended recipient or
>>> have received this e-mail in error, please inform the sender immediately
>>> and delete this e-mail and any attached files at once. The unauthorized
>>> copying, use, or opening of attached files, as well as the unauthorized
>>> forwarding of this e-mail, is not permitted
>>>
>>
>>
>> --
>>
>>
>>
>>
>
>
> --
> Matthias Niehoff | IT-Consultant | Agile Software Factory | Consulting
> codecentric AG | Zeppelinstr 2 | 76185 Karlsruhe | Deutschland
> tel: +49 (0) 721.9595-681 | fax: +49 (0) 721.9595-666 | mobil: +49 (0)
> 172.1702676
> www.codecentric.de | blog.codecentric.de | www.meettheexperts.de |
> www.more4fi.de
>
>
Stefania for the informative answer. The next blog was pretty
>> useful as well:
>> http://www.datastax.com/dev/blog/how-we-optimized-cassandra-cqlsh-copy-from
>> . I'll upgrade to 3.0.5 and test with C extensions enabled and report on
>> this thread.
>>
>> On Sa
oint the processes just hang. The parent process was
> consuming 95% system memory when it had processed around 60% data.
>
> I had earlier tried to feed in data from multiple files (Less than 4GB
> each) and it was working as expected.
>
> Is it a valid scenario?
>
> Regar
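Since the thread notes that multiple files under 4GB each worked as expected, one workaround is to split the single huge input into smaller chunks before COPY FROM. A minimal chunking sketch (pure Python; chunk size illustrative):

```python
def chunked(lines, rows_per_chunk):
    """Yield successive lists of at most rows_per_chunk lines."""
    chunk = []
    for line in lines:
        chunk.append(line)
        if len(chunk) == rows_per_chunk:
            yield chunk
            chunk = []
    if chunk:
        yield chunk  # trailing partial chunk

sizes = [len(c) for c in chunked([f"row{i}" for i in range(10)], 4)]
print(sizes)  # [4, 4, 2]
```

In practice each chunk would be written to its own CSV file and loaded separately, bounding the parent process's memory use per run.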
uential read to merge.
>
> If I have something wrong, I'd be glad if you could correct me.
>
>
> Regards,
> Satoshi
>
>
> On Thu, Mar 17, 2016 at 5:19 PM, Stefania Alborghetti <
> stefania.alborghe...@datastax.com> wrote:
>
>> Q1. Readers are created as needed
ns.
>>
>> - Cassandra node: an AWS EC2 instance (t2.medium: 4GB RAM, 2 vCPU)
>> - Cassandra version: 2.2.5
>> - inserted data size: about 100GB
>> - cassandra-env.sh: default
>> - cassandra.yaml
>> - compaction_throughput_mb_per_sec: 8 (or default)
>> - concurre
file. But I'm not sure what it actually means. When is the cache
> invalidated? And what happens after cache invalidation?
>
>
> Regards,
> Satoshi
>
he/cassandra/stress/generate/values/Strings.java.
Changing this line:
chars[i++] = (char) (((v & 127) + 32) & 127);
with this:
chars[i++] = (char) (((v & 127) % 95) + 32);
should work, but I could not avoid the expensive modulo operation. You can
rebuild cassandra-stress with ant stre
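The difference between the two mappings is easy to verify in a few lines (sketched in Python rather than the original Java, same arithmetic): the original expression can wrap into control characters below 32, while the replacement always lands in printable ASCII 32..126.

```python
old = lambda v: ((v & 127) + 32) & 127   # original line in Strings.java
new = lambda v: ((v & 127) % 95) + 32    # proposed replacement

old_range = {old(v) for v in range(256)}
new_range = {new(v) for v in range(256)}
print(min(old_range), max(old_range))  # 0 127 -> includes control characters
print(min(new_range), max(new_range))  # 32 126 -> printable ASCII only
```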
_udf', 'event_client_date'].
>
> load_ravitest.cql:5:13 child process(es) died unexpectedly, aborting
>
> the typo was in the name of event_client_date. It should have been
> client_event_date.
>
>
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at
> org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:789)
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
>
> Cheers
>
> On Mon, Feb 8, 2016 at 5:36 PM, Stefania Alborghetti <
> stefania.albo
at
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:564)
> ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046]
> at com.datastax.bdp.DseModule.main(DseModule.java:74)
> [dse-core-4.8.4.jar:4.8.4]
> INFO [Thread-2] 2016-02-08 16:45:27,629 DseDaemon.ja
;
>>> vs.
>>>
>>> cqlsh:events> select created_at FROM event_by_user_timestamp ;
>>>
>>> created_at
>>> ------
>>> 2016-01-04 15:05:47+
>>>
>>> (1 rows)
>>> cqlsh:events>
>>>
>>> To make things even more complicated, the JSON timestamp is not returned
>>> in UTC. Is there a way to either tell the driver/C* to return the JSON date
>>> in UTC or add the timezone information (much preferred) to the text
>>> representation of the timestamp?
>>>
>>>
>>> Thanks!
>>> Ralf
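One client-side workaround, whatever the server returns: treat the value as UTC and render it with an explicit offset so the text form is unambiguous. A plain Python sketch, using the timestamp from the cqlsh output above:

```python
from datetime import datetime, timezone

# The created_at value shown by cqlsh, interpreted as UTC
ts = datetime(2016, 1, 4, 15, 5, 47, tzinfo=timezone.utc)
text = ts.strftime('%Y-%m-%d %H:%M:%S%z')
print(text)  # 2016-01-04 15:05:47+0000
```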
>>
>> --
>> Alexandre Dutra
>> Driver & Tools Engineer @ DataStax
>>
>>
>> --
> Alexandre Dutra
> Driver & Tools Engineer @ DataStax
>
>>>>>
>>>>>> On Tue, Feb 9, 2016 at 12:14 AM, Ted Yu wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>> I am trying to setup a cluster with DSE 4.8.4
>>>>>>>
>>>>>>> I added the following in resources/cassandra/conf/cassandra.yaml :
>>>>>>>
>>>>>>> cluster_name: 'cass'
>>>>>>>
>>>>>>> which resulted in:
>>>>>>>
>>>>>>> http://pastebin.com/27adxKTM
>>>>>>>
>>>>>>> This seems to be resolved by CASSANDRA-8072
>>>>>>>
>>>>>>> My question is: is there a workaround?
>>>>>>> If not, when can I expect 2.1.13 release ?
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
"COPY TO/FROM" when dataset > 30 000 rows.
> Also maybe the hardware is too weak (AWS EC2 t2.small), but even on
> t2.large I had a timeout on COPY, at just 300 000 rows.
>
> My configuration is the default config from the deb package. Maybe somebody
> knows what I should tweak th