Re: Riak KV memory backend didn't discard oldest objects when max memory was reached

2016-09-12 Thread 周磊
Dear friends,

Any idea about this?

Best Regards & Thanks

John

2016-09-01 11:58 GMT+08:00 周磊 :

> Dear Friends,
> I'm sorry to disturb you; could you help with this?
>
> ENV
>
> OS: CentOS 6.5
> Riak KV: 2.1.4
> Installed by:
>
> wget http://s3.amazonaws.com/downloads.basho.com/riak/2.1/2.1.4/rhel/6/riak-2.1.4-1.el6.x86_64.rpm
> sudo rpm -Uvh riak-2.1.4-1.el6.x86_64.rpm
>
> My Setting:
>
> storage_backend = multi
> multi_backend.bitcask_multi.storage_backend = bitcask
> multi_backend.bitcask_multi.bitcask.data_root = /var/lib/riak/bitcask_mult
> multi_backend.memory_multi.storage_backend = memory
> multi_backend.memory_multi.memory_backend.max_memory_per_vnode = 2MB
> multi_backend.default = memory_multi
>
> Document
>
> http://docs.basho.com/riak/kv/2.1.4/setup/planning/backend/memory/
> 
>
> When the threshold value that you set has been met in a particular vnode, Riak
> will begin discarding objects, beginning with the *oldest* object and
> proceeding until memory usage returns below the allowable threshold.
> You can configure maximum memory using the memory_backend.max_memory_per_vnode
> setting. You can specify max_memory_per_vnode however you’d like, using
> kilobytes, megabytes, or even gigabytes.
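
The documented policy is oldest-first eviction per vnode. As a way to reason about what should happen in the steps below, here is a minimal Python model of that policy (an illustration only, not Basho's actual ETS-based implementation):

```python
from collections import OrderedDict

def put(store, key, value, max_bytes):
    """Insert key/value, then discard oldest-inserted entries until
    total size fits under max_bytes (the documented eviction policy)."""
    if key in store:
        del store[key]             # re-writing a key makes it the newest entry
    store[key] = value
    while sum(len(v) for v in store.values()) > max_bytes:
        store.popitem(last=False)  # drop the oldest entry first

store = OrderedDict()
for i in range(6):
    put(store, f"{i}.ts", b"x" * 512, max_bytes=2048)
    put(store, "index.m3u8", b"y" * 100, max_bytes=2048)

# index.m3u8 is re-written every iteration, so it is always among the
# newest entries and should survive; only the oldest .ts segments go.
print(sorted(store))  # → ['3.ts', '4.ts', '5.ts', 'index.m3u8']
```

Under this model the repeatedly re-written index.m3u8 key should never be the eviction victim, which is why the behavior reported below looks wrong.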
>
> Steps
>
> 1. Use a shell script to loop POSTs of ~512 KB files
>    (0.ts, index.m3u8, 1.ts, index.m3u8, 2.ts, index.m3u8, ...)
> 2. When the 2 MB max memory is reached, some objects are discarded, but not
>    beginning with the oldest object (index.m3u8 was discarded)
> curl.sh.txt 
>
> http://10.20.122.45:8098/buckets/test/keys?keys=true
>
> {"keys":["217.ts","212.ts","203.ts","210.ts","173.ts","166.ts","200.ts",
> "199.ts","192.ts","215.ts","129.ts","124.ts","208.ts","179.ts","198.ts",
> "196.ts","185.ts","97.ts","219.ts","114.ts","201.ts","190.ts","165.ts",
> "223.ts","220.ts","214.ts","222.ts","211.ts","209.ts","213.ts","183.ts",
> "143.ts","205.ts","171.ts","139.ts","66.ts","142.ts","122.ts","216.ts",
> "170.ts","162.ts","194.ts","119.ts","105.ts","178.ts","160.ts","158.ts",
> "221.ts","193.ts","187.ts","197.ts","159.ts","155.ts","218.ts","207.ts",
> "184.ts","188.ts","181.ts","176.ts","206.ts","202.ts","195.ts"]}
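
A quick sanity check against the returned key set (abridged here; the full list is in the message) makes the complaint concrete: index.m3u8, which was re-written on every iteration and so was among the newest objects, is missing, while much older segments such as 66.ts survive:

```python
import json

# abridged from the key listing above
resp = '{"keys":["217.ts","212.ts","66.ts","97.ts","105.ts","195.ts"]}'
keys = json.loads(resp)["keys"]

print("index.m3u8" in keys)  # → False: the most recently written key was evicted
print("66.ts" in keys)       # → True: a far older segment survived
```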
>
>
>
> Best Regards & Thanks
>
> John
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 2.1.3 - Multiple indexes created by Solr for the same Riak object

2016-09-12 Thread Fred Dushin
Hi Weixi,

You might have to describe your use case in more detail. Solr indices are
independent of Riak objects. They are, instead, associated with Riak buckets
(or bucket types), and an object (key/value) can only be associated with one
bucket. Therefore, a Riak object can only be associated with one Solr index.
A Solr index can be associated with multiple buckets, so in general the
mapping from Riak objects to Solr indices is many-to-one.
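
That constraint can be restated as a composition: each object lives in exactly one bucket, and each bucket (or bucket type) carries at most one search_index property, so object -> bucket -> index is a many-to-one function. A toy sketch (bucket and index names are hypothetical):

```python
# one search index per bucket; several buckets may share the same index
search_index = {"cat_pics": "blog_posts", "dog_pics": "blog_posts"}

def index_for(bucket):
    # an object belongs to exactly one bucket, hence maps to exactly one index
    return search_index[bucket]

print(index_for("cat_pics"))  # → blog_posts
print(index_for("dog_pics"))  # → blog_posts (shared index: many-to-one)
```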

Is it possible that you changed the index associated with a bucket at some 
point in the bucket or bucket type lifecycle?

-Fred

> On Sep 10, 2016, at 9:27 PM, Weixi Yen  wrote:
> 
> Sort of a unique case, my app was under heavy stress and one of my riak nodes 
> got backed up (other 4 nodes were fine).
> 
> I think this caused Riak.update to create an extra index in Solr for the same 
> object when users began running .update on that object.
> 
> I have basically 2 questions:
> 
> 1) Is what I'm describing something that is possible?
> 
> 2) Is there a way to tell Solr to re-index one single item and get rid of all 
> other indexes of that item?
> 
> Considering RiakTS to resolve these issues long term, but have to stick with 
> Solr for at least the next 3 months, would appreciate any insight into how to 
> solve this duplicate index problem.
> 
> Thanks,
> 
> Weixi
> 



Re: a weird error while POSTing a request to the server to store an object

2016-09-12 Thread Christopher Mancini
Hi Alan,

So the quick answer is that CRDTs and secondary indexes are not supported
on riak-ts at the moment, which explains why you are getting errors. To
complete the example you were working on, you would need to run it against
riak-kv.

The long answer is that our goal is to have TS and KV merged so that you can
have the features of both on one instance of Riak, but unfortunately I do
not have a timeline for when that will be available.

Chris

On Wed, Sep 7, 2016 at 9:43 PM HQS^∞^  wrote:

> Hi Luke:
>    I'm sorry, I did not clearly describe the development environment I
> mentioned. I deployed three riak-ts servers (at least version 1.3), each in
> a separate VMware virtual machine; the PHP client library version is 2.0
> and the riak-ts version is 1.3.0.
>
> Regards
>Alan
>
> -- Original Message --
> *From:* "Luke Bakken";
> *Sent:* Wednesday, September 7, 2016, 9:27 PM
> *To:* "HQS^∞^";
> *Cc:* "riak-users";
> *Subject:* Re: a weird error while POSTing a request to the server to store an object
>
> Hello Alan -
>
> Which PHP client library are you using?
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
> On Tue, Sep 6, 2016 at 10:29 PM, HQS^∞^  wrote:
> > Dear everyone:
> > I am following the tutorial at
> > http://docs.basho.com/riak/kv/2.1.4/developing/usage/document-store/ ,
> > practicing step by step. When I POST a request to store an object, the
> > riak server responds with 400 (Bad Request). I reviewed my code again and
> > again, but found no problem. See below:
> >
> > <?php
> >
> > use Basho\Riak;
> > use Basho\Riak\Bucket;
> > use Basho\Riak\Node;
> > use Basho\Riak\Command\Builder\UpdateSet;
> > use Basho\Riak\Command\Builder\UpdateMap;
> >
> > class BlogPost {
> >  var  $_title = '';
> >  var  $_author = '';
> >  var  $_content = '';
> >  var  $_keywords = [];
> >  var  $_datePosted = '';
> >  var  $_published = false;
> >  var  $_bucketType = "cms";
> >  var  $_bucket = null;
> >  var  $_riak = null;
> >  var  $_location = null;
> >   public function __construct(Riak $riak, $bucket, $title, $author,
> > $content, array $keywords, $date, $published)
> >   {
> > $this->_riak = $riak;
> > $this->_bucket = new Bucket($bucket, "cms");
> > $this->_location = new Riak\Location('blog1',$this->_bucket,"cms");
> > $this->_title = $title;
> > $this->_author = $author;
> > $this->_content = $content;
> > $this->_keywords = $keywords;
> > $this->_datePosted = $date;
> > $this->_published = $published;
> >   }
> >
> >   public function store()
> >   {
> > $setBuilder = (new UpdateSet($this->_riak));
> >
> > foreach($this->_keywords as $keyword) {
> >   $setBuilder->add($keyword);
> > }
> > /*
> >(new UpdateMap($this->_riak))
> >   ->updateRegister('title', $this->_title)
> >   ->updateRegister('author', $this->_author)
> >   ->updateRegister('content', $this->_content)
> >   ->updateRegister('date', $this->_datePosted)
> >   ->updateFlag('published', $this->_published)
> >   ->updateSet('keywords', $setBuilder)
> >   ->withBucket($this->_bucket)
> >   ->build()
> >   ->execute();
> >
> > */
> >$response = (new UpdateMap($this->_riak))
> >   ->updateRegister('title', $this->_title)
> >   ->updateRegister('author', $this->_author)
> >   ->updateRegister('content', $this->_content)
> >   ->updateRegister('date', $this->_datePosted)
> >   ->updateFlag('published', $this->_published)
> >   ->updateSet('keywords', $setBuilder)
> >   ->atLocation($this->_location)
> >   ->build()
> >   ->execute();
> >
> > echo '<pre>';
> >   var_dump($response);
> > echo '</pre>';
> >   }
> > }
> >
> >  $node = (new Node\Builder)
> > ->atHost('192.168.111.2')
> > ->onPort(8098)
> > ->build();
> >
> > $riak = new Riak([$node]);
> >
> >
> > $keywords = ['adorbs', 'cheshire'];
> > $date = new \DateTime('now');
> >
> >
> > $post1 = new BlogPost(
> >   $riak,
> >   'cat_pics', // bucket
> >   'This one is so lulz!', // title
> >   'Cat Stevens', // author
> >   'Please check out these cat pics!', // content
> >   $keywords, // keywords
> >   $date, // date posted
> >   true // published
> > );
> > $post1->store();
> >
> > the wireshark captured packets:
> >
> >   192.168.171.124 (client ip) -> 192.168.111.2 (riak server ip)  HTTP  511
> >   POST /types/cms/buckets/cat_pics/datatypes/alldoc? HTTP/1.1 (application/json)
> >
> >   192.168.111.2 -> 192.168.171.124  HTTP  251  HTTP/1.1 400 Bad Request
> >
> >  GET http://192.168.111.2:8098//types/cms/buckets/cat_pics/props
> >
> {"props":{"name":"cat_pics","young_vclock":20,"w":"quorum","small_vclock":50,"search_index":"blog_posts","rw":"quorum","r":"quorum","pw":0,"precommit":[],"pr":0,"postcommit":[],"old_vclock":86400,"notfound_ok":true,"n_val":3,"linkfun":{"mod":"riak_kv_wm_link_walker","fun":"mapreduce_linkfun"},"last_write_wins":false,"dw":"quorum","dvv_enabled":true,"chash_keyfun":{"mod":"riak_core_util","fun":"chash_std_keyfun"},"big_vclock":50,"basic_quorum":false,"allow_mult":true,"datatype":"map","active":true,"cl
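
One way to take the PHP client out of the picture is to rebuild the same map update by hand and POST it directly. Under Riak 2.x the HTTP data-types API encodes CRDT field types in the field-name suffix (_register, _flag, _set); the payload below is a sketch assuming that convention and the bucket/key names from the quoted code:

```python
import json

# JSON body for POST /types/cms/buckets/cat_pics/datatypes/blog1
payload = {
    "update": {
        "title_register": "This one is so lulz!",
        "author_register": "Cat Stevens",
        "content_register": "Please check out these cat pics!",
        "published_flag": "enable",                           # flags take enable/disable
        "keywords_set": {"add_all": ["adorbs", "cheshire"]},  # set updates use add/add_all/remove
    }
}
body = json.dumps(payload)
print(json.loads(body)["update"]["published_flag"])  # → enable
```

If the hand-built request also returns 400, the problem is in the bucket-type or payload shape rather than the client library.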

Riak 2.1.3 - Multiple indexes created by Solr for the same Riak object

2016-09-12 Thread Weixi Yen
Sort of a unique case, my app was under heavy stress and one of my riak
nodes got backed up (other 4 nodes were fine).

I think this caused Riak.update to create an extra index in Solr for the
same object when users began running .update on that object.

I have basically 2 questions:

1) Is what I'm describing something that is possible?

2) Is there a way to tell Solr to re-index one single item and get rid of
all other indexes of that item?

Considering RiakTS to resolve these issues long term, but have to stick
with Solr for at least the next 3 months, would appreciate any insight into
how to solve this duplicate index problem.

Thanks,

Weixi