Re: Does riak control work?
PS: Using the FQDN as the riak node name does not work either:

root@ip-10-234-223-183:/etc/riak# riak start
riak failed to start within 15 seconds, see the output of 'riak console' for more information.
If you want to wait longer, set the environment variable WAIT_FOR_ERLANG to the number of seconds to wait.

root@ip-10-234-223-183:/etc/riak# riak console
Node 'r...@ip-10-234-223-183.eu-west-1.compute.internal' not responding to pings.
config is OK
Exec: /usr/lib/riak/erts-5.9.1/bin/erlexec -boot /usr/lib/riak/releases/1.4.2/riak -config /etc/riak/app.config -pa /usr/lib/riak/lib/basho-patches -args_file /etc/riak/vm.args -- console
Root: /usr/lib/riak
Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:1:1] [async-threads:64] [kernel-poll:true]
/usr/lib/riak/lib/os_mon-2.2.9/priv/bin/memsup: Erlang has closed.
Erlang has closed
{Kernel pid terminated,application_controller,{application_start_failure,riak_core,{shutdown,{riak_core_app,start,[normal,[]]
Crash dump was written to: /var/log/riak/erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,riak_core,{shutdown,{riak_core_app,start,[normal,[]]}}})

Will older versions of riak work?

On Mon, Sep 9, 2013 at 6:17 PM, David Montgomery davidmontgom...@gmail.com wrote:

I am using the latest ubuntu deb package. At https://localhost:8069/admin I get "page not found":

The connection was reset. The connection to the server was reset while the page was loading. The site could be temporarily unavailable or too busy. Try again in a few moments. If you are unable to load any pages, check your computer's network connection. If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.

Is riak running on the port? Yup:

root@ubuntu-VirtualBox:/etc/riak# lsof -i :8069
COMMAND  PID  USER FD  TYPE DEVICE  SIZE/OFF NODE NAME
beam.smp 1376 riak 21u IPv4 3238716 0t0      TCP  localhost:8069 (LISTEN)

I followed the instructions here: http://docs.basho.com/riak/latest/ops/advanced/riak-control/

%% https is a list of IP addresses and TCP ports that the Riak
%% HTTPS interface will bind.
{https, [{"127.0.0.1", 8069}]},

%% Set to false to disable the admin panel.
{enabled, true},

So... what could be wrong? Yes, I restarted riak after I made the change to the config file.

Thanks

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
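A quick sanity check for the FQDN-named node failure above is whether the host part of the `-name` value actually resolves on the box, since Erlang distribution cannot start under a name whose host does not resolve. A minimal sketch (the helper name is illustrative, not from the thread):

```python
import socket

def node_host_resolves(node_name):
    """Return True if the host part of an Erlang -name value
    (user@host) resolves to an IP address via this machine's
    resolver, False if lookup fails."""
    host = node_name.split("@", 1)[1]
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False
```

For example, `node_host_resolves("riak@localhost")` should be True on any machine; if it returns False for your EC2 FQDN, fix DNS or /etc/hosts before touching vm.args.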
Riak and ec2 with vm.args - Node 'r...@ip-10-234-117-74.eu-west-1.compute.internal' not responding to pings.
Hi,

I am having a very difficult time installing riak on ec2. To start, I can't get past using the private ip address or private dns when setting vm.args:

-name riak@127.0.0.1

root@ip-10-234-117-74:/home/ubuntu# hostname --fqdn
ip-10-234-117-74.eu-west-1.compute.internal

riak start
riak ping
Node 'r...@ip-10-234-117-74.eu-west-1.compute.internal' not responding to pings.

When I use 127.0.0.1 and then run riak start, it works:

riak start
root@ip-10-234-117-74:/etc/riak# riak ping
pong

I am using m1.large on ubuntu 12.04 and the latest version of riak. How does one get past this point?
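The `-name` edit being made by hand here can be scripted. A minimal sketch that rewrites the name line of a vm.args file's contents (function name and the example address are illustrative; the stock file layout with one `-name` line is assumed):

```python
import re

def set_node_name(vm_args_text, new_name):
    """Rewrite the '-name ...' line in the text of a vm.args file,
    leaving every other line untouched."""
    return re.sub(r"(?m)^-name\s+\S+$", "-name " + new_name, vm_args_text)
```

For example, `set_node_name("-name riak@127.0.0.1\n-setcookie riak\n", "riak@10.0.0.5")` yields the same file with only the name line changed. Note the docs quoted later in this thread: once a node has been started, changing the name also requires removing the ring files, `riak-admin reip`, or `riak-admin cluster force-replace`.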
Re: Riak and ec2 with vm.args - Node 'r...@ip-10-234-117-74.eu-west-1.compute.internal' not responding to pings.
It's both the fqdn and the internal ip address. Either way, same issue: riak refuses to work with the ip address or the fqdn, and I don't know why. The Basho page recommends using the fqdn, which adds to the confusion:
http://docs.basho.com/riak/latest/ops/building/basic-cluster-setup/

*Node Names*
Use fully qualified domain names (FQDNs) rather than IP addresses for the cluster member node names. For example, "r...@cluster.example.com" and "riak@192.168.1.10" are both acceptable node naming schemes, but using the FQDN style is preferred. Once a node has been started, in order to change the name you must either remove ring files from the data directory, riak-admin reip (http://docs.basho.com/riak/latest/ops/running/tools/riak-admin/#reip) the node, or riak-admin cluster force-replace (http://docs.basho.com/riak/latest/ops/running/tools/riak-admin/#cluster-force-replace) the node.

On Sun, Sep 15, 2013 at 9:10 PM, Jeremiah Peschka jeremiah.pesc...@gmail.com wrote:

Starting on page 3, the Riak on AWS [1] whitepaper has some instructions on getting things set up. In the docs, they recommend using the AWS internal IP address instead of the FQDN. This advice is also repeated in the Riak wiki [2].

[1]: http://media.amazonwebservices.com/AWS_NoSQL_Riak.pdf
[2]: http://docs.basho.com/riak/latest/ops/building/installing/aws-marketplace/

---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop

On Sun, Sep 15, 2013 at 1:06 AM, David Montgomery davidmontgom...@gmail.com wrote:

[...]
Re: Riak and ec2 with vm.args - Node 'r...@ip-10-234-117-74.eu-west-1.compute.internal' not responding to pings.
I am not using VPC. Is using VPC required?

On Mon, Sep 16, 2013 at 12:16 AM, Jeremiah Peschka jeremiah.pesc...@gmail.com wrote:

Welp, here's what I did and it seemed to work for me:

- Created a new VPC
- Created a subnet in said VPC
- Created a default route for SSH connectivity
- Created a security group with the TCP/IP settings outlined in the AWS PDF
- Created two instances using the Riak AMI, both inside the VPC
- On each instance, edited app.config to listen on the local IP address (found using curl http://169.254.169.254/latest/meta-data/local-ipv4)
- On the first instance, started riak and was able to ping the node successfully from the local host and from instance two
- On the second instance, started riak and then ran the following: http://pastebin.com/CaHH3Eve

Each node's vm.args is using the IP address for the `-name` parameter. If you're in a VPC, you may want to check the box for "Enable DNS hostname support for instances launched in this VPC."

---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop

On Sun, Sep 15, 2013 at 6:39 AM, David Montgomery davidmontgom...@gmail.com wrote:

[...]
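The "find the local IP from instance metadata" step in Jeremiah's recipe can be sketched as below. The fetcher is injectable so the function can be exercised off EC2; the function name is illustrative, but the metadata URL is the standard EC2 endpoint:

```python
from urllib.request import urlopen

# Standard EC2 instance-metadata endpoint for the private IPv4 address.
METADATA_URL = "http://169.254.169.254/latest/meta-data/local-ipv4"

def ec2_local_ip(fetch=None):
    """Return this instance's private IPv4 address as a string.
    `fetch` maps a URL to raw bytes; the default uses the real
    metadata service and only works on an EC2 instance."""
    if fetch is None:
        fetch = lambda url: urlopen(url, timeout=2).read()
    return fetch(METADATA_URL).decode("ascii").strip()
```

On an instance, `ec2_local_ip()` returns something like "10.234.117.74"; that value is what goes into `pb_ip`, the `http` list, and the vm.args `-name`.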
Does riak control work?
I am using the latest ubuntu deb package. At https://localhost:8069/admin I get "page not found":

The connection was reset. The connection to the server was reset while the page was loading. The site could be temporarily unavailable or too busy. Try again in a few moments. If you are unable to load any pages, check your computer's network connection. If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.

Is riak running on the port? Yup:

root@ubuntu-VirtualBox:/etc/riak# lsof -i :8069
COMMAND  PID  USER FD  TYPE DEVICE  SIZE/OFF NODE NAME
beam.smp 1376 riak 21u IPv4 3238716 0t0      TCP  localhost:8069 (LISTEN)

I followed the instructions here: http://docs.basho.com/riak/latest/ops/advanced/riak-control/

%% https is a list of IP addresses and TCP ports that the Riak
%% HTTPS interface will bind.
{https, [{"127.0.0.1", 8069}]},

%% Set to false to disable the admin panel.
{enabled, true},

So... what could be wrong? Yes, I restarted riak after I made the change to the config file.

Thanks
[no subject]
Why do I get errors restarting riak when I use aws EIPs? I changed the ip address per this doc: http://docs.basho.com/riak/1.2.1/cookbooks/Basic-Cluster-Setup/

I am using the latest version of riak for ubuntu 12.04. I changed the 127.0.0.1 to the EIP. It should work. Yet riak will not work. Any issues I am missing? How do I resolve this?

console.log:

2013-09-09 11:28:28.211 [info] <0.7.0> Application webmachine started on node 'riak@54.247.68.179'
2013-09-09 11:28:28.211 [info] <0.7.0> Application basho_stats started on node 'riak@54.247.68.179'
2013-09-09 11:28:28.229 [info] <0.7.0> Application bitcask started on node 'riak@54.247.68.179'
2013-09-09 11:28:29.385 [error] <0.172.0> CRASH REPORT Process <0.172.0> with 0 neighbours exited with reason: eaddrnotavail in gen_server:init_it/6 line 320
2013-09-09 11:28:29.385 [error] <0.138.0> Supervisor riak_core_sup had child http_54.247.68.179:8098 started with webmachine_mochiweb:start([{name,http_54.247.68.179:8098},{ip,54.247.68.179},{p$
2013-09-09 11:28:29.387 [info] <0.7.0> Application riak_core exited with reason: {shutdown,{riak_core_app,start,[normal,[]]}}

error.log:

2013-09-09 11:08:13.109 [error] <0.138.0> Supervisor riak_core_sup had child riak_core_capability started with riak_core_capability:start_link() at <0.156.0> exit with reason no function clause match$
2013-09-09 11:08:13.110 [error] <0.136.0> CRASH REPORT Process <0.136.0> with 0 neighbours exited with reason: {{function_clause,[{orddict,fetch,['riak@10.239.130.225',[{'riak@127.0.0.1',[{{riak_cont$
2013-09-09 11:08:34.956 [error] <0.156.0> gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@10.239.130.225', [{'riak@127.0.0.1',[{{riak_control,m$
2013-09-09 11:08:34.957 [error] <0.156.0> CRASH REPORT Process riak_core_capability with 0 neighbours exited with reason: no function clause matching orddict:fetch('riak@10.239.130.225', [{'riak@127.$
2013-09-09 11:08:34.957 [error] <0.140.0> Supervisor riak_core_sup had child riak_core_capability started with riak_core_capability:start_link() at <0.156.0> exit with reason no function clause match$
2013-09-09 11:08:34.958 [error] <0.138.0> CRASH REPORT Process <0.138.0> with 0 neighbours exited with reason: {{function_clause,[{orddict,fetch,['riak@10.239.130.225',[{'riak@127.0.0.1',[{{riak_cont$
2013-09-09 11:10:56.863 [error] <0.154.0> gen_server riak_core_capability terminated with reason: no function clause matching orddict:fetch('riak@54.247.68.179', []) line 72
2013-09-09 11:10:56.864 [error] <0.154.0> CRASH REPORT Process riak_core_capability with 0 neighbours exited with reason: no function clause matching orddict:fetch('riak@54.247.68.179', []) line 72 i$
Re: Riak does not work on ec2
Hi,

1) The only way I could get riak to start is if I use a private IP address. Using a public address does not work. Why?

2) Now, great that I got riak working, but how do I query it from a remote machine? Reading this did not help: http://docs.basho.com/riak/latest/cookbooks/Performance-Tuning-AWS/#Dealing-with-IP-addresses

I ran a 5 node cluster on datapipe without issues. Going to aws on ec2, I am having issues with the private IP address. I am not using a VPC and don't want to. Is a VPC required if using ec2? Yes, I understand the merits of private IP addresses on ec2 as well as a VPC, but for this use case I simply want to write from outside of aws and run map reduce debug testing.

In vm.args, if I replace the private IP address with the public one, it does not work:

## Name of the riak node
-name riak@10.152.159.166

In app.config, if I replace the private IP address with the public one, it does not work:

%% pb_ip is the IP address that the Riak Protocol Buffers interface
%% will bind to. If this is undefined, the interface will not run.
{pb_ip, "10.152.159.166"},

%% http is a list of IP addresses and TCP ports that the Riak
%% HTTP interface will bind.
{http, [{"10.152.159.166", 8098}]},

%% https is a list of IP addresses and TCP ports that the Riak
%% HTTPS interface will bind.
%{https, [{"10.152.159.166", 8098}]},

I have every port open and have group-to-group access, but am still no closer on how to query from outside. What am I missing?

Thanks

On Fri, Jun 21, 2013 at 1:19 PM, Tom Santero tsant...@basho.com wrote:

Hi David,

Sorry to hear you're having difficulties. From what I can tell, you're simply looking to spin up a new cluster, correct? If so, riak-admin cluster replace is not the command you want. If you're attempting to rename your riak nodes and bind them to an AWS EIP, you can follow these instructions [0] from the docs.

Cheers,
Tom

[0] http://docs.basho.com/riak/latest/cookbooks/Performance-Tuning-AWS/#Dealing-with-IP-addresses

On Fri, Jun 21, 2013 at 4:55 AM, David Montgomery davidmontgom...@gmail.com wrote:

[...]
Re: Riak does not work on ec2
Ok, yes, using an EIP worked. Thanks, and I will now improve my understanding of NAT.

Thanks

On Sat, Jun 22, 2013 at 12:46 PM, Andrew Thompson and...@hijacked.us wrote:

By default AWS machines use NAT: they provide a temporary external IP that is NATed through to the internal IP applied to the AWS instance. You can also permanently assign an 'elastic IP address' if you want a particular instance to always be reachable on a specific IP.

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html

When you put that public IP in the riak config file, it makes no sense to Riak because that IP isn't applied to any interface it knows about. When you put the private IP in, it works, but you can't access it. This is probably because AWS is dropping inbound traffic by default:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#default-security-group

This is not a Riak problem, it is an issue with configuring AWS.

Andrew
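Andrew's point, that Riak cannot bind a public EIP because it isn't applied to any local interface, is easy to reproduce with a bare socket. A minimal sketch (the helper name is illustrative; 192.0.2.1 is a reserved TEST-NET address standing in for an address not present on the machine):

```python
import errno
import socket

def can_bind(ip):
    """Try to bind a TCP socket to `ip` on an ephemeral port.
    Binding fails with EADDRNOTAVAIL when `ip` is not configured
    on any local interface, which is the same eaddrnotavail seen
    in the Riak crash logs earlier in the thread."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((ip, 0))
        return True
    except OSError as e:
        # Only treat "address not available" as the expected failure.
        return False if e.errno == errno.EADDRNOTAVAIL else None
    finally:
        s.close()
```

`can_bind("127.0.0.1")` is True everywhere, while `can_bind("<your EIP>")` is False from inside the instance, because EC2 NATs the public address rather than assigning it to an interface; that is why the config must use the private IP.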
Riak does not work on ec2
Hi,

I am trying to get riak to work on ubuntu 12.04, and it is proving not to be a friendly install on ec2.

1) I am using riak_1.3.1-1_amd64.deb and have libssl0.9.8 installed.

2) I stop riak then run:

riak-admin cluster replace riak@127.0.0.1 r...@xxx.xxx.xxx.xxx
riak-admin cluster plan
riak-admin cluster commit
Attempting to restart script through sudo -H -u riak
Node is not running!

At the very bottom are the config files. I changed all of 127.0.0.1 to the public ip address of the machine on ec2. Kinda hard to mess up there; I am using chef: node[:ec2][:public_ipv4]

3) The logs are not being very helpful. What does the below mean in english?

root@domU-12-31-39-0C-59-1D:/home/ubuntu# riak console
Attempting to restart script through sudo -H -u riak
Exec: /usr/lib/riak/erts-5.9.1/bin/erlexec -boot /usr/lib/riak/releases/1.3.1/riak -embedded -config /etc/riak/app.config -pa /usr/lib/riak/lib/basho-patches -args_file /etc/riak/vm.args -- console
Root: /usr/lib/riak
Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:1:1] [async-threads:64] [kernel-poll:true]
/usr/lib/riak/lib/os_mon-2.2.9/priv/bin/memsup: Erlang has closed.
Erlang has closed
{Kernel pid terminated,application_controller,{application_start_failure,riak_core,{shutdown,{riak_core_app,start,[normal,[]]
Crash dump was written to: /var/log/riak/erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,riak_core,{shutdown,{riak_core_app,start,[normal,[]]}}})

So, what could the issue be? Is there a missing manual? What did I miss from the documentation on http://docs.basho.com/riak/latest/cookbooks/Basic-Cluster-Setup/ ?

%% Riak Client APIs config
{riak_api, [
    %% pb_backlog is the maximum length to which the queue of pending
    %% connections may grow. If set, it must be an integer >= 0.
    %% By default the value is 5. If you anticipate a huge number of
    %% connections being initialised *simultaneously*, set this number
    %% higher.
    %% {pb_backlog, 64},

    %% pb_ip is the IP address that the Riak Protocol Buffers interface
    %% will bind to. If this is undefined, the interface will not run.
    {pb_ip, "xxx.xxx.xxx.xxx"},

    %% pb_port is the TCP port that the Riak Protocol Buffers interface
    %% will bind to
    {pb_port, 8087}
]},

%% Riak Core config
{riak_core, [
    %% Default location of ringstate
    {ring_state_dir, "/var/lib/riak/ring"},

    %% Default ring creation size. Make sure it is a power of 2,
    %% e.g. 16, 32, 64, 128, 256, 512 etc
    %{ring_creation_size, 64},

    %% http is a list of IP addresses and TCP ports that the Riak
    %% HTTP interface will bind.
    {http, [{"xxx.xxx.xxx.xxx", 8098}]},

    %% https is a list of IP addresses and TCP ports that the Riak
    %% HTTPS interface will bind.
    %{https, [{"xxx.xxx.xxx.xxx", 8098}]},

## Name of the riak node
-name r...@xxx.xxx.xxx.xxx
Riak will not work remote from ec2
Hi,

Why will riak not work on ec2 with a single node?

1) I have my sec groups open for all ports. This should be a no-brainer.

2) I am using the python client:

client = riak.RiakClient(host=riak_host, port=8087, transport_class=riak.transports.pbc.RiakPbcTransport)

where riak_host is the public ip address of the machine.

3) I went into app.config and vm.args and replaced 127.0.0.1 in the relevant places with the public ip address.

4) I restarted riak.

5) Yet I can't write to riak remotely.

I am using the latest version of riak for ubuntu 12.04, installed with the deb package. So, what is the issue?

Thanks
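Separate from the bind-address question, remote access also depends on the security group actually passing traffic. A minimal reachability probe needs only plain sockets, no riak client (function name and hosts are illustrative):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds
    within `timeout` seconds, False on refusal or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_open("<ec2-public-ip>", 8087)` probes the protocol buffers port and 8098 the HTTP port; False from outside while True on the instance itself points at the security group or the bind address rather than the client code.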
Re: Riak will not work remote from ec2
PS: When I did

riak-admin reip riak@127.0.0.1 r...@xxx.xxx.xxx.xxx
Attempting to restart script through sudo -H -u riak
Backed up existing ring file to /var/lib/riak/ring/riak_core_ring.default.20130614011624.BAK
New ring file written to /var/lib/riak/ring/riak_core_ring.default.20130614012331

riak ping
Attempting to restart script through sudo -H -u riak
WARNING: ulimit -n is 1024; 4096 is the recommended minimum.
Node 'r...@xxx.xxx.xxx.xxx' not responding to pings.

riak start
Attempting to restart script through sudo -H -u riak
WARNING: ulimit -n is 1024; 4096 is the recommended minimum.
Riak failed to start within 15 seconds, see the output of 'riak console' for more information.
If you want to wait longer, set the environment variable WAIT_FOR_ERLANG to the number of seconds to wait.

Where is the missing documentation?

On Fri, Jun 14, 2013 at 9:02 AM, David Montgomery davidmontgom...@gmail.com wrote:

[...]
Re: Riak will not work remote from ec2
The below does not make sense to me.

root@ip-10-204-150-22:/var/log/riak# riak console
Attempting to restart script through sudo -H -u riak
WARNING: ulimit -n is 1024; 4096 is the recommended minimum.
Exec: /usr/lib/riak/erts-5.9.1/bin/erlexec -boot /usr/lib/riak/releases/1.3.1/riak -embedded -config /etc/riak/app.config -pa /usr/lib/riak/lib/basho-patches -args_file /etc/riak/vm.args -- console
Root: /usr/lib/riak
{error_logger,{{2013,6,14},{1,49,23}},"Protocol: ~p: register error: ~p~n",[inet_tcp,{{badmatch,{error,duplicate_name}},[{inet_tcp_dist,listen,1,[{file,"inet_tcp_dist.erl"},{line,70}]},{net_kernel,start_protos,4,[{file,"net_kernel.erl"},{line,1314}]},{net_kernel,start_protos,3,[{file,"net_kernel.erl"},{line,1307}]},{net_kernel,init_node,2,[{file,"net_kernel.erl"},{line,1197}]},{net_kernel,init,1,[{file,"net_kernel.erl"},{line,357}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}]}
{error_logger,{{2013,6,14},{1,49,23}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.21.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,320}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}},{ancestors,[net_sup,kernel_sup,<0.10.0>]},{messages,[]},{links,[#Port<0.197>,<0.18.0>]},{dictionary,[{longnames,true}]},{trap_exit,true},{status,running},{heap_size,610},{stack_size,24},{reductions,528}],[]]}
{error_logger,{{2013,6,14},{1,49,23}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfargs,{net_kernel,start_link,[['riak@184.73.64.170',longnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
{error_logger,{{2013,6,14},{1,49,23}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfargs,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
{error_logger,{{2013,6,14},{1,49,23}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
{Kernel pid terminated,application_controller,{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]
Crash dump was written to: /var/log/riak/erl_crash.dump
Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})

On Fri, Jun 14, 2013 at 9:26 AM, David Montgomery davidmontgom...@gmail.com wrote:

[...]
Re: Exception: {phase:0,error:[timeout]
Yeah, the update using the ubuntu package worked! Whew, scary.

See Ya

On Wed, Jan 30, 2013 at 6:07 PM, David Montgomery davidmontgom...@gmail.com wrote:

If I build from source, will that solve the problem? Else, how can I get the data out so I can use hadoop? I am using ubuntu. Do I just build on new machines and point my volume attachment, where the data is stored, to the new nodes?

On Fri, Jan 25, 2013 at 10:04 AM, Matt Black matt.bl...@jbadigital.com wrote:

Hi David,

This is a known error that has been resolved in trunk, but is yet to be released: https://github.com/basho/riak_kv/issues/290

Any word on a release date, Basho guys?

On 25 January 2013 12:52, David Montgomery davidmontgom...@gmail.com wrote:

Hi,

Why will riak throw this type of exception when I have a timeout of TIMEOUT=300? What does this error mean? As of now I can't get data out of a production system.

Traceback (most recent call last):
  File "gg.py", line 324, in <module>
    results = getDomainReportIndex(bucket=bucket, date_start=int(utc_start_date), date_end=int(utc_end_date))
  File "gg.py", line 137, in getDomainReportIndex
    for result in query.run(timeout=TIMEOUT):
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.1-py2.7.egg/riak/mapreduce.py", line 234, in run
    result = t.mapred(self._inputs, query, timeout)
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.1-py2.7.egg/riak/transports/pbc.py", line 454, in mapred
    _handle_response)
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.1-py2.7.egg/riak/transports/pbc.py", line 548, in send_msg_multi
    msg_code, resp = self.recv_msg(conn, expect)
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.1-py2.7.egg/riak/transports/pbc.py", line 589, in recv_msg
    raise Exception(msg.errmsg)
Exception: {"phase":0,"error":"[timeout]","input":"{\"impressions\",\"322c0473-9eeb-4c9c-81fe-4e898bc50416:cid5989410021:agid7744464312:2012122316:SG\"}","type":"forward_preflist","stack":"[]"}
Re: Exception: {phase:0,error:[timeout]
If I build from source, will that solve the problem? Else, how can I get the data out so I can use hadoop? I am using ubuntu. Do I just build on new machines and point my volume attachment, where the data is stored, to the new nodes?

On Fri, Jan 25, 2013 at 10:04 AM, Matt Black matt.bl...@jbadigital.com wrote:

Hi David,

This is a known error that has been resolved in trunk, but is yet to be released: https://github.com/basho/riak_kv/issues/290

Any word on a release date, Basho guys?

On 25 January 2013 12:52, David Montgomery davidmontgom...@gmail.com wrote:

[...]
Re: Exception: {phase:0,error:[timeout]
For Basho: how do I resolve this, and how can I get my data out so I can use Hadoop? What is the impact to my system, or how can I measure the impact of this very large bug?

On Fri, Jan 25, 2013 at 10:04 AM, Matt Black matt.bl...@jbadigital.com wrote: Hi David, This is a known error that has been resolved in trunk, but is yet to be released: https://github.com/basho/riak_kv/issues/290 Any word on a release date, Basho guys?

On 25 January 2013 12:52, David Montgomery davidmontgom...@gmail.com wrote: Hi, Why will Riak throw this type of exception when I have a timeout of TIMEOUT=300? What does this error mean? As of now I can't get data out of a production system.

Traceback (most recent call last):
  File "gg.py", line 324, in <module>
    results = getDomainReportIndex(bucket=bucket, date_start=int(utc_start_date), date_end=int(utc_end_date))
  File "gg.py", line 137, in getDomainReportIndex
    for result in query.run(timeout=TIMEOUT):
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.1-py2.7.egg/riak/mapreduce.py", line 234, in run
    result = t.mapred(self._inputs, query, timeout)
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.1-py2.7.egg/riak/transports/pbc.py", line 454, in mapred
    _handle_response)
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.1-py2.7.egg/riak/transports/pbc.py", line 548, in send_msg_multi
    msg_code, resp = self.recv_msg(conn, expect)
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.1-py2.7.egg/riak/transports/pbc.py", line 589, in recv_msg
    raise Exception(msg.errmsg)
Exception: {"phase":0,"error":"[timeout]","input":"{\"impressions\",\"322c0473-9eeb-4c9c-81fe-4e898bc50416:cid5989410021:agid7744464312:2012122316:SG\"}","type":"forward_preflist","stack":"[]"}

___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Same MR query, different results every run...........
Hi, I do have a reduce phase.

On Tue, Jan 8, 2013 at 12:08 AM, Mridul Kashatria mri...@readwhere.com wrote: Hi, If I am correct, adding a reduce function should return the same number of items. I'm a Riak noob, but I faced a similar issue while testing with a map function only; adding a reduce fixed it. I believe that as the map fans out to multiple nodes, whichever node returns data first is written to output and not collected by a reduce stage. Please correct me if I'm wrong. Thanks -- Mridul

On Sunday 06 January 2013 11:07 AM, David Montgomery wrote: Hi, Here is my mapper...

query.map('''
function(value, keyData, arg) {
  if(value.length == 0){
    return [];
  }else{
    var data = Riak.mapValuesJson(value)[0];
    var obj = {};
    if(data['campaign_id']=='%s'){
      try{
        var alt_key = data['ckid'] + '||' + data['gid'] + '||' + data['ts_hms'];
      } catch(err){
        var alt_key = 'error';
      }
      obj[alt_key] = 1;
      return [ obj ];
    }else{
      return [];
    }
  }
}''' % campaign_id)

When I run the query repeatedly, about every 2 seconds, I get the below. A few times I get 14 rows, a few times 13, then back to 14, etc. So... why? There should be no variation. I have a three-node cluster, two cores, 4 gigs of RAM, on Ubuntu 12.06, using the latest Riak.
TOTAL: 14 3dc3f58f-faea-4751-94b5-8a9a076d4b3f||CAESEGYMM1Q34DV8Ev0i12IVKdY||2012-12-31 08:36:21 1 b4d82fa0-5cd4-4813-a150-554ebca30f1f||CAESEM98NHldIIyAzY0CIUnKudw||2013-01-04 06:18:37 1 8743af22-a664-4b60-ac59-b79d52c12e9e||CAESEH2PIdEYXvk3Dsg2_vF6Qcc||2013-01-04 09:13:30 1 cef36621-527c-4b7a-be6f-5842e13a1350||CAESEHsyPPSizUsT-j31I-nCLzQ||2013-01-05 12:50:22 1 663fb22d-c60d-46b7-8b5b-c9be103c2084||CAESEDtHYmtttm7DBCRpCSU9zYE||2013-01-04 08:55:06 1 e2b6afda-b838-48d5-a449-7b568b9f6b04||CAESEBciJaIqccs2584wIgdsOqc||2013-01-04 04:02:13 1 66aa05fe-9c55-43b2-93ae-c8cb19d097d7||CAESEBuVyK-X_iNGaiiLhPsT0TE||2013-01-02 01:29:38 1 0969a7ca-4324-4118-9038-b6fc11f08a36||CAESENwCD1bw1VvtIamGBCUl_zk||2013-01-02 00:55:01 1 f78b77f6-a08c-4f07-b982-7b2cdcefba4f||CAESEJiWNlcbRN7Sx9o2FB7fbaU||2012-12-29 05:22:46 1 8050e5a7-1583-459a-983f-55feaf0e2a6c||CAESED2NyW9XDEbiKb1UD4sTzvI||2013-01-05 12:18:59 1 58b84566-ad3a-4a3f-91bd-1c61986fbadb||CAESELQcGkigDvXrtRDgOlw9rX0||2013-01-04 16:19:25 1 0db77e8d-ed94-43cf-8860-b4e43dfa24aa||CAESECbwN7VY6o8om79mZ905GIA||2013-01-02 16:15:34 1 67e79552-7e06-44bd-9e95-87f7cb634de3||CAESEFA6fd_C1PBslKgOj6_BI28||2012-12-29 05:23:11 1 ffc3c6ae-beee-4dfe-b41d-ec3a72bddf67||CAESEN_MAXs55jCPIwuyvfTZIZc||2012-12-28 07:56:03 1 TOTAL: 13 b4d82fa0-5cd4-4813-a150-554ebca30f1f||CAESEM98NHldIIyAzY0CIUnKudw||2013-01-04 06:18:37 1 8743af22-a664-4b60-ac59-b79d52c12e9e||CAESEH2PIdEYXvk3Dsg2_vF6Qcc||2013-01-04 09:13:30 1 cef36621-527c-4b7a-be6f-5842e13a1350||CAESEHsyPPSizUsT-j31I-nCLzQ||2013-01-05 12:50:22 1 663fb22d-c60d-46b7-8b5b-c9be103c2084||CAESEDtHYmtttm7DBCRpCSU9zYE||2013-01-04 08:55:06 1 e2b6afda-b838-48d5-a449-7b568b9f6b04||CAESEBciJaIqccs2584wIgdsOqc||2013-01-04 04:02:13 1 66aa05fe-9c55-43b2-93ae-c8cb19d097d7||CAESEBuVyK-X_iNGaiiLhPsT0TE||2013-01-02 01:29:38 1 0969a7ca-4324-4118-9038-b6fc11f08a36||CAESENwCD1bw1VvtIamGBCUl_zk||2013-01-02 00:55:01 1 f78b77f6-a08c-4f07-b982-7b2cdcefba4f||CAESEJiWNlcbRN7Sx9o2FB7fbaU||2012-12-29 05:22:46 1 
8050e5a7-1583-459a-983f-55feaf0e2a6c||CAESED2NyW9XDEbiKb1UD4sTzvI||2013-01-05 12:18:59 1 58b84566-ad3a-4a3f-91bd-1c61986fbadb||CAESELQcGkigDvXrtRDgOlw9rX0||2013-01-04 16:19:25 1 3dc3f58f-faea-4751-94b5-8a9a076d4b3f||CAESEGYMM1Q34DV8Ev0i12IVKdY||2012-12-31 08:36:21 1 67e79552-7e06-44bd-9e95-87f7cb634de3||CAESEFA6fd_C1PBslKgOj6_BI28||2012-12-29 05:23:11 1 0db77e8d-ed94-43cf-8860-b4e43dfa24aa||CAESECbwN7VY6o8om79mZ905GIA||2013-01-02 16:15:34 1 ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
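[Editor's sketch] The reduce phase being discussed above merges per-key counts produced by the map phase, which is what makes the result count stable. A minimal pure-Python sketch of that merging logic (equivalent in spirit to the thread's JavaScript reduce; the function name and sample rows are illustrative, not from the Riak API):

```python
def merge_counts(values, initial=None):
    """Fold a list of {key: count} dicts into one combined dict.

    Mirrors the JavaScript reduce used in the thread: each map output
    is a single-entry dict, and the reduce phase sums counts per key.
    The explicit initial accumulator keeps the fold well defined when
    Riak invokes reduce with an empty input list.
    """
    acc = dict(initial or {})
    for item in values:
        for key, count in item.items():
            acc[key] = acc.get(key, 0) + count
    return acc

# Example: two map outputs for the same alt_key collapse into one row.
rows = [
    {"a||b||2013-01-04": 1},
    {"a||b||2013-01-04": 1},
    {"c||d||2013-01-02": 1},
]
merged = merge_counts(rows)
```

Because reduce is associative here, it can be re-applied to partial results from different vnodes and still yield one consistent total per key.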
Same MR query, different results every run...........
Hi, Here is my mapper...

query.map('''
function(value, keyData, arg) {
  if(value.length == 0){
    return [];
  }else{
    var data = Riak.mapValuesJson(value)[0];
    var obj = {};
    if(data['campaign_id']=='%s'){
      try{
        var alt_key = data['ckid'] + '||' + data['gid'] + '||' + data['ts_hms'];
      } catch(err){
        var alt_key = 'error';
      }
      obj[alt_key] = 1;
      return [ obj ];
    }else{
      return [];
    }
  }
}''' % campaign_id)

When I run the query repeatedly, about every 2 seconds, I get the below. A few times I get 14 rows, a few times 13, then back to 14, etc. So... why? There should be no variation. I have a three-node cluster, two cores, 4 gigs of RAM, on Ubuntu 12.06, using the latest Riak.

TOTAL: 14 3dc3f58f-faea-4751-94b5-8a9a076d4b3f||CAESEGYMM1Q34DV8Ev0i12IVKdY||2012-12-31 08:36:21 1 b4d82fa0-5cd4-4813-a150-554ebca30f1f||CAESEM98NHldIIyAzY0CIUnKudw||2013-01-04 06:18:37 1 8743af22-a664-4b60-ac59-b79d52c12e9e||CAESEH2PIdEYXvk3Dsg2_vF6Qcc||2013-01-04 09:13:30 1 cef36621-527c-4b7a-be6f-5842e13a1350||CAESEHsyPPSizUsT-j31I-nCLzQ||2013-01-05 12:50:22 1 663fb22d-c60d-46b7-8b5b-c9be103c2084||CAESEDtHYmtttm7DBCRpCSU9zYE||2013-01-04 08:55:06 1 e2b6afda-b838-48d5-a449-7b568b9f6b04||CAESEBciJaIqccs2584wIgdsOqc||2013-01-04 04:02:13 1 66aa05fe-9c55-43b2-93ae-c8cb19d097d7||CAESEBuVyK-X_iNGaiiLhPsT0TE||2013-01-02 01:29:38 1 0969a7ca-4324-4118-9038-b6fc11f08a36||CAESENwCD1bw1VvtIamGBCUl_zk||2013-01-02 00:55:01 1 f78b77f6-a08c-4f07-b982-7b2cdcefba4f||CAESEJiWNlcbRN7Sx9o2FB7fbaU||2012-12-29 05:22:46 1 8050e5a7-1583-459a-983f-55feaf0e2a6c||CAESED2NyW9XDEbiKb1UD4sTzvI||2013-01-05 12:18:59 1 58b84566-ad3a-4a3f-91bd-1c61986fbadb||CAESELQcGkigDvXrtRDgOlw9rX0||2013-01-04 16:19:25 1 0db77e8d-ed94-43cf-8860-b4e43dfa24aa||CAESECbwN7VY6o8om79mZ905GIA||2013-01-02 16:15:34 1 67e79552-7e06-44bd-9e95-87f7cb634de3||CAESEFA6fd_C1PBslKgOj6_BI28||2012-12-29 05:23:11 1 ffc3c6ae-beee-4dfe-b41d-ec3a72bddf67||CAESEN_MAXs55jCPIwuyvfTZIZc||2012-12-28 07:56:03 1 TOTAL: 13
b4d82fa0-5cd4-4813-a150-554ebca30f1f||CAESEM98NHldIIyAzY0CIUnKudw||2013-01-04 06:18:37 1 8743af22-a664-4b60-ac59-b79d52c12e9e||CAESEH2PIdEYXvk3Dsg2_vF6Qcc||2013-01-04 09:13:30 1 cef36621-527c-4b7a-be6f-5842e13a1350||CAESEHsyPPSizUsT-j31I-nCLzQ||2013-01-05 12:50:22 1 663fb22d-c60d-46b7-8b5b-c9be103c2084||CAESEDtHYmtttm7DBCRpCSU9zYE||2013-01-04 08:55:06 1 e2b6afda-b838-48d5-a449-7b568b9f6b04||CAESEBciJaIqccs2584wIgdsOqc||2013-01-04 04:02:13 1 66aa05fe-9c55-43b2-93ae-c8cb19d097d7||CAESEBuVyK-X_iNGaiiLhPsT0TE||2013-01-02 01:29:38 1 0969a7ca-4324-4118-9038-b6fc11f08a36||CAESENwCD1bw1VvtIamGBCUl_zk||2013-01-02 00:55:01 1 f78b77f6-a08c-4f07-b982-7b2cdcefba4f||CAESEJiWNlcbRN7Sx9o2FB7fbaU||2012-12-29 05:22:46 1 8050e5a7-1583-459a-983f-55feaf0e2a6c||CAESED2NyW9XDEbiKb1UD4sTzvI||2013-01-05 12:18:59 1 58b84566-ad3a-4a3f-91bd-1c61986fbadb||CAESELQcGkigDvXrtRDgOlw9rX0||2013-01-04 16:19:25 1 3dc3f58f-faea-4751-94b5-8a9a076d4b3f||CAESEGYMM1Q34DV8Ev0i12IVKdY||2012-12-31 08:36:21 1 67e79552-7e06-44bd-9e95-87f7cb634de3||CAESEFA6fd_C1PBslKgOj6_BI28||2012-12-29 05:23:11 1 0db77e8d-ed94-43cf-8860-b4e43dfa24aa||CAESECbwN7VY6o8om79mZ905GIA||2013-01-02 16:15:34 1 ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Keyfilters. I cant get date ranges to work using key filters
OMG... now I get how to use 2i in map reduce. For the life of me I could never get it to click in my brain. Anyway, it worked. Very cool. Nonetheless, I am still curious about why using key filters would not work. Thanks

On Mon, Dec 24, 2012 at 8:57 PM, Sean Cribbs s...@basho.com wrote: David, Perhaps key filters are the wrong approach. Are you using the LevelDB backend? If so, then a secondary index lookup on the special '$key' index is what you need. I can't tell from your examples above what language/client you're using, but here's how the raw inputs would look in the JSON:

{
  "bucket": "yourbucket",  // change this to your bucket name
  "index": "$key",
  "start": "2012123000",   // beginning of your range
  "end": "2012122323"      // end of your range
}

Not only is this simpler to express, but it also limits the range of scans done in the backend, meaning your queries shouldn't take as long to complete.

On Mon, Dec 24, 2012 at 2:16 AM, David Montgomery davidmontgom...@gmail.com wrote: Thanks, If I use a logical '&' I get no data. If I just use a greater-than, then it works.

date_start = '2012122300'
date_end = '2012122323'

filters = key_filter.tokenize(':', filter_map['date']).greater_than_eq(date_start) & key_filter.tokenize(':', filter_map['date']).less_than_eq(date_end)
query.add_key_filters(filters)

filters = key_filter.tokenize(':', 4) + (key_filter.string_to_int().greater_than_eq(date_start))
query.add_key_filters(filters)

I even tried between:

filters = key_filter.tokenize(':', 4) + (key_filter.between(date_start, date_end))
query.add_key_filters(filters)
print filters

These are the results for one day.
I am really at a loss as to why I can't get Riak to work with what should be very simple logical conditions.

cid5989410021||null||2012122314 1 cid5989410021||null||2012122306 1 cid5989410021||www.sonems.net||2012122305 1 cid5989410021||www.ke5ter.com||2012122406 1 cid5989410021||mobile.brothersoft.com||2012122315 1 cid5989410021||www.renotalk.com||2012122315 1

query.map('''
function(value, keyData, arg) {
  if(value.length == 0){
    return [];
  }else{
    var data = Riak.mapValuesJson(value)[0];
    var obj = {};
    var xs = value.key.split(':');
    var dt = xs[3];
    if(data['adx']=='gdn'){
      try{
        var matches = data['url'].match(/^https?\:\/\/([^\/?#]+)(?:[\/?#]|$)/i);
        var domain = matches && matches[1];
        var alt_key = data['campaign_id'] + '||' + domain + '||' + dt;
      } catch(err){
        var alt_key = 'error';
      }
      var obj = {};
      obj[alt_key] = 1;
      return [ obj ];
    }else{
      return [];
    }
  }
}''')

reducer = '''
function(values, arg){
  if(values.length == 0){
    return [{}]
  }
  return [ values.reduce( function(acc, item) {
    for (var state in item) {
      if (acc[state]) acc[state] += item[state];
      else acc[state] = item[state];
    }
    return acc;
  })];
}
'''

On Mon, Dec 24, 2012 at 7:19 AM, Evan Vigil-McClanahan emcclana...@basho.com wrote: It looks to me like your error is here:

filters = key_filter.tokenize(':', 4) + (key_filter.starts_with('20121223') and key_filter.string_to_int().less_than(2012122423))

The 'and' there is getting interpreted as a logical and:

key_filter.starts_with('20121223') and key_filter.string_to_int().less_than(2012122423)
[['string_to_int'], ['less_than', 2012122423]]

You have to use the sadly non-idiomatic '&' to get it to do what you're trying to do:

key_filter.tokenize(':', 4) + (key_filter.starts_with('20121223') & key_filter.string_to_int().less_than(2012122423))
[['tokenize', ':', 4], ['and', [['starts_with', '20121223']], [['string_to_int'], ['less_than', 2012122423

-- Sean Cribbs
s...@basho.com Software Engineer Basho Technologies, Inc. http://basho.com/ ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
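[Editor's sketch] Sean's suggestion above, expressed as a small Python helper that builds the raw MapReduce "inputs" document for a range query on the special $key index. The function name is illustrative; the bucket name and the date bounds come from the thread, and the resulting dict is what would be serialized into the MapReduce request:

```python
import json

def key_range_inputs(bucket, date_start, date_end):
    """Build the raw MapReduce inputs for a $key secondary-index
    range query, as described in the thread. start/end bound the key
    range lexicographically, which is why zero-padded YYYYMMDDHH
    prefixes sort correctly as strings."""
    return {
        "bucket": bucket,
        "index": "$key",
        "start": date_start,  # inclusive lower bound of the key range
        "end": date_end,      # inclusive upper bound of the key range
    }

# One day's range, using the bounds from David's example.
inputs = key_range_inputs("impressions", "2012122300", "2012122323")
payload = json.dumps(inputs, sort_keys=True)
```

Note that this only works because the date component is fixed-width and zero-padded; lexicographic ordering of the keys then matches chronological ordering.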
Re: Keyfilters. I cant get date ranges to work using key filters
Thanks, If I use a logical '&' I get no data. If I just use a greater-than, then it works.

date_start = '2012122300'
date_end = '2012122323'

filters = key_filter.tokenize(':', filter_map['date']).greater_than_eq(date_start) & key_filter.tokenize(':', filter_map['date']).less_than_eq(date_end)
query.add_key_filters(filters)

filters = key_filter.tokenize(':', 4) + (key_filter.string_to_int().greater_than_eq(date_start))
query.add_key_filters(filters)

I even tried between:

filters = key_filter.tokenize(':', 4) + (key_filter.between(date_start, date_end))
query.add_key_filters(filters)
print filters

These are the results for one day. I am really at a loss as to why I can't get Riak to work with what should be very simple logical conditions.

cid5989410021||null||2012122314 1 cid5989410021||null||2012122306 1 cid5989410021||www.sonems.net||2012122305 1 cid5989410021||www.ke5ter.com||2012122406 1 cid5989410021||mobile.brothersoft.com||2012122315 1 cid5989410021||www.renotalk.com||2012122315 1

query.map('''
function(value, keyData, arg) {
  if(value.length == 0){
    return [];
  }else{
    var data = Riak.mapValuesJson(value)[0];
    var obj = {};
    var xs = value.key.split(':');
    var dt = xs[3];
    if(data['adx']=='gdn'){
      try{
        var matches = data['url'].match(/^https?\:\/\/([^\/?#]+)(?:[\/?#]|$)/i);
        var domain = matches && matches[1];
        var alt_key = data['campaign_id'] + '||' + domain + '||' + dt;
      } catch(err){
        var alt_key = 'error';
      }
      var obj = {};
      obj[alt_key] = 1;
      return [ obj ];
    }else{
      return [];
    }
  }
}''')

reducer = '''
function(values, arg){
  if(values.length == 0){
    return [{}]
  }
  return [ values.reduce( function(acc, item) {
    for (var state in item) {
      if (acc[state]) acc[state] += item[state];
      else acc[state] = item[state];
    }
    return acc;
  })];
}
'''

On Mon, Dec 24, 2012 at 7:19 AM, Evan Vigil-McClanahan emcclana...@basho.com wrote: It looks to me like your error is here:

filters = key_filter.tokenize(':', 4) + (key_filter.starts_with('20121223') and key_filter.string_to_int().less_than(2012122423))

The 'and' there is getting interpreted as a logical and:

key_filter.starts_with('20121223') and key_filter.string_to_int().less_than(2012122423)
[['string_to_int'], ['less_than', 2012122423]]

You have to use the sadly non-idiomatic '&' to get it to do what you're trying to do:

key_filter.tokenize(':', 4) + (key_filter.starts_with('20121223') & key_filter.string_to_int().less_than(2012122423))
[['tokenize', ':', 4], ['and', [['starts_with', '20121223']], [['string_to_int'], ['less_than', 2012122423

___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: CRASH REPORT Process TypeError: reduce of empty array with no initial value},{source,unknown}]}
Hi, Thanks... in my mapper I added obj = {}, which I had left out. But I still can't get MR to work; I also left out that I am using key filters. I get the same error, and am wondering if I am using key filters correctly.

{'uuid': 1, 'campaign_id': 2, 'adgroup_id': 3, 'date': 4, 'country': 5}

filters = key_filter.tokenize(':', 5).eq(country) & \
          key_filter.tokenize(':', 2).eq(campaign_id) & \
          key_filter.tokenize(':', 4).string_to_int().less_than_eq(date_end) & \
          key_filter.tokenize(':', 4).string_to_int().greater_than_eq(date_start)
query.add_key_filters(filters)

'[['and', [['tokenize', ':', 5], ['eq', 'SG']], [['tokenize', ':', 2], ['eq', 'cid5989410021']], [['tokenize', ':', 4], ['string_to_int'], ['less_than_eq', '2012122509']], [['tokenize', ':', 4], ['string_to_int'], ['greater_than_eq', '2012121810'

If I leave out key_filter.tokenize(':', filter_map['date']).string_to_int().greater_than_eq(date_start), the query works and I get the data I want. If I include it, I get the error. So... what I can't figure out is why I can't use the last filter. My key looks like this:

07712f3d-6661-44a7-89ff-4e4ed3b5f5a6:cid5989410021:agid7744464312:2012122205:SG

That is, uuid:campaign_id:aid:date:country, where date is in YYYYMMDDHH format. I just want a query that will filter by campaign_id, country, and between a date range. Thanks

On Thu, Dec 20, 2012 at 11:07 PM, Bryan Fink br...@basho.com wrote: On Thu, Dec 20, 2012 at 8:52 AM, David Montgomery davidmontgom...@gmail.com wrote: {fitting_exited_abnormally,[{lineno,3},{message,"TypeError: reduce of empty array with no initial value"} Hi, David. The error is occurring on line three of your reduce function, where it calls `values.reduce`:

query.reduce('''
function(values, arg){
  return [ values.reduce( function(acc, item) { ...

It's complaining about the fact that `values` is an empty array, so `values.reduce` doesn't know what to do. You will need to either include an initial value as a second parameter, like: values.reduce(function(acc, item) { ...
}, {}) //initial empty object Or check to see if `values` is empty before reducing, like the Riak JS reduce builtins do: https://github.com/basho/riak_kv/blob/master/priv/mapred_builtins.js#L68 Cheers, Bryan ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
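[Editor's sketch] Python's reduce has exactly the pitfall Bryan describes: folding an empty sequence raises unless you supply an initial value. A small demonstration, with a dict-merge accumulator analogous to the thread's JavaScript reduce (the `merge` helper is illustrative):

```python
from functools import reduce

values = []  # an empty reduce input, as in the crash report

# Without an initial value this raises TypeError, directly analogous
# to JavaScript's "reduce of empty array with no initial value".
try:
    reduce(lambda acc, item: acc + item, values)
    raised = False
except TypeError:
    raised = True

def merge(acc, item):
    """Sum per-key counts from `item` into a copy of `acc`."""
    out = dict(acc)
    for k, v in item.items():
        out[k] = out.get(k, 0) + v
    return out

# Supplying {} as the initial value makes the empty case well defined.
safe = reduce(merge, values, {})
```

The same rule applies in the Riak JavaScript reduce: pass an initial accumulator as the second argument to `values.reduce`, or guard against an empty `values` first.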
CRASH REPORT Process TypeError: reduce of empty array with no initial value},{source,unknown}]}
2012-12-20 13:31:27.906 [error] 0.30794.1 CRASH REPORT Process 0.30794.1 with 0 neighbours exited with reason: {fitting_exited_abnormally,[{lineno,3},{message,"TypeError: reduce of empty array with no initial value"},{source,unknown}]} in gen_fsm:terminate/7 line 611
2012-12-20 13:31:27.912 [error] 0.213.0 Supervisor riak_pipe_builder_sup had child undefined started with {riak_pipe_builder,start_link,undefined} at 0.30794.1 exit with reason {fitting_exited_abnormally,[{lineno,3},{message,"TypeError: reduce of empty array with no initial value"},{source,unknown}]} in context child_terminated

Hi, The above is the crash report I get when I run the below script using the API. The API does not return an error; I have to look at the logs on the node running the query, else it will run until the timeout is reached.

query.map('''
function(value, keyData, arg) {
  var data = Riak.mapValuesJson(value)[0];
  if(data['adx']=='gdn'){
    try{
      var matches = data['url'].match(/^https?\:\/\/([^\/?#]+)(?:[\/?#]|$)/i);
      var alt_key = matches && matches[1];
    } catch(err){
      var alt_key = 'error';
    }
    obj[alt_key] = 1;
    return [ obj ];
  }else{
    return [];
  }
}''')

query.reduce('''
function(values, arg){
  return [ values.reduce( function(acc, item) {
    for (var state in item) {
      if (acc[state]) acc[state] += item[state];
      else acc[state] = item[state];
    }
    return acc;
  })];
}
''')

Thanks ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
where did my data go when I write to riak?
Hi, I am using the latest Ubuntu release, 64-bit. I am writing to Riak and I get no errors. Then again, given a key, I also get no data. Where did it go? How does one debug Riak when a write looks like a success, but you get no data when you get by key, and no errors?

To start, I wrote to Riak using this bucket. I am using the Python API.

impression_bucket = client.bucket('impressions')
impression_bucket.set_n_val(2)
impression_bucket.set_dw(1)
worker_bucket = impression_bucket.new(id, data=qs)
print worker_bucket.store()

When I write, I get this object, which is my only indication of success. I even changed dw to 2.

riak.riak_object.RiakObject object at 0x3027f10

When I query the key, I get nothing. So... rather confused as to how I can write and the data is now lost. Yes, the buckets are the same for a write and a read. I print out the id as key and data as value, and all are valid entries. I have data stored on an external device. Data is stored in /data/riak and riak has full permissions. In that dir I see these folders: kv_vnode, lost+found, mr_queue. Riak Control looks 100% OK with all green lights. The only option I changed was {platform_data_dir, "/data/riak"} in app.config. Thanks ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
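[Editor's sketch] One thing worth ruling out with n_val=2 and dw=1: a write can be acknowledged by one replica while a subsequent read consults another, and the two are only guaranteed to intersect when r + w > n. A tiny sketch of that arithmetic (values illustrative; this is the general quorum-overlap rule, not a diagnosis of this particular cluster):

```python
def quorums_overlap(n, r, w):
    """True when any read quorum must intersect any write quorum.

    With n replicas, a write acknowledged by w of them and a read
    consulting r of them are guaranteed to share at least one replica
    only when r + w > n. Below that threshold a read can legitimately
    miss a fresh write until read repair or anti-entropy catches up.
    """
    return r + w > n

# Settings resembling the thread: n_val=2 with a single-ack write.
weak = quorums_overlap(n=2, r=1, w=1)    # read may miss the write
strong = quorums_overlap(n=2, r=2, w=1)  # read must see the write
```

If reads still return nothing with overlapping quorums, the problem is more likely elsewhere (wrong bucket/key, or the node not using the intended data directory).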
Failed to create ~p for mapred_queue_dir defaulting to %s: ~p [/data/riak/mr_queue, /tmp/mr_queue, eacces]
Hi, I can't get Riak to work when changing the default data directory to /data/riak. I am using the current release for Ubuntu.

%% Platform-specific installation paths (substituted by rebar)
{platform_bin_dir, "/usr/sbin"},
{platform_data_dir, "/data/riak"},
{platform_etc_dir, "/etc/riak"},
{platform_lib_dir, "/usr/lib/riak"},
{platform_log_dir, "/var/log/riak"}

Every time I start the service it crashes. The volume is there, attached and formatted at boot. I can write to /data/riak. After a chef boot etc. I restart Riak, so Riak should know that path is there to store data. I format the volume as follows: mkfs.ext3 /dev/vdb, and mount /data/riak even before Riak is installed. Below is the head of the console.log file. The first error is: Failed to create ~p for mapred_queue_dir defaulting to %s: ~p [/data/riak/mr_queue,/tmp/mr_queue,eacces]

2012-12-18 04:36:22.583 [warning] 0.148.0@riak_core_ring_manager:reload_ring:231 No ring file available.
2012-12-18 04:36:22.755 [info] 0.154.0@riak_core_capability:process_capability_changes:529 New capability: {riak_core,vnode_routing} = proxy
2012-12-18 04:36:22.763 [info] 0.154.0@riak_core_capability:process_capability_changes:529 New capability: {riak_core,staged_joins} = true
2012-12-18 04:36:22.773 [info] 0.7.0 Application riak_core started on node 'riak@103.4.112.17'
2012-12-18 04:36:22.828 [info] 0.278.0@riak_core:wait_for_application:425 Waiting for application riak_pipe to start (0 seconds).
2012-12-18 04:36:22.829 [info] 0.7.0 Application riak_pipe started on node 'riak@103.4.112.17' 2012-12-18 04:36:22.876 [info] 0.289.0@riak_core:wait_for_service:445 Waiting for service riak_kv to start (0 seconds) 2012-12-18 04:36:22.929 [info] 0.301.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_map) host starting (0.301.0) 2012-12-18 04:36:22.933 [info] 0.278.0@riak_core:wait_for_application:419 Wait complete for application riak_pipe (0 seconds) 2012-12-18 04:36:22.934 [info] 0.302.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_map) host starting (0.302.0) 2012-12-18 04:36:22.952 [info] 0.303.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_map) host starting (0.303.0) 2012-12-18 04:36:22.965 [info] 0.304.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_map) host starting (0.304.0) 2012-12-18 04:36:22.967 [info] 0.309.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_map) host starting (0.309.0) 2012-12-18 04:36:22.970 [info] 0.316.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_map) host starting (0.316.0) 2012-12-18 04:36:22.983 [info] 0.323.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_map) host starting (0.323.0) 2012-12-18 04:36:22.988 [info] 0.436.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_map) host starting (0.436.0) 2012-12-18 04:36:22.993 [info] 0.438.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_reduce) host starting (0.438.0) 2012-12-18 04:36:23.002 [info] 0.439.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_reduce) host starting (0.439.0) 2012-12-18 04:36:23.004 [info] 0.440.0@riak_kv_js_vm:init:76 
Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_reduce) host starting (0.440.0) 2012-12-18 04:36:23.009 [info] 0.441.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_reduce) host starting (0.441.0) 2012-12-18 04:36:23.015 [info] 0.442.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_reduce) host starting (0.442.0) 2012-12-18 04:36:23.020 [info] 0.443.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_reduce) host starting (0.443.0) 2012-12-18 04:36:23.025 [info] 0.445.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_hook) host starting (0.445.0) 2012-12-18 04:36:23.030 [info] 0.446.0@riak_kv_js_vm:init:76 Spidermonkey VM (thread stack: 16MB, max heap: 8MB, pool: riak_kv_js_hook) host starting (0.446.0) 2012-12-18 04:36:23.038 [warning] 0.448.0@riak_kv_map_master:init_data_dir:256 FORMAT ERROR: Failed to create ~p for mapred_queue_dir defaulting to %s: ~p [/data/riak/mr_queue,/tmp/mr_queue,eacces] 2012-12-18 04:36:23.073 [info] 0.154.0@riak_core_capability:process_capability_changes:529 New capability: {riak_kv,vnode_vclocks} = true 2012-12-18 04:36:23.088 [info] 0.154.0@riak_core_capability:process_capability_changes:529 New capability: {riak_kv,legacy_keylisting} = false 2012-12-18 04:36:23.100 [info]
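[Editor's sketch] The eacces in the log above means the user the node runs as cannot create /data/riak/mr_queue, even though root can write there. The check can be reproduced outside Riak with a few lines of Python (the helper name is illustrative; on the affected node you would run it as the riak user against /data/riak):

```python
import os
import tempfile

def can_create_subdir(parent):
    """Return True if the current user could create a directory under
    `parent` -- roughly what Riak needs to do for mr_queue. Failing
    this while running as the riak user corresponds to the eacces
    seen in console.log (fix: chown -R riak:riak the data dir)."""
    if not os.path.isdir(parent):
        return False
    # Creating an entry requires write + search permission on the parent.
    return os.access(parent, os.W_OK | os.X_OK)

# Demonstration against a directory we know is writable; on the node
# you would call can_create_subdir("/data/riak") as the riak user.
with tempfile.TemporaryDirectory() as d:
    writable = can_create_subdir(d)
```

A freshly formatted ext3 volume is owned by root, so mounting it at /data/riak without a chown leaves the riak user unable to create its subdirectories.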
Re: newbie with volumes and best practices with /var/lib/riak data directory
Thanks, that helps... So... what happens if I have an existing path on a smaller drive and I attach a new volume and switch? In essence, I am looking for best practice on how to handle this. Is it better to do a rolling upgrade, where I start new machines fresh with 1T and then remove the smaller nodes one by one? So... I have three nodes with 20 gigs each. Increase to 6 nodes. Then one by one remove the old nodes, and I'm back to three with 1T each. It does seem that a rolling upgrade involves much less risk. Thanks

On Wed, Dec 12, 2012 at 2:13 PM, Sean Carey ca...@basho.com wrote: Hey David, You are correct. You can do one of two things here: 1) Mount your 1TB volume to /var/lib/riak. All ring and backend data will be stored there. 2) Mount your volume somewhere like /data/riak and switch the platform_data_dir config option to reflect that mounted path. If you wanted to snapshot your Riak data, it all lives in platform_data_dir. Hope this helps, Sean @densone

On Wednesday, December 12, 2012 at 12:32 AM, David Montgomery wrote: Hi, I use Datapipe for my cloud provider. And I am rather new, from a sysadmin perspective, to having to attach volumes. I am using Ubuntu 12.04 64-bit. So... now that I can attach a 1T volume on machine boot, my question is this. When I mount the new volume, should I mount /var/lib/riak? I see that the data directory is {platform_data_dir, "/var/lib/riak"} in the app.config file. Thus data will be written to the new drive solely, making use of the 1T? Data will not be stored on the default volume, which is 20 gigs on Datapipe. Then at this point I can snapshot the volume to capture all data? Thanks David ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
newbie with volumes and best practices with /var/lib/riak data directory
Hi, I use Datapipe for my cloud provider. And I am rather new, from a sysadmin perspective, to having to attach volumes. I am using Ubuntu 12.04 64-bit. So... now that I can attach a 1T volume on machine boot, my question is this. When I mount the new volume, should I mount /var/lib/riak? I see that the data directory is {platform_data_dir, "/var/lib/riak"} in the app.config file. Thus data will be written to the new drive solely, making use of the 1T? Data will not be stored on the default volume, which is 20 gigs on Datapipe. Then at this point I can snapshot the volume to capture all data? Thanks David ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
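[Editor's sketch] A quick way to verify the mount actually took effect, so data lands on the attached volume rather than the small root disk: check that the data directory is a mount point, or that it sits on a different device than the root filesystem. Stdlib-only sketch (the paths from the thread are illustrative):

```python
import os

def is_mount_point(path):
    """True when `path` is the root of a mounted filesystem, e.g.
    /var/lib/riak after mounting the 1T volume there. Thin wrapper
    kept for readability; os.path.ismount does the work."""
    return os.path.ismount(path)

def on_same_device(path_a, path_b):
    """True when both paths live on the same filesystem/device.

    Comparing st_dev is a cheap sanity check: if the data directory
    reports the same device as "/", writes are still going to the
    root disk and the mount did not take effect."""
    return os.stat(path_a).st_dev == os.stat(path_b).st_dev

# "/" is always a mount point; on a node you would instead check
# is_mount_point("/var/lib/riak") and on_same_device("/", "/var/lib/riak").
root_is_mount = is_mount_point("/")
```

If `on_same_device("/", "/var/lib/riak")` comes back True after the supposed mount, the fstab entry or boot-time mount ordering is the place to look.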
Map Reduce Exception: {phase:0,error:[timeout],input:
Hi, I am having an issue with getting data out of MR. If I run on a few particular days I get the below error, and it returns almost immediately. On other days it returns valid data. I am using a key filter to select by day, which is when I noticed that a date range was failing: key_filter.starts_with('20121207'). I am running on a three-node cluster with 2 cores each and 4 gigs of RAM. When it works, it returns data pretty fast. So, per the below error, a key did exist from that day. Even further, I can get the values of that key in the error. Why the timeout error? I have a generous timeout.

for result in query.run(timeout=30):
    #print pprint(result)
    for k, v in result.iteritems():
        print k, v

Traceback (most recent call last):
  File "/home/ubuntu/workspace/rtbopsConfig/rtbServers/rtbWorkerServer/riak/mapreduce_v1.py", line 61, in <module>
    main()
  File "/home/ubuntu/workspace/rtbopsConfig/rtbServers/rtbWorkerServer/riak/mapreduce_v1.py", line 57, in main
    for result in query.run(timeout=3):
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.0-py2.7.egg/riak/mapreduce.py", line 232, in run
    result = t.mapred(self._inputs, query, timeout)
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.0-py2.7.egg/riak/transports/pbc.py", line 454, in mapred
    _handle_response)
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.0-py2.7.egg/riak/transports/pbc.py", line 548, in send_msg_multi
    msg_code, resp = self.recv_msg(conn, expect)
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.0-py2.7.egg/riak/transports/pbc.py", line 589, in recv_msg
    raise Exception(msg.errmsg)
Exception: {"phase":0,"error":"[timeout]","input":"{\"impressions\",\"0109d84f-20d9-48cd-b098-716e016cae9b:cid6587015966:agid5748040653:2012120107:SG\"}","type":"forward_preflist","stack":"[]"}

So... how do I get data out without causing an exception? Otherwise I have to write a loop that skips a day, which does not quite seem right. Thanks ___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
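[Editor's sketch] Until the underlying server issue is fixed, a client-side guard can retry a failing day's query a few times and move on instead of crashing the whole export. This is a generic wrapper, not part of the Riak API; `run_query` stands in for a closure around `query.run(timeout=...)`:

```python
def run_with_retries(run_query, attempts=3):
    """Call `run_query()` up to `attempts` times.

    Returns the first successful result, or None if every attempt
    raised. The old Riak Python client raises a bare Exception on
    MapReduce errors (as in the traceback above), so that is all we
    can catch here.
    """
    for _ in range(attempts):
        try:
            return run_query()
        except Exception:
            continue
    return None

# Demonstration with a stub that times out twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise Exception('{"phase":0,"error":"[timeout]"}')
    return ["row"]

result = run_with_retries(flaky)
```

In the export loop, a None result can be logged as a skipped day, so one bad partition does not block the rest of the data from coming out.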
error with MR and someone had painted it blue
I am trying to run an MR job. I get the below error. Other than a 500 error, I don't see any other means to debug. What does the error mean?

Traceback (most recent call last):
  File "/home/ubuntu/workspace/rtbopsConfig/rtbServers/rtbWorkerServer/mapred_nokeypy.py", line 69, in <module>
    main()
  File "/home/ubuntu/workspace/rtbopsConfig/rtbServers/rtbWorkerServer/mapred_nokeypy.py", line 59, in main
    for result in query.run(timeout=300):
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.0-py2.7.egg/riak/mapreduce.py", line 232, in run
    result = t.mapred(self._inputs, query, timeout)
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.0-py2.7.egg/riak/transports/http.py", line 321, in mapred
    (repr(response[0]), repr(response[1])))
Exception: Error running MapReduce operation. Headers: {'date': 'Sun, 09 Dec 2012 07:10:40 GMT', 'content-length': '190', 'content-type': 'application/json', 'http_code': 500, 'server': 'MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)'} Body: '{"phase":0,"error":"[timeout]","input":"{\\"impressions\\",\\"0034fad8-8216-4e12-95e9-abcf4af498ca:cid6587015966:agid5748040653:2012112805:SG\\"}","type":"forward_preflist","stack":"[]"}'

___ riak-users mailing list riak-users@lists.basho.com http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
python map reduce and secondary indexes
Hi,

Given that map reduce is the primary way of getting data out of Riak, and I use the Python API, I am hard pressed to find any simple examples; not even the officially supported Riak Python API docs have them. Below is how I add a record to Riak:

    id = "%s:%s:%s:%s:%s" % (str(uuid4()), campaign_id, aid, da, country)
    worker_bucket = impression_bucket.new(id, data=qs)
    worker_bucket.add_index('field1_bin', campaign_id)
    worker_bucket.add_index('field2_bin', aid)
    worker_bucket.add_index('field3_bin', country)
    worker_bucket.add_index('field4_bin', da)

So if I want to get all records and sum up by country, where the date is in a date range, how? I am OK with the reduce portion but not clear on the map portion. How do I query the index for country=US and da between 201207 and 201212?

    client = riak.RiakClient(host='103.4.112.103')
    query = client.add('impressions')
    query.map('''
        function(value, keyData, arg) {
            var data = Riak.mapValuesJson(value)[0];
            var alt_key = data['hw'] + '_' + data['ssp'];
            var obj = {};
            obj[alt_key] = 1;
            return [ obj ];
        }''')

Thanks
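To make the map/reduce split concrete, here is a pure-Python model of what each phase must emit for "sum impressions by country within a date range". The sample records and field names are made up for illustration; in the real job, the map phase is the JavaScript function receiving Riak objects, but it must return the same shape of one-entry dicts.

```python
# Pure-Python model of the two phases (no Riak required).

records = [
    {"country": "US", "da": "201208"},
    {"country": "US", "da": "201211"},
    {"country": "SG", "da": "201209"},
    {"country": "US", "da": "201301"},  # outside the range, dropped by map
]

def map_phase(rec, lo="201207", hi="201212"):
    # Emit {country: 1} only for records inside the date range.
    if lo <= rec["da"] <= hi:
        return [{rec["country"]: 1}]
    return []

def reduce_phase(values):
    # Merge the per-record dicts into one running total.
    acc = {}
    for item in values:
        for k, n in item.items():
            acc[k] = acc.get(k, 0) + n
    return [acc]

mapped = [d for rec in records for d in map_phase(rec)]
print(reduce_phase(mapped))  # [{'US': 2, 'SG': 1}]
```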
CRASH REPORT Process <0.549.0> with 0 neighbours exited with reason:
Hi,

I am trying to add new nodes. I get "node unreachable" when handing off. I restart the new node and it dies quickly. Below is the output of the error log. So, how do I resolve this?

2012-12-05 02:50:35.335 [error] <0.14969.101>@riak_core_handoff_sender:start_fold:210 ownership_handoff transfer of riak_kv_vnode from 'riak@103.4.112.53' 296867520082839655260123481645494988367611297792 to 'riak@103.4.112.103' 296867520082839655260123481645494988367611297792 failed because of closed
2012-12-05 02:50:35.348 [error] <0.152.0>@riak_core_handoff_manager:handle_info:274 An outbound handoff of partition riak_kv_vnode 296867520082839655260123481645494988367611297792 was terminated for reason: {shutdown,{error,closed}}
2012-12-05 02:50:35.688 [error] <0.14445.101>@riak_core_handoff_sender:start_fold:210 ownership_handoff transfer of riak_kv_vnode from 'riak@103.4.112.53' 388211372416021087647853783690262677096107081728 to 'riak@103.4.112.103' 388211372416021087647853783690262677096107081728 failed because of closed
2012-12-05 02:50:35.694 [error] <0.152.0>@riak_core_handoff_manager:handle_info:274 An outbound handoff of partition riak_kv_vnode 388211372416021087647853783690262677096107081728 was terminated for reason: {shutdown,{error,closed}}
2012-12-05 02:50:53.487 [error] <0.15966.101>@riak_core_handoff_sender:start_fold:210 ownership_handoff transfer of riak_kv_vnode from 'riak@103.4.112.53' 296867520082839655260123481645494988367611297792 to 'riak@103.4.112.103' 296867520082839655260123481645494988367611297792 failed because of closed
2012-12-05 02:50:53.494 [error] <0.152.0>@riak_core_handoff_manager:handle_info:274 An outbound handoff of partition riak_kv_vnode 296867520082839655260123481645494988367611297792 was terminated for reason: {shutdown,{error,closed}}
2012-12-05 03:29:23.686 [error] <0.549.0>@riak_kv_vnode:init:265 Failed to start riak_kv_eleveldb_backend Reason: {db_open,"Corruption: CURRENT file does not end with newline"}
2012-12-05 03:29:23.751 [error] <0.169.0> gen_server riak_core_vnode_manager terminated with reason: no match of right hand value {error,{db_open,"Corruption: CURRENT file does not end with newline"}} in riak_core_vnode_manager:get_vnode/3 line 489
2012-12-05 03:29:23.767 [error] <0.169.0> CRASH REPORT Process riak_core_vnode_manager with 0 neighbours exited with reason: no match of right hand value {error,{db_open,"Corruption: CURRENT file does not end with newline"}} in riak_core_vnode_manager:get_vnode/3 line 489 in gen_server:terminate/6 line 747
2012-12-05 03:29:23.775 [error] <0.549.0> CRASH REPORT Process <0.549.0> with 0 neighbours exited with reason: {db_open,"Corruption: CURRENT file does not end with newline"} in gen_fsm:init_it/6 line 371
Riak node is a member of two clusters?
I have three nodes. When I go to Riak Control on one machine, I see nodes A and B. On another node I see B and C. I thought Riak was supposed to prevent this from happening.
Map Reduce and long queries
Hi,

Below is my code for running a map reduce job in Python. I have a six-node cluster, 2 cores each with 4 GB of RAM. There is no load, about 3 million keys, and I am using LevelDB with Riak 1.2. The job below takes a terribly long time: it never finishes, and I don't even know how to check whether it is still running, other than that the Python script has not timed out. I look at the number of executed mappers in stats and it is flat-lined when looking at Graphite. On test queries the code below works. So, how do I debug what is going on?

    def main():
        client = riak.RiakClient(host=riak_host, port=8087,
                                 transport_class=riak.transports.pbc.RiakPbcTransport)
        query = client.add(bucket)
        filters = key_filter.tokenize(":", filter_map['date']) + \
                  key_filter.starts_with('201210')
        # key_filter.tokenize(":", filter_map['country']).eq("US") \
        # key_filter.tokenize(":", filter_map['campaign_id']).eq("t1") \
        query.add_key_filters(filters)
        query.map('''
            function(value, keyData, arg) {
                var data = Riak.mapValuesJson(value)[0];
                if (data['adx'] == 'gdn') {
                    var alt_key = data['hw'];
                    var obj = {};
                    obj[alt_key] = 1;
                    return [ obj ];
                } else {
                    return [];
                }
            }''')
        query.reduce('''
            function(values, arg) {
                return [ values.reduce( function(acc, item) {
                    for (var state in item) {
                        if (acc[state]) acc[state] += item[state];
                        else acc[state] = item[state];
                    }
                    return acc;
                })];
            }
        ''')
        for result in query.run(timeout=30):
            print result
map reduce timeouts
Hi,

I am trying to run this map reduce job and get a timeout. Logic would dictate that I add a timeout to the RiakClient, but that flag is not there. How do I deal with timeouts with the Python API? I have data; I just can't get it out.

  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.0-py2.7.egg/riak/transports/http.py", line 321, in mapred
    (repr(response[0]), repr(response[1])))
Exception: Error running MapReduce operation. Headers: {'date': 'Sat, 13 Oct 2012 07:26:31 GMT', 'content-length': '19', 'content-type': 'application/json', 'http_code': 500, 'server': 'MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)'} Body: '{error:timeout}'

    client = riak.RiakClient(host='111.111.111.111', port=8087,
                             transport_class=riak.RiakPbcTransport)
    query = client.add('impressions')
    query.map('''
        function(value, keyData, arg) {
            var data = Riak.mapValuesJson(value)[0];
            if (data['adx'] == 'gdn') {
                var alt_key = data['hw'] + '_' + data['adx'];
                var obj = {};
                obj[alt_key] = 1;
                return [ obj ];
            } else {
                return [];
            }
        }''')
    query.reduce('''
        function(values, arg) {
            return [ values.reduce( function(acc, item) {
                for (var state in item) {
                    if (acc[state]) acc[state] += item[state];
                    else acc[state] = item[state];
                }
                return acc;
            })];
        }
    ''')
    for result in query.run():
        print result
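Judging from the tracebacks in the neighbouring posts, the old Python client takes the timeout per MapReduce job via `query.run(timeout=...)`, not on the RiakClient constructor. A hedged sketch of retrying a job with progressively larger timeouts; `run_query` is a stand-in for something like `lambda t: query.run(timeout=t)`.

```python
# Retry a MapReduce job with increasing timeouts before giving up.

def run_with_backoff(run_query, timeouts=(60, 300, 900)):
    last_exc = None
    for t in timeouts:
        try:
            return run_query(t)
        except Exception as exc:
            last_exc = exc  # keep trying with a larger timeout
    raise last_exc

# Stub showing the behaviour: only the largest timeout "succeeds".
attempts = []
def fake_run(t):
    attempts.append(t)
    if t < 900:
        raise Exception("timeout")
    return ["ok"]

print(run_with_backoff(fake_run))  # ['ok']
print(attempts)                    # [60, 300, 900]
```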
Getting Stats into a python dict
Hi,

I see that in Riak Control I can view stats: https://111.111.11.111:8069/stats. Does the Python API support getting the stats into a dictionary?

Thanks
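The client version used in these posts has no stats helper, but a Riak node serves /stats as plain JSON over its HTTP interface, so the standard library is enough. The host, port, and URL below are assumptions; the HTTP interface is commonly on port 8098 rather than the Riak Control HTTPS port.

```python
# Fetch /stats and parse it into a plain Python dict.

import json
try:
    from urllib2 import Request, urlopen          # Python 2 (as in these posts)
except ImportError:
    from urllib.request import Request, urlopen   # Python 3

def get_stats(url="http://127.0.0.1:8098/stats"):
    req = Request(url, headers={"Accept": "application/json"})
    return json.loads(urlopen(req).read())

# The parsing half is just json.loads on the response body:
sample = '{"ring_num_partitions": 64, "node_gets": 10}'
stats = json.loads(sample)
print(stats["ring_num_partitions"])  # 64
```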
Fastest write to riak
Hi,

I am using the Python Riak API, and upon stress testing I hit a death spiral; granted, my Riak cluster was not tuned. I will now set the bucket N value to 2, e.g. test_bucket.set_n_val(2). What is left is w and dw. I would like, at a minimum, an ack that at least one write occurred. I also want the option to write and forget; I can afford data loss. I am using LevelDB.

If I want an ack for one write, do I use test_bucket.set_dw(1)? How do I use test_bucket.set_w(0)? Are there any other tweaks I can make to write with extreme speed? I feel I have LevelDB tuned. I have 10 workers, each on one core in a gevent loop, popping a Redis queue. When I pop the queue and don't write to Riak, the queue length is at most 1. If I write to Riak, it is a death spiral and the Redis queue grows until death. Granted, I did not optimize Riak. I expect 5K qps, and I am hoping to get a clear picture of how to set bucket properties optimally.

Thanks
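A hedged sketch of the write-side quorum properties, using the old client's setters named in the post. The semantics (from the Riak docs of that era): w is how many vnodes must acknowledge the write, dw is how many must acknowledge a durable (on-disk) write; w=1, dw=0 trades durability for latency. The exact values here are illustrative, not recommendations.

```python
# Write-for-speed bucket property choices, expressed as plain data.
fast_props = {
    "n_val": 2,  # two replicas instead of the default three
    "w": 1,      # ack as soon as one vnode accepts the write
    "dw": 0,     # do not wait for any durable (disk) ack
}

# With a real bucket object it would look like (untested sketch):
#   test_bucket.set_n_val(fast_props["n_val"])
#   test_bucket.set_w(fast_props["w"])
#   test_bucket.set_dw(fast_props["dw"])
# Note: w=0 is not a meaningful setting; w=1 is the minimum ack.

print(fast_props["w"] <= fast_props["n_val"])  # True: w can never exceed n_val
```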
Riak Control and issues with cluster
Hi,

I am using Riak Control now. I just booted up 5 nodes with no data. I use Chef to boot the cluster; once booted, I ssh into each machine and run

    riak-admin cluster join riak@111.111.111.111

where 111.111.111.111 is a randomly chosen IP address in the group. I go to Riak Control and this is what I get:

1) I see two green lights.
2) One of the green lights has 100% of the partitions.
3) The other three have a status of "joining".

I waited and waited, and tried Riak Control on other machines. Same story. How do I resolve this?

Thanks

Sent from my iPad
How to reduce RAM usage as much as possible
Hi,

From my understanding, keys are stored in RAM. For the interim, I will have 5 nodes with one core each and 1 GB of RAM. I am not doing any big MR jobs for a while, and when I do, they will run off hours. How do I configure Riak to be RAM friendly? Are there knobs I can tweak?

Thanks

Sent from my iPad
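One set of knobs, sketched as an app.config fragment. The values are illustrative assumptions, not tuned recommendations: the eleveldb backend does not hold every key in RAM the way bitcask's keydir does, and its per-vnode block cache and open-file limits can be lowered on small machines.

```erlang
%% Hedged app.config sketch for small-RAM nodes (values are assumptions).
{riak_kv, [
    {storage_backend, riak_kv_eleveldb_backend}
]},
{eleveldb, [
    {data_root, "/var/lib/riak/leveldb"},
    {cache_size, 8388608},   %% 8 MB block cache per vnode
    {max_open_files, 20}     %% fewer open files, less RAM per vnode
]}
```

A smaller ring_creation_size (set before the cluster is built) also means fewer vnodes per node, and therefore fewer per-vnode caches.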
how to do a key filter using the python api for map reduce
Hi,

I am using the Python API. I need to do a key filter kind of like this:

    "inputs": {
        "bucket": "impressions",
        "key_filters": [["tokenize", ":", 2], ["starts_with", "2012"]]
    }

But from the Python API docs, I have a query that looks like this:

    query = client.add('impressions')
    query.map('''function(v) {
        var data = JSON.parse(v.values[0].data);
        return [[v.key, data]];
    }''')
    for result in query.run():
        print "%s - %s" % (result[0], result[1])

Where and how do I place the key_filters? Thanks

Sent from my iPad
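For reference, here is what that key filter actually matches, modeled in plain Python: tokenize splits the key on ":" and selects the 2nd token, then starts_with keeps keys whose 2nd token begins with "2012". The sample keys are made up for illustration.

```python
# Plain-Python model of [["tokenize", ":", 2], ["starts_with", "2012"]].

def matches(key, sep=":", token=2, prefix="2012"):
    parts = key.split(sep)
    return parts[token - 1].startswith(prefix)  # tokenize is 1-indexed

keys = [
    "abc-uuid:20120830:US",
    "def-uuid:20111231:SG",
]
print([k for k in keys if matches(k)])  # ['abc-uuid:20120830:US']
```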
Re: how to do a key filter using the python api for map reduce
I did not scroll down far enough in the API docs regarding Python. Problem solved.

Sent from my iPad
riak.RiakError: 'this transport is not available (no protobuf)'
What does this error mean and how do I resolve it? I am using the latest version of Riak for Ubuntu and the Riak Python API.

  File "workerServer.py", line 57, in <module>
    client = riak.RiakClient(host='riak.hk.test.com', port=8087,
                             transport_class=riak.transports.pbc.RiakPbcTransport)
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.0-py2.7.egg/riak/client.py", line 78, in __init__
    **transport_options)
  File "/usr/local/lib/python2.7/dist-packages/riak-1.5.0-py2.7.egg/riak/transports/pbc.py", line 168, in __init__
    raise RiakError("this transport is not available (no protobuf)")
riak.RiakError: 'this transport is not available (no protobuf)'
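The error means the client could not import the protobuf bindings that the PBC transport needs. A quick, hedged way to check whether a module is importable (the module names below are stand-ins; for this client the relevant package is `protobuf`, installable with pip):

```python
# Check whether a module can be imported without crashing the program.

def importable(name):
    try:
        __import__(name)
        return True
    except ImportError:
        return False

print(importable("json"))                       # True: stdlib module
print(importable("module_that_does_not_exist")) # False
# If importable("google.protobuf") is False, `pip install protobuf`
# (then reinstalling the riak client) is the usual fix.
```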
Trying to connect to riak from python client
I have Riak installed on my local machine. I am not trying to do a cluster simulation; a single node locally is fine. When I ping the service I get a pong. When I run the Python client code below, it just hangs. So, rather confused about where to go at this point. How do I write to Riak?

    import riak

    client = riak.RiakClient(host='127.0.0.1', port=8087)
    bucket = client.bucket('test')

    # Supply a key to store data under.
    # The ``data`` can be any data Python's ``json`` encoder can handle.
    person = bucket.new('riak_developer_1', data={
        'name': 'John Smith',
        'age': 28,
        'company': 'Mr. Startup!',
    })

    # Save the object to Riak.
    person.store()
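A likely cause of the hang, offered as an assumption: 8087 is the protocol-buffers port, but the old client defaults to the HTTP transport (port 8098), so it speaks HTTP at a PB socket and waits. Before involving the client at all, a stdlib check that the port is even listening:

```python
# Check whether a TCP port is accepting connections.

import socket

def port_open(host, port, timeout=2.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return True
    except (socket.error, OSError):
        return False
    finally:
        s.close()

print(port_open("127.0.0.1", 1))  # False, unless something listens on port 1
```

With the old client, the transport must match the port (a sketch): `riak.RiakClient(host='127.0.0.1', port=8087, transport_class=riak.transports.pbc.RiakPbcTransport)`, or keep the HTTP default and use port=8098.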
Riak failed to start within 15 seconds
Hi,

I am new to Riak. I followed the instructions for a cluster setup at https://wiki.basho.com/Basic-Cluster-Setup.html. It did not work. I am using 64-bit Ubuntu. I changed the IP in app.config and vm.args and stopped the service, then ran the below:

    riak-admin reip riak@127.0.0.1 r...@xxx.xxx.xxx.xxx
    Attempting to restart script through sudo -u riak
    Backed up existing ring file to /var/lib/riak/ring/riak_core_ring.default.20120724221850.BAK
    New ring file written to /var/lib/riak/ring/riak_core_ring.default.20120725161924

    root@i-157-16777-VM:~# sudo service riak start
    Riak failed to start within 15 seconds,
    see the output of 'riak console' for more information.
    If you want to wait longer, set the environment variable
    WAIT_FOR_ERLANG to the number of seconds to wait.

riak-admin status | grep ring_members yields no results. I am now at a dead end. How do I resolve this?
error compiling cpp driver for riak
I tried to compile the cpp driver code from GitHub, and below are the errors. I am on 64-bit Ubuntu using gcc 4.5. At the prompt I typed scons. So, what do I do? Thanks

test@test:~/Downloads/riak-cpp$ scons
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
protoc build/riak/riakclient.proto --cpp_out=.
(compile) build/riak/client.cxx
In file included from ./riak/client.hxx:2:0,
                 from build/riak/client.cxx:1:
./riak/message.hxx:29:28: error: ‘error_code’ is not a member of ‘std’
./riak/message.hxx:29:76: error: functional cast expression list treated as compound expression
./riak/message.hxx:29:77: error: template argument 1 is invalid
./riak/message.hxx:29:86: error: invalid type in declaration before ‘;’ token
build/riak/client.cxx: In member function ‘void riak::client::delete_object(const riak::key, const riak::key, riak::delete_response_handler)’:
build/riak/client.cxx:95:105: error: cannot convert ‘std::_Bindbool (*(std::functionvoid(const std::error_code, const std::basic_stringchar, const std::basic_stringchar), std::basic_stringchar, std::basic_stringchar, std::_Placeholder1, std::_Placeholder2, std::_Placeholder3))(std::functionvoid(const std::error_code, const std::basic_stringchar, const std::basic_stringchar), const std::basic_stringchar, const std::basic_stringchar, const std::error_code, long unsigned int, const std::basic_stringchar)’ to ‘riak::message::handler’ in initialization
build/riak/client.cxx: In member function ‘void riak::client::get_object(const riak::key, const riak::key, riak::get_response_handler)’:
build/riak/client.cxx:171:70: error: cannot convert ‘std::_Bindbool (*(std::basic_stringchar, std::basic_stringchar, std::functionstd::shared_ptrRpbContent(const google::protobuf::RepeatedPtrFieldRpbContent), riak::unnamed::delivery_arguments, std::functionvoid(const std::error_code, std::shared_ptrRpbContent, std::functionvoid(const std::shared_ptrRpbContent, std::functionvoid(const std::error_code))), std::_Placeholder1, std::_Placeholder2, std::_Placeholder3))(const std::basic_stringchar, const std::basic_stringchar, std::functionstd::shared_ptrRpbContent(const google::protobuf::RepeatedPtrFieldRpbContent), riak::unnamed::delivery_arguments, std::functionvoid(const std::error_code, std::shared_ptrRpbContent, std::functionvoid(const std::shared_ptrRpbContent, std::functionvoid(const std::error_code))), const std::error_code, long unsigned int, const std::basic_stringchar)’ to ‘riak::message::handler’ in initialization
build/riak/client.cxx: In function ‘riak::message::handler riak::unnamed::make_resolution_response_handler(std::shared_ptrRpbContent, riak::unnamed::resolution_response_handler_for_object)’:
build/riak/client.cxx:246:63: error: cannot convert ‘std::_Bindstd::functionbool(std::shared_ptrRpbContent, const std::error_code, long unsigned int, const std::basic_stringchar)(std::shared_ptrRpbContent, std::_Placeholder1, std::_Placeholder2, std::_Placeholder3)’ to ‘riak::message::handler’ in return
build/riak/client.cxx: In function ‘void riak::unnamed::put_cold(const riak::key, const riak::key, const std::shared_ptrRpbContent, riak::unnamed::delivery_arguments, riak::put_response_handler)’:
build/riak/client.cxx:350:107: error: cannot convert ‘std::_Bindbool (*(std::functionvoid(const std::error_code), std::_Placeholder1, std::_Placeholder2, std::_Placeholder3))(std::functionvoid(const std::error_code), const std::error_code, long unsigned int, const std::basic_stringchar)’ to ‘riak::message::handler’ for argument ‘3’ to ‘void riak::unnamed::send_put_request(RpbPutReq, riak::unnamed::delivery_arguments, riak::message::handler)’
scons: *** [build/riak/client.o] Error 1
scons: building terminated because of errors.