percolation against same non-changed docs?

2014-09-12 Thread sabdalla80
I understand the concept of how docs are checked against existing 
percolators. One thing I am not clear on: does Elasticsearch run the same 
unchanged documents against the percolators again?

E.g.:
I just indexed 5 million docs and ran them against all percolators.
The next day, I ran the same 5 million docs again. Does it check them against 
the percolators again, or is it smart enough to know these are the same docs? 
Is percolating done in memory, so it doesn't take a lot of time to get 
matches back?
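As far as I know, percolation is stateless: Elasticsearch evaluates every document you send against the registered queries and keeps no record of which docs it has already percolated, so resending the same 5 million docs costs the same work again. Skipping unchanged documents has to happen client-side; a minimal sketch (all names are hypothetical, and `percolate_fn` stands in for whatever client call sends one doc to the percolate API):

```python
import hashlib
import json

def percolate_new_docs(docs, seen_hashes, percolate_fn):
    """Percolate only docs whose content hash hasn't been seen before.

    docs: iterable of JSON-serializable documents.
    seen_hashes: set of hex digests from earlier runs (persist it yourself).
    percolate_fn: callable that sends one doc to the percolate API.
    Returns the number of documents actually percolated.
    """
    sent = 0
    for doc in docs:
        # Stable hash of the document body; an unchanged doc hashes identically.
        digest = hashlib.sha1(
            json.dumps(doc, sort_keys=True).encode("utf-8")
        ).hexdigest()
        if digest in seen_hashes:
            continue  # same doc as a previous run -- skip re-percolating it
        percolate_fn(doc)
        seen_hashes.add(digest)
        sent += 1
    return sent
```

On a second run with the same docs and the same `seen_hashes` set, nothing is sent.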

Thanks

-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/a0aa904d-c755-4c9e-ae21-fc6d06906e26%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: New Errors when upgraded from V1.0 to V1.1.0

2014-07-08 Thread sabdalla80
Yes, it was working fine on my 2-node cluster for a long time before 
upgrading. As a matter of fact, it still does: it indexes docs regardless 
of the exceptions being printed out. But I never had the exceptions before 
upgrading. It is strange, because I can access the 2 nodes and the cluster 
is healthy and seems to be working fine.

On Tuesday, July 8, 2014 7:23:48 PM UTC-4, Mark Walkom wrote:
>
> Could be network connectivity, can you ping/telnet the other nodes ok?
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com 
> web: www.campaignmonitor.com
>  
>
> On 9 July 2014 09:03, sabdalla80 wrote:
>
> I upgraded to the latest version, V1.1.0, and I started getting new 
> errors/exceptions as shown below.
> Any ideas?
>
>
> INFO: [Madelyne Pryor] loaded [], sites []
>
> Jul 08, 2014 10:57:10 PM org.elasticsearch.client.transport
>
> INFO: [Madelyne Pryor] failed to get local cluster state for [#transport#-1][domU-12-31-39-03-2A-0D][inet[/54.89.239.153:9300]], disconnecting...
>
> org.elasticsearch.transport.RemoteTransportException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.state.ClusterStateResponse]
> Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.state.ClusterStateResponse]
> at org.elasticsearch.transport.netty.MessageChannelHandler.handleResponse(MessageChannelHandler.java:148)
> at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:125)
> at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
> at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
> at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
> at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
> at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
> at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
> at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
> at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
> at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
> at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
> at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
> at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
> at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
> at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoClassDefFoundError: org/apache/lucene/analysis/standard/StandardAnalyzer
> at org.elasticsearch.Version.fromId(Version.java:306)
> at org.elasticsearch.Version.readVersion(Version.java:176)
> at org.elasticsearch.cluster.node.DiscoveryNode.readFrom(DiscoveryNode.java:274)
> at org.elasticsearch.cluster.node.DiscoveryNode.read

Re: Best practice to backup index daily?

2014-07-08 Thread sabdalla80
This looks great. However, I am not sure if I am missing anything. When I 
take a snapshot with curl, it works fine:

curl -XPUT 
http://localhost:9200/_snapshot/es_repository/snapshot_1?wait_for_completion=true

However, with curator it completes, but no snapshots are actually taken when 
I check. Any ideas what I am missing?
curator snapshot --most-recent 3 --repository es_repository


2014-07-08T17:51:20.143 INFO  main:644                 Job starting...
2014-07-08T17:51:20.144 INFO  main:654                 Default timeout of 30 seconds is too low for command SNAPSHOT. Overriding to 21,600 seconds (6 hours).
2014-07-08T17:51:20.144 INFO  _new_conn:188            Starting new HTTP connection (1): localhost
2014-07-08T17:51:20.153 INFO  log_request_success:57   GET http://localhost:9200/ [status:200 request:0.009s]
2014-07-08T17:51:20.154 INFO  command_loop:538         Beginning SNAPSHOT operations...
2014-07-08T17:51:20.158 INFO  log_request_success:57   GET http://localhost:9200/logstash-*/_settings?expand_wildcards=closed [status:200 request:0.003s]
2014-07-08T17:51:20.158 INFO  snap_latest_indices:508  Snapshot 'latest' 3 indices operations completed.
2014-07-08T17:51:20.158 INFO  main:671                 Done in 0:00:00.033295.
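One way to see whether curator actually created anything is to ask the repository directly with the snapshot-listing endpoint (a sketch; `es_repository` is the repository name from the commands above):

```shell
# List every snapshot registered in the es_repository repository
curl -XGET 'http://localhost:9200/_snapshot/es_repository/_all?pretty'
```

If the curl-created `snapshot_1` shows up here but the curator run adds nothing, curator's index selection (rather than the repository) is the likely problem.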





On Monday, July 7, 2014 6:06:06 PM UTC-4, Ivan Brusic wrote:
>
> The Elasticsearch curator now supports snapshots:
>
> https://github.com/elasticsearch/curator
>
> http://www.elasticsearch.org/blog/elasticsearch-curator-version-1-1-0-released/
>
> You would still need to use cron to schedule tasks, but it would be a 
> curator task instead of a direct curl request.
>
> Cheers,
>
> Ivan
>
>
> On Mon, Jul 7, 2014 at 1:12 PM, sabdalla80 wrote:
>
>> I am able to take a snapshot of the index and back it up to AWS S3. What 
>> is the best way to automate this approach and have it run daily, say at 
>> 12 midnight? 
>> I am aware that I can probably do it with crontab, but I'm curious whether 
>> others are doing it differently?



Best practice to backup index daily?

2014-07-07 Thread sabdalla80
I am able to take a snapshot of the index and back it up to AWS S3. What is 
the best way to automate this approach and have it run daily, say at 12 
midnight? 
I am aware that I can probably do it with crontab, but I'm curious whether 
others are doing it differently?
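If you do go the crontab route, a minimal sketch of a daily job (the repository and snapshot names are assumptions; each snapshot needs a unique name, hence the date suffix, and `%` must be escaped in crontab entries):

```shell
# /etc/cron.d/es-snapshot -- take a snapshot daily at midnight as user 'elasticsearch'
0 0 * * * elasticsearch curl -s -XPUT "http://localhost:9200/_snapshot/es_repository/snapshot_$(date +\%Y\%m\%d)?wait_for_completion=true"
```

With an S3-backed repository, snapshots are incremental, so a daily run only uploads segments that changed since the previous snapshot.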



Re: Problem Configuring AWS S3 for Backups

2014-07-04 Thread sabdalla80
David, great, that helped. At first I added the spaces in the "node 1" config, 
but that didn't do anything. Then I commented out the credentials section in 
the "node 2" config, and that worked. So I am a bit confused about this: do I 
need to maintain both configs on the two instances, so they both have 
credentials in them, or do I just put it in "node 1", and since the cluster is 
configured, there will be no need to have credentials in "node 2"?

Thanks

On Friday, July 4, 2014 11:03:45 AM UTC-4, David Pilato wrote:
>
> Try adding some spaces before cloud, like this…
>
>
>   cloud:
>
>
>  aws:
>
>  access_key: X
>
>  secret_key: YYY
>
> discovery:
>
> type: ec2
>
> -- 
> *David Pilato* | *Technical Advocate* | *Elasticsearch.com*
> @dadoonet <https://twitter.com/dadoonet> | @elasticsearchfr 
> <https://twitter.com/elasticsearchfr>
>
>
> On 4 July 2014 at 16:47:03, sabdalla80 (sabda...@gmail.com) 
> wrote:
>
> My cluster has two instances, and I have the same setup on both. To 
> answer Ross's question, I use the curl command to register the repository 
> from the instance(s) itself. Do I need to do anything else on the 
> instance(s) as far as AWS credentials? 
>
> Here is what I have:
> # Elasticsearch Configuration Example 
> #
>
>
> # This file contains an overview of various configuration settings,
>
> # targeted at operations staff. Application developers should
>
> # consult the guide at <http://elasticsearch.org/guide>.
>
> #
>
> # The installation procedure is covered at
>
> # <
> http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html
> >.
>
> #
>
> # Elasticsearch comes with reasonable defaults for most settings,
>
> # so you can try it out without bothering with configuration.
>
> #
>
> # Most of the time, these defaults are just fine for running a production
>
> # cluster. If you're fine-tuning your cluster, or wondering about the
>
> # effect of certain configuration option, please _do ask_ on the
>
> # mailing list or IRC channel [http://elasticsearch.org/community].
>
>
> # Any element in the configuration can be replaced with environment 
> variables
>
> # by placing them in ${...} notation. For example:
>
> #
>
> # node.rack: ${RACK_ENV_VAR}
>
>
> # For information on supported formats and syntax for the config file, see
>
> # <
> http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html
> >
>
>
>
> ### Cluster 
> ###
>
>
> # Cluster name identifies your cluster for auto-discovery. If you're 
> running
>
> # multiple clusters on the same network, make sure you're using unique 
> names.
>
> #
>
>  cluster.name: rexCluster
>
>  Node 
> #
>
>
> # Node names are generated dynamically on startup, so you're relieved
>
> # from configuring them manually. You can tie this node to a specific name:
>
> #
>
>  node.name: "node 1"
>
>
> # Every node can be configured to allow or deny being eligible as the 
> master,
>
> # and to allow or deny to store the data.
>
> #
>
> # Allow this node to be eligible as a master node (enabled by default):
>
> #
>
>  node.master: true
>
> #
>
> # Allow this node to store data (enabled by default):
>
> #
>
> # node.data: true
>
>
> # You can exploit these settings to design advanced cluster topologies.
>
> #
>
> # 1. You want this node to never become a master node, only to hold data.
>
> #This will be the "workhorse" of your cluster.
>
> #
>
> # node.master: false
>
> # node.data: true
>
> #
>
> # 2. You want this node to only serve as a master: to not store any data 
> and
>
> #to have free resources. This will be the "coordinator" of your 
> cluster.
>
> #
>
> # node.master: true
>
> # node.data: false
>
> #
>
> # 3. You want this node to be neither master nor data node, but
>
> #to act as a "search load balancer" (fetching data from nodes,
>
> #aggregating results, etc.)
>
> #
>
> # node.master: false
>
> # node.data: false
>
>
> # Use the Cluster Health API [http://localhost:9200/_cluster/health], the
>
> # Node Info API [http://localhost:9200/_nodes] or GUI tools
>
> # such as <http://www.e

Re: Problem Configuring AWS S3 for Backups

2014-07-04 Thread sabdalla80
I also meant to mention that I commented out the credentials in the yml 
file and was still getting the same message, meaning that the credentials 
probably aren't getting read for whatever reason.

On Friday, July 4, 2014 10:46:58 AM UTC-4, sabdalla80 wrote:
>
> My cluster has two instances, I have the same setup on both instances. To 
> answer Ross's question, I use the curl command to register the repository 
> from the instance(s) itself. Do I need to do anything else on the 
> instance(s) as far as AWS credentials?

Re: Problem Configuring AWS S3 for Backups

2014-07-04 Thread sabdalla80
#

 discovery.zen.minimum_master_nodes: 2


# Set the time to wait for ping responses from other nodes when discovering.

# Set this option to a higher value on a slow or congested network

# to minimize discovery failures:

#

 discovery.zen.ping.timeout: 3s


# For more information, see

# 
<http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html>


# Unicast discovery allows to explicitly control which nodes will be used

# to discover the cluster. It can be used when multicast is not present,

# or to restrict the cluster communication-wise.

#

# 1. Disable multicast discovery (enabled by default):

#

 discovery.zen.ping.multicast.enabled: false

#

# 2. Configure an initial list of master nodes in the cluster

#to perform discovery when new nodes (master or data) are started:

#

 discovery.zen.ping.unicast.hosts: ["IP1","IP2"]


# EC2 discovery allows to use AWS EC2 API in order to perform discovery.

#

# You have to install the cloud-aws plugin for enabling the EC2 discovery.

#

# For more information, see

# 
<http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-ec2.html>

#

# See <http://elasticsearch.org/tutorials/elasticsearch-on-ec2/>

# for a step-by-step tutorial.

cloud:


 aws:

 access_key: X

 secret_key: YYY

discovery:

type: ec2



On Friday, July 4, 2014 2:27:48 AM UTC-4, David Pilato wrote:
>
> Agreed. Could you share your elasticsearch.yml file without touching 
> anything, only replacing key/secret?
>
> Keep the formatting.
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>
>
> On 4 July 2014 at 06:51, Ross Simpson wrote:
>
> That specific exception (com.amazonaws.AmazonClientException) is thrown 
> by the AWS client libraries, and it means the library couldn't find your 
> AWS credentials.  I'm not sure why, as the details in your original post 
> look correct.
>
> FWIW, S3 snapshots are working well for me.  Here's my setup:
> ES 1.1.1
> AWS cloud plugin 2.1.0
>
> elasticsearch.yml:
> cloud.aws.access_key: ...
> cloud.aws.secret_key: ..
>
> Repo registration:
> $ curl -XPUT 'http://localhost:9200/_snapshot/es-backups' -d 
> '{"type":"s3","settings":{"compress":"true","base_path":"prod_backups","region":"us-east","bucket":"..."}}'
> {"acknowledged":true}
>
> In your latest post, it looks like you're running the command on a remote 
> ES host (10.211.154.24).  Does that specific host have the AWS credentials 
> in its ES config?  Snapshotting will require that *all* nodes in the 
> cluster have the AWS credentials, because they will each be writing to S3.
>
> Are there any relevant entries in the ES logs from startup?  
>

Re: Problem Configuring AWS S3 for Backups

2014-07-03 Thread sabdalla80
I installed the latest ES version, 1.2.1, and am still getting the same error:
{
   "error": "RemoteTransportException[[node 
2][inet[/10.211.154.24:9300]][cluster/repository/put]]; nested: 
RepositoryException[[es_repository] failed to create repository]; nested: 
CreationException[Guice creation errors:\n\n1) Error injecting constructor, 
com.amazonaws.AmazonClientException: Unable to load AWS credentials from 
any provider in the chain\n  at 
org.elasticsearch.repositories.s3.S3Repository.<init>()\n  at 
org.elasticsearch.repositories.s3.S3Repository\n  at 
Key[type=org.elasticsearch.repositories.Repository, annotation=[none]]\n\n1 
error]; nested: AmazonClientException[Unable to load AWS credentials from 
any provider in the chain]; ",
   "status": 500
}

Any ideas? I would appreciate some feedback on how to figure out this 
problem because I would like to backup our index to S3.

On Wednesday, July 2, 2014 3:36:58 PM UTC-4, sabdalla80 wrote:
>
> Unfortunately, I tried with and without the region setting, no difference.



Re: Problem Configuring AWS S3 for Backups

2014-07-02 Thread sabdalla80
Unfortunately, I tried with and without the region setting, no difference.

On Tuesday, July 1, 2014 7:43:21 PM UTC-4, Glen Smith wrote:
>
> I'm not sure it matters, but I noticed you aren't setting a region in 
> either your config or when registering your repo.



Re: Problem Configuring AWS S3 for Backups

2014-07-01 Thread sabdalla80
I am not sure the version is the problem; I guess I can upgrade from V1.1 
to latest. 
As for "Unable to load AWS credentials from any provider in the chain": any 
idea where this error is generated? Is there any other place my credentials 
need to be besides the .yml file?
Note, I am able to write/read to S3 remotely, so I don't have any 
privileges problems that I can think of.

On Tuesday, July 1, 2014 4:44:17 PM UTC-4, David Pilato wrote:
>
> I think 2.1.1 should work fine as well.
>
> That said, you should upgrade to latest 1.1 (or 1.2)...
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>
>
> On 1 July 2014 at 22:13, Glen Smith wrote:
>
> According to
> https://github.com/elasticsearch/elasticsearch-cloud-aws/tree/es-1.1
> you should use v2.1.0 of the plugin with ES 1.1.0.



Problem Configuring AWS S3 for Backups

2014-07-01 Thread sabdalla80
I am having a problem setting up the backup and restore part of AWS S3. 
I have the 2.1.1 AWS plugin & Elasticsearch V1.1.0.

My yml:

cloud:
aws:
access_key: #
secret_key: #
   discovery:
type: ec2

When I try to register a repository:

PUT /_snapshot/es_repository
{
  "type": "s3",
  "settings": {
    "bucket": "esbucket"
  }
}


I get this error; it complains about loading my credentials! Is this an 
Elasticsearch problem or an AWS one?

Note I am running as the root user "ubuntu" on EC2, and also running AWS with 
root privileges as opposed to an IAM role; not sure if that's a problem or not.
   "error": "RepositoryException[[es_repository] failed to create 
repository]; nested: CreationException[Guice creation errors:\n\n1) Error 
injecting constructor, com.amazonaws.AmazonClientException: Unable to load 
AWS credentials from any provider in the chain\n  at 
org.elasticsearch.repositories.s3.S3Repository.<init>(Unknown Source)\n 
 while locating org.elasticsearch.repositories.s3.S3Repository\n  while 
locating org.elasticsearch.repositories.Repository\n\n1 error]; nested: 
AmazonClientException[Unable to load AWS credentials from any provider in 
the chain]; ",
   "status": 500
}
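For what it's worth, the yml above is a likely culprit: in YAML, `aws` must be indented under `cloud`, and `discovery` must start at column 0 rather than hanging off the credentials block. A correctly nested sketch (same keys, placeholder values):

```yaml
cloud:
  aws:
    access_key: XXX
    secret_key: YYY

discovery:
  type: ec2
```

If `cloud.aws.access_key`/`cloud.aws.secret_key` don't parse at those exact paths, the plugin falls through its credential provider chain and raises exactly the AmazonClientException shown.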




