Re: copy index

2014-10-22 Thread joergpra...@gmail.com
I cannot use the HTTP request body for the map, because the body is reserved for a search
request, just like in the _search endpoint. This way you can push only part of an
index (the search hits) to a new index.
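As an illustrative sketch of that split (not the plugin's documented API; the host, index names, and query field below are made-up placeholders): the search request rides in the request body, exactly as with _search, while the index map rides in the query string.

```python
import json
from urllib.parse import quote

# Hypothetical: copy only the documents matching a query.
# The search request goes in the request body, as with _search ...
query_body = json.dumps({"query": {"term": {"user": "kimchy"}}})

# ... while the index map travels in the `map` query parameter,
# percent-encoded so braces and quotes survive inside a URL.
push_map = quote('{"myindex":"myindexcopy"}', safe="")
url = "http://localhost:9200/myindex/_push?map=" + push_map
```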

The message "failed to get local cluster state for" is logged at INFO level, so I
don't think it is an error.

A GUI is a long-term project in another context, good for the whole
community. I am unsure how to develop a replacement for the Sense plugin.
Maybe a Firefox plugin will arrive some time; I don't know.

Jörg

On Wed, Oct 22, 2014 at 3:21 PM,  wrote:

> Hey Jorg,
>
> Correct. Whew!
>
> If I run just curl -XPOST 'localhost:9200/_push?map=\{"myindex":"myindexcopy"\}'
>
> it works fine.
>
> By the way : is there any way to make this work in "sense" eg
> POST /_push?map=\{"myindex":"myindexcopy"\}
> POST /_push
> {
>   "map": {
>     "myindex": "myindexcopy"
>   }
> }
>
> The second one will submit in "sense" but results in empty map={}
>
> And is there any plan to put a gui around it?
>
> Aside: I still see these errors in the ES logs
>

Re: copy index

2014-10-22 Thread eunever32
Hey Jorg,

Correct. Whew!

If I run just

curl -XPOST 'localhost:9200/_push?map=\{"myindex":"myindexcopy"\}'

it works fine.

By the way: is there any way to make this work in Sense? E.g.

POST /_push?map=\{"myindex":"myindexcopy"\}

POST /_push
{
  "map": {
    "myindex": "myindexcopy"
  }
}

The second one will submit in Sense but results in an empty map={}.
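That empty map={} is consistent with the parameter being read only from the query string, never from a JSON body (an assumption based on the behavior above, not on the plugin's source). One way to build a safely encoded query string, sketched in Python:

```python
from urllib.parse import urlencode

# Put the map in the query string; the JSON-body form is apparently
# ignored, which would explain the empty map={} above.
qs = urlencode({"map": '{"myindex":"myindexcopy"}'})
url = "http://localhost:9200/_push?" + qs
```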

And is there any plan to put a GUI around it?

Aside: I still see these errors in the ES logs

[2014-10-22 13:46:25,736][INFO ][client.transport ] [Astronomer] failed to get local cluster state for [#transport#-2][HDQWK037][inet[/10.193
org.elasticsearch.transport.RemoteTransportException: [Abigail Brand][inet[/10.193.5.155:9301]][cluster/state]
Caused by: org.elasticsearch.transport.RemoteTransportException: [Abigail Brand][inet[/10.193.5.155:9301]][cluster/state]
Caused by: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 48
    at org.elasticsearch.common.netty.buffer.AbstractChannelBuffer.readByte(AbstractChannelBuffer.java:236)
    at org.elasticsearch.transport.netty.ChannelBufferStreamInput.readByte(ChannelBufferStreamInput.java:132)
    at org.elasticsearch.common.io.stream.StreamInput.readVInt(StreamInput.java:141)
    at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:272)
    at org.elasticsearch.common.io.stream.HandlesStreamInput.readString(HandlesStreamInput.java:61)
    at org.elasticsearch.common.io.stream.StreamInput.readStringArray(StreamInput.java:362)
    at org.elasticsearch.action.admin.cluster.state.ClusterStateRequest.readFrom(ClusterStateRequest.java:132)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:209)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:109)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

On Wednesday, October 22, 2014 1:27:59 PM UTC+1, Jörg Prante wrote:

> I think you have to set up such a curl command like this
>
> curl -XPOST 
> 'localhost:9200/yourindex/_push?map=\{"yourindex":"yournewindex"\}'
>
> to push the index "yourindex" to another one. Note the endpoint. 
>
> How does your curl look like?
>
> Jörg

Re: copy index

2014-10-22 Thread joergpra...@gmail.com
I think you have to set up the curl command like this:

curl -XPOST 'localhost:9200/yourindex/_push?map=\{"yourindex":"yournewindex"\}'

to push the index "yourindex" to another one. Note the endpoint.

What does your curl command look like?

Jörg


-- 
You received this message because you are subscribed to the Google Groups 
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/CAKdsXoH5G4xZxCTHVK-jTjKidMUKOpyNpjwvx-PzQ5xcK2SVZA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: copy index

2014-10-22 Thread joergpra...@gmail.com
Yes, I can put up a fix - looks weird.

Most users have either a constant mapping that can be extended dynamically, or
one that does not change on existing fields.

If fields have to change for future documents, you can also change the mapping
by using the alias technique:

- keep the old index with the old fields (no change)

- create a new index with the changed fields

- assign an index alias to both indices

- search on the index alias

No copy required.
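The steps above can be sketched against the standard _aliases endpoint; the index and alias names below are made up for illustration:

```python
import json

# One alias over both indices: old documents stay where they are,
# new documents go to the new index, and searches hit the alias.
alias_actions = {
    "actions": [
        {"add": {"index": "oldindex", "alias": "myalias"}},
        {"add": {"index": "newindex", "alias": "myalias"}},
    ]
}
body = json.dumps(alias_actions)
# POST body to http://localhost:9200/_aliases,
# then search http://localhost:9200/myalias/_search
```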

Jörg







Re: copy index

2014-10-22 Thread eunever32
Jorg,

Thanks for the quick turnaround on putting in the fix.

What I found when I tested is that it works for "test" → "testcopy".

But when I try "myindex" → "myindexcopy", it doesn't work.

I noticed in the logs when I was trying "myindex" that it was looking for
an index "test", which was a bit odd.

So I copied my "myindex" to an index literally named "test", and only then
it worked.
So the only index that can be copied is "test"; the target index can be anything.

Logs:

[2014-10-22 12:05:07,649][INFO ][KnapsackPushAction   ] start of push: {"mode":"push","started":"2014-10-22T11:05:07.648Z","node_name":"Pathway"}
[2014-10-22 12:05:07,649][INFO ][KnapsackService  ] update cluster settings: plugin.knapsack.export.state -> [{"mode":"push","started":"2014-10-22T11:05:07.648Z","node_name":"Pathway"}]
[2014-10-22 12:05:07,650][INFO ][KnapsackPushAction   ] map={myindex=myindexcopy}
[2014-10-22 12:05:07,650][INFO ][KnapsackPushAction   ] getting settings for indices [test, myindex]
[2014-10-22 12:05:07,651][INFO ][KnapsackPushAction   ] found indices: [test, myindex]
[2014-10-22 12:05:07,652][INFO ][KnapsackPushAction   ] getting mappings for index test and types []
[2014-10-22 12:05:07,652][INFO ][KnapsackPushAction   ] found mappings: [test]
[2014-10-22 12:05:07,653][INFO ][KnapsackPushAction   ] adding mapping: test
[2014-10-22 12:05:07,653][INFO ][KnapsackPushAction   ] creating index: test
[2014-10-22 12:05:07,672][INFO ][KnapsackPushAction   ] count=2 status=OK

I guess you can put in a quick fix?

I would have to ask: is anyone else using this?

And what are most people doing? Are there any plans by Elasticsearch to create a
product, or does the snapshot feature suffice for most people?

Again, I would just repeat my requirement: I want to change the mapping
types for an existing index. Therefore I create a new index and copy the
old index's data into it.

Thanks in advance.




Re: copy index

2014-10-20 Thread joergpra...@gmail.com
I admit there is something overcautious in the knapsack release to prevent
overwriting existing data. I will add a fix that will allow writing into an
empty index.

https://github.com/jprante/elasticsearch-knapsack/issues/57

Jörg




Re: copy index

2014-10-20 Thread eunever32
By the way:
ES version 1.3.4
Knapsack version built against 1.3.4

Regards.



Re: copy index

2014-10-20 Thread eunever32
Okay, when I try that I get this error; it's always at byte 48.
Thanks in advance.


Caused by: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 48
    at org.elasticsearch.common.netty.buffer.AbstractChannelBuffer.readByte(AbstractChannelBuffer.java:236)
    at org.elasticsearch.transport.netty.ChannelBufferStreamInput.readByte(ChannelBufferStreamInput.java:132)
    at org.elasticsearch.common.io.stream.StreamInput.readVInt(StreamInput.java:141)
    at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:272)
    at org.elasticsearch.common.io.stream.HandlesStreamInput.readString(HandlesStreamInput.java:61)
    at org.elasticsearch.common.io.stream.StreamInput.readStringArray(StreamInput.java:362)
    at org.elasticsearch.action.admin.cluster.state.ClusterStateRequest.readFrom(ClusterStateRequest.java:132)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:209)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:109)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)


Re: copy index

2014-10-20 Thread joergpra...@gmail.com
The recipe is something like this:

1. Install Knapsack.

2. Create the new index. Example:

curl -XPUT 'localhost:9200/newindex'

3. Create the new mappings:

curl -XPUT 'localhost:9200/newindex/newmapping/_mapping' -d '{ ... }'

4. Copy the data:

curl -XPOST 'localhost:9200/oldindex/oldmapping/_push?map=\{"oldindex/oldmapping":"newindex/newmapping"\}'
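Note that the \{ and \} in step 4 guard the braces against curl's URL globbing. As an alternative sketch, percent-encoding the whole map value avoids shell and curl escaping entirely (placeholder names are those from the recipe; the mapping body is abbreviated):

```python
from urllib.parse import quote

# The four-step recipe as (method, url, body) tuples.
new_mapping = '{"newmapping": {"properties": {}}}'  # put the real mappings here
push_map = quote('{"oldindex/oldmapping":"newindex/newmapping"}', safe="")

steps = [
    ("PUT", "http://localhost:9200/newindex", None),
    ("PUT", "http://localhost:9200/newindex/newmapping/_mapping", new_mapping),
    ("POST", "http://localhost:9200/oldindex/oldmapping/_push?map=" + push_map, None),
]
```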

Jörg

>



Re: copy index

2014-10-20 Thread eunever32

So just to explain what I want: 


   - I want to be able to "push" an existing index to another index which 
   has new mappings


Is this possible? 

Preferably it wouldn't go through an intermediate file-system file: that 
would be expensive, and there might not be enough disk available.

Thanks.
On Monday, October 20, 2014 4:16:55 PM UTC+1, Jörg Prante wrote:

> There is no more parameter "createIndex", the documentation is outdated - 
> sorry for the confusion.
>
> The "_push" action does not use files. There is no need to do that, this 
> would be very strange,
>
> Jörg
>
>
> On Mon, Oct 20, 2014 at 5:12 PM, > wrote:
>
>> Jorg,
>>
>> Not sure what you mean. There is a flag: "createIndex=false" which means 
>> : 
>>
>> if the index already exists do not try to create it ie it is pre-created.
>>
>> Import will handle this. Will _push also ?
>>
>> I have another question which affects me: 
>> I was hoping that "_push" would write to the index without using an 
>> intermediate file. But it seems behind the scenes it uses the filesystem 
>> like export/import. Can you confirm?
>>
>> Regards,
>>
>> On Sunday, October 19, 2014 9:14:57 PM UTC+1, Jörg Prante wrote:
>>
>>> I never thought about something like "pre-creation"  because it would 
>>> just double the existing create index action...
>>>
>>>  
>>
>>> Jörg
>>>
>>> On Sun, Oct 19, 2014 at 6:00 PM,  wrote:
>>>
 OK I can try that
 But is there an option in the _push to have a pre created index?

 I know it's possible with import createIndex=false

 Would export/import be just as good?




Re: copy index

2014-10-20 Thread joergpra...@gmail.com
There is no more parameter "createIndex", the documentation is outdated -
sorry for the confusion.

The "_push" action does not use files. There is no need to do that; it would
be very strange.

Jörg


On Mon, Oct 20, 2014 at 5:12 PM,  wrote:

> Jorg,
>
> Not sure what you mean. There is a flag: "createIndex=false" which means :
>
> if the index already exists do not try to create it ie it is pre-created.
>
> Import will handle this. Will _push also ?
>
> I have another question which affects me:
> I was hoping that "_push" would write to the index without using an
> intermediate file. But it seems behind the scenes it uses the filesystem
> like export/import. Can you confirm?
>
> Regards,
>
> On Sunday, October 19, 2014 9:14:57 PM UTC+1, Jörg Prante wrote:
>
>> I never thought about something like "pre-creation"  because it would
>> just double the existing create index action...
>>
>>
>
>> Jörg
>>
>> On Sun, Oct 19, 2014 at 6:00 PM,  wrote:
>>
>>> OK I can try that
>>> But is there an option in the _push to have a pre created index?
>>>
>>> I know it's possible with import createIndex=false
>>>
>>> Would export/import be just as good?
>>>



Re: copy index

2014-10-20 Thread eunever32
Jorg,

Not sure what you mean. There is a flag, "createIndex=false", which means:
if the index already exists, do not try to create it, i.e. it is pre-created.

Import will handle this. Will _push also ?

I have another question which affects me: 
I was hoping that "_push" would write to the index without using an 
intermediate file. But it seems behind the scenes it uses the filesystem 
like export/import. Can you confirm?

Regards,

On Sunday, October 19, 2014 9:14:57 PM UTC+1, Jörg Prante wrote:

> I never thought about something like "pre-creation"  because it would just 
> double the existing create index action...
>
>  

> Jörg
>
> On Sun, Oct 19, 2014 at 6:00 PM, > wrote:
>
>> OK I can try that
>> But is there an option in the _push to have a pre created index?
>>
>> I know it's possible with import createIndex=false
>>
>> Would export/import be just as good?
>>



Re: copy index

2014-10-19 Thread joergpra...@gmail.com
I never thought about something like "pre-creation" because it would just
duplicate the existing create-index action...

Jörg

On Sun, Oct 19, 2014 at 6:00 PM,  wrote:

> OK I can try that
> But is there an option in the _push to have a pre created index?
>
> I know it's possible with import createIndex=false
>
> Would export/import be just as good?
>



Re: copy index

2014-10-19 Thread eunever32
OK I can try that
But is there an option in the _push to have a pre created index?

I know it's possible with import createIndex=false

Would export/import be just as good?



Re: copy index

2014-10-19 Thread joergpra...@gmail.com
A better idea than fiddling with the mapping in HTTP GET/POST parameters is
to pre-create an empty target index the way you want it, and after that, push
the docs with a knapsack command using the "map" parameter.

I have also had the idea of redesigning the knapsack arguments from GET/POST
parameter names to structured POST request bodies in JSON, so sense would be
helpful for JSON editing. Since knapsack is not part of standard ES, I doubt
there will be syntax-check assistance.
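For what it's worth, the shell backslash-escaping seen in the curl commands
elsewhere in this thread can be avoided by URL-encoding the "map" JSON before
building the request. A minimal sketch (index names are just placeholders,
and the _push endpoint is per the knapsack plugin):

```python
import json
from urllib.parse import urlencode

# Source-to-target index mapping for the knapsack "map" parameter.
mapping = {"test": "testpu"}

# urlencode percent-encodes the braces, quotes, and colon, so the URL
# needs no shell escaping like \{ ... \} when pasted into curl.
query = urlencode({"map": json.dumps(mapping)})
url = "http://localhost:9200/test/_push?" + query
print(url)
```

POSTing the resulting URL is then safe in any shell, since no characters
special to the shell remain in it.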

Jörg



On Sun, Oct 19, 2014 at 2:55 AM,  wrote:

> Jorg,
>
> That is exactly the kind of thing I'm looking for.
>
> I'm having a little bit of difficulty getting it to do what I want.
>
> I want to "push" an index to another index and change the mapping.
>
> I can import / export okay but the push is having difficulty picking up
> the new mappings.
>
> The syntax for push seems to be to specify the name of the mapping file
> which in may case is in /tmp/testpu_doc_mapping.json
>
> and this contains:
> {
>  "doc": {
> "_timestamp": {
>"enabled": true,
>"store": true,
>"path": "date"
> },
> "properties": {
>"date": {
>   "type": "date",
>   "format": "dateOptionalTime"
>},
>"sentence": {
>   "type": "string",
> *  "index": "not_analyzed"*
>},
>"value": {
>   "type": "long"
>}
> }
>  }
>
>
> }
>
> Note I want sentence to be *not_analyzed*
> Maybe syntax of above file is not correct?
> I tried other variations.
> And when it says add mapping _default : that's probably not a good sign?
>
> I then issue command:
>
> curl -XPOST
> 'localhost:9200/test/_push?map=\{"test":"testpu"\}&\{"test_doc_mapping":"/tmp/testpu_doc_mapping.json"\}'
> But this is clearly wrong
> Server shows:
>
>> [2014-10-19 01:10:34,216][INFO ][BaseTransportClient  ] creating
>> transport client, java version 1.7.0_40, effective
>> settings {host=localhost, port=9300, cluster.name=elasticsearch,
>> timeout=30s, client.transport.sniff=true, client.transp
>> ort.ping_timeout=30s, client.transport.ignore_cluster_name=true,
>> path.plugins=.dontexist}
>> [2014-10-19 01:10:34,218][INFO ][plugins  ] [Left Hand]
>> loaded [], sites []
>> [2014-10-19 01:10:34,238][INFO ][BaseTransportClient  ] transport
>> client settings = {host=localhost, port=9300, clus
>> ter.name=elasticsearch, timeout=30s, client.transport.sniff=true,
>> client.transport.ping_timeout=30s, client.transport.ig
>> nore_cluster_name=true, path.plugins=.dontexist,
>> path.home=C:\elasticsearch-1.3.4, name=Left Hand, path.logs=C:/elastics
>> earch-1.3.4/logs, network.server=false, node.client=true}
>> [2014-10-19 01:10:34,239][INFO ][BaseTransportClient  ] adding custom
>> address for transport client: inet[localhost/1
>> 27.0.0.1:9300]
>> [2014-10-19 01:10:34,246][INFO ][BaseTransportClient  ] configured
>> addresses to connect = [inet[localhost/127.0.0.1:
>> 9300]], waiting for 30 seconds to connect ...
>> [2014-10-19 01:11:04,247][INFO ][BaseTransportClient  ] connected
>> nodes = [[Logan][-4NzM7wxQ6S8IEK-aOST1Q][zippity][ine
>> t[/192.168.43.250:9300]],
>> [#transport#-1][zippity][inet[localhost/127.0.0.1:9300]]]
>> [2014-10-19 01:11:04,247][INFO ][BaseTransportClient  ] new
>> connection to [Logan][-4NzM7wxQ6S8IEK-aOST1Q][zippity][inet
>> [/192.168.43.250:9300]]
>> [2014-10-19 01:11:04,248][INFO ][BaseTransportClient  ] new
>> connection to [#transport#-1][zippity][inet[localhost/127.0
>> .0.1:9300]]
>> [2014-10-19 01:11:04,248][INFO ][BaseTransportClient  ] trying to
>> discover more nodes...
>> [2014-10-19 01:11:04,254][INFO ][BaseTransportClient  ] adding
>> discovered node [Logan][-4NzM7wxQ6S8IEK-aOST1Q][zippity]
>> [inet[/192.168.43.250:9300]]
>> [2014-10-19 01:11:04,258][INFO ][BaseTransportClient  ] ... discovery
>> done
>> [2014-10-19 01:11:04,259][INFO ][KnapsackService  ] add:
>> plugin.knapsack.export.state -> []
>> [2014-10-19 01:11:04,259][INFO ][KnapsackPushAction   ] start of
>> push: {"mode":"push","started":"2014-10-19T00:11:04
>> .259Z","node_name":"Logan"}
>> [2014-10-19 01:11:04,259][INFO ][KnapsackService  ] update
>> cluster settings: plugin.knapsack.export.state -> [{"
>> mode":"push","started":"2014-10-19T00:11:04.259Z","node_name":"Logan"}]
>> [2014-10-19 01:11:04,259][INFO ][KnapsackPushAction   ]
>> map={test=testpu}
>> [2014-10-19 01:11:04,260][INFO ][KnapsackPushAction   ] getting
>> settings for indices [test]
>> [2014-10-19 01:11:04,261][INFO ][KnapsackPushAction   ] found
>> indices: [test]
>> [2014-10-19 01:11:04,261][INFO ][KnapsackPushAction   ] getting
>> mappings for index test and types []
>> [2014-10-19 01:11:04,262][INFO ][KnapsackPushAction   ] found
>> mappings: [_default_, doc]
>> [2014-10-19 01:11:04,

Re: copy index

2014-10-18 Thread eunever32
Jorg,

That is exactly the kind of thing I'm looking for.

I'm having a little bit of difficulty getting it to do what I want.

I want to "push" an index to another index and change the mapping.

I can import / export okay but the push is having difficulty picking up the 
new mappings.

The syntax for push seems to be to specify the name of the mapping file, 
which in my case is /tmp/testpu_doc_mapping.json

and this contains: 
{
  "doc": {
    "_timestamp": {
      "enabled": true,
      "store": true,
      "path": "date"
    },
    "properties": {
      "date": {
        "type": "date",
        "format": "dateOptionalTime"
      },
      "sentence": {
        "type": "string",
        "index": "not_analyzed"
      },
      "value": {
        "type": "long"
      }
    }
  }
}

Note I want "sentence" to be "not_analyzed".
Maybe the syntax of the above file is not correct?
I tried other variations.
And when it says "adding mapping: _default_", that's probably not a good sign?
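One quick way to rule out a plain JSON syntax error in the mapping file is to
run it through a JSON parser, which reports the line and column of any stray
quote or brace. A hedged sketch with the mapping inlined rather than read
from the real /tmp path:

```python
import json

# Inlined stand-in for /tmp/testpu_doc_mapping.json; json.loads raises
# a ValueError with position info if the syntax is broken.
mapping_text = """
{
  "doc": {
    "properties": {
      "sentence": {"type": "string", "index": "not_analyzed"}
    }
  }
}
"""
mapping = json.loads(mapping_text)

# Sanity-check the part we care about: "sentence" must be not_analyzed.
print(mapping["doc"]["properties"]["sentence"]["index"])
```

If the real file parses cleanly, the problem is more likely in how the file
is passed to _push than in the JSON itself.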

I then issue command: 

curl -XPOST 
'localhost:9200/test/_push?map=\{"test":"testpu"\}&\{"test_doc_mapping":"/tmp/testpu_doc_mapping.json"\}'
But this is clearly wrong
Server shows: 

> [2014-10-19 01:10:34,216][INFO ][BaseTransportClient  ] creating 
> transport client, java version 1.7.0_40, effective
> settings {host=localhost, port=9300, cluster.name=elasticsearch, 
> timeout=30s, client.transport.sniff=true, client.transp
> ort.ping_timeout=30s, client.transport.ignore_cluster_name=true, 
> path.plugins=.dontexist}
> [2014-10-19 01:10:34,218][INFO ][plugins  ] [Left Hand] 
> loaded [], sites []
> [2014-10-19 01:10:34,238][INFO ][BaseTransportClient  ] transport 
> client settings = {host=localhost, port=9300, clus
> ter.name=elasticsearch, timeout=30s, client.transport.sniff=true, 
> client.transport.ping_timeout=30s, client.transport.ig
> nore_cluster_name=true, path.plugins=.dontexist, 
> path.home=C:\elasticsearch-1.3.4, name=Left Hand, path.logs=C:/elastics
> earch-1.3.4/logs, network.server=false, node.client=true}
> [2014-10-19 01:10:34,239][INFO ][BaseTransportClient  ] adding custom 
> address for transport client: inet[localhost/1
> 27.0.0.1:9300]
> [2014-10-19 01:10:34,246][INFO ][BaseTransportClient  ] configured 
> addresses to connect = [inet[localhost/127.0.0.1:
> 9300]], waiting for 30 seconds to connect ...
> [2014-10-19 01:11:04,247][INFO ][BaseTransportClient  ] connected 
> nodes = [[Logan][-4NzM7wxQ6S8IEK-aOST1Q][zippity][ine
> t[/192.168.43.250:9300]], 
> [#transport#-1][zippity][inet[localhost/127.0.0.1:9300]]]
> [2014-10-19 01:11:04,247][INFO ][BaseTransportClient  ] new connection 
> to [Logan][-4NzM7wxQ6S8IEK-aOST1Q][zippity][inet
> [/192.168.43.250:9300]]
> [2014-10-19 01:11:04,248][INFO ][BaseTransportClient  ] new connection 
> to [#transport#-1][zippity][inet[localhost/127.0
> .0.1:9300]]
> [2014-10-19 01:11:04,248][INFO ][BaseTransportClient  ] trying to 
> discover more nodes...
> [2014-10-19 01:11:04,254][INFO ][BaseTransportClient  ] adding 
> discovered node [Logan][-4NzM7wxQ6S8IEK-aOST1Q][zippity]
> [inet[/192.168.43.250:9300]]
> [2014-10-19 01:11:04,258][INFO ][BaseTransportClient  ] ... discovery 
> done
> [2014-10-19 01:11:04,259][INFO ][KnapsackService  ] add: 
> plugin.knapsack.export.state -> []
> [2014-10-19 01:11:04,259][INFO ][KnapsackPushAction   ] start of push: 
> {"mode":"push","started":"2014-10-19T00:11:04
> .259Z","node_name":"Logan"}
> [2014-10-19 01:11:04,259][INFO ][KnapsackService  ] update cluster 
> settings: plugin.knapsack.export.state -> [{"
> mode":"push","started":"2014-10-19T00:11:04.259Z","node_name":"Logan"}]
> [2014-10-19 01:11:04,259][INFO ][KnapsackPushAction   ] 
> map={test=testpu}
> [2014-10-19 01:11:04,260][INFO ][KnapsackPushAction   ] getting 
> settings for indices [test]
> [2014-10-19 01:11:04,261][INFO ][KnapsackPushAction   ] found indices: 
> [test]
> [2014-10-19 01:11:04,261][INFO ][KnapsackPushAction   ] getting 
> mappings for index test and types []
> [2014-10-19 01:11:04,262][INFO ][KnapsackPushAction   ] found 
> mappings: [_default_, doc]
> [2014-10-19 01:11:04,263][INFO ][KnapsackPushAction   ] adding 
> mapping: _default_
> [2014-10-19 01:11:04,263][INFO ][KnapsackPushAction   ] adding 
> mapping: doc
> [2014-10-19 01:11:04,263][INFO ][KnapsackPushAction   ] creating 
> index: testpu
> [2014-10-19 01:11:04,296][INFO ][cluster.metadata ] [Logan] 
> [testpu] creating index, cause [api], shards [5]/[1]
> , mappings [_default_, doc]
> [2014-10-19 01:11:04,374][INFO ][KnapsackPushAction   ] index created: 
> testpu
> [2014-10-19 01:11:04,374][INFO ][KnapsackPushAction   ] getting 
> aliases for index test
> [2014-10-19 01:11:04,374][INFO ][KnapsackPus

Re: copy index

2014-10-17 Thread joergpra...@gmail.com
You can use the knapsack plugin for export/import data and change mappings
(and much more!)

For a 1:1 online copy, just one curl command is necessary, yes.

https://github.com/jprante/elasticsearch-knapsack

Jörg

On Thu, Oct 16, 2014 at 7:55 PM,  wrote:

> Hi
>
> I can see there are lots of utilities to copy the contents of an index
> such as
> elasticdump
> reindexer
> streames
> etc
>
> And they mostly use scan scroll.
>
> Is there a single curl command to copy an index to a new index?
>
> Without too much investigation it looks like scan scroll requires repeated
> calls?
>
> Can you please confirm?
>
> If this is the case what is the simplest supported utility?
>
> Alternatively is there a plugin with front end to choose from and to index?
>
> Thanks in advance
>



Re: copy index

2014-10-16 Thread eunever32
I should have mentioned: the point is to copy the data only, and then to
change the mappings.

A snapshot is no use here, sorry, because it brings the mappings along as well.



Re: copy index

2014-10-16 Thread Bobby Pilot
Have you tried taking a snapshot and restoring the index to a new name (see 
rename_pattern)? 

On Thursday, October 16, 2014 12:55:02 PM UTC-5, eune...@gmail.com wrote:
>
> Hi
>
> I can see there are lots of utilities to copy the contents of an index 
> such as
> elasticdump
> reindexer
> streames
> etc
>
> And they mostly use scan scroll.
>
> Is there a single curl command to copy an index to a new index? 
>
> Without too much investigation it looks like scan scroll requires repeated 
> calls? 
>
> Can you please confirm? 
>
> If this is the case what is the simplest supported utility? 
>
> Alternatively is there a plugin with front end to choose from and to 
> index? 
>
> Thanks in advance
>
>



copy index

2014-10-16 Thread eunever32
Hi

I can see there are lots of utilities to copy the contents of an index such as
elasticdump
reindexer
stream2es
etc

And they mostly use scan/scroll.

Is there a single curl command to copy an index to a new index? 

Without too much investigation, it looks like scan/scroll requires repeated 
calls? 

Can you please confirm? 

If this is the case what is the simplest supported utility? 

Alternatively is there a plugin with front end to choose from and to index? 

Thanks in advance
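For what it's worth, scroll does indeed require repeated calls: each response
returns one batch of hits plus a scroll id used to fetch the next batch,
until a page comes back empty. A minimal sketch of that loop, with a stubbed
client standing in for the real HTTP calls (the client interface here is
hypothetical, not a real library API):

```python
def scroll_all(client, index, page_size=100):
    """Drain an index via the scan/scroll pattern: one initial search,
    then repeated follow-up calls until an empty page is returned."""
    page = client.start_scroll(index, size=page_size)
    while page["hits"]:
        for hit in page["hits"]:
            yield hit
        page = client.continue_scroll(page["scroll_id"])

# Stub standing in for a real search client, purely for illustration.
class StubClient:
    def __init__(self, docs):
        self._docs, self._size, self._pos = docs, 0, 0
    def start_scroll(self, index, size):
        self._size = size
        return self._page()
    def continue_scroll(self, scroll_id):
        return self._page()
    def _page(self):
        batch = self._docs[self._pos:self._pos + self._size]
        self._pos += self._size
        return {"scroll_id": "s1", "hits": batch}

docs = [{"_id": i} for i in range(5)]
assert list(scroll_all(StubClient(docs), "test", page_size=2)) == docs
```

This is the loop that tools like elasticdump and stream2es run for you, which
is why there is no single-request copy built into the search API itself.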



Re: Copy index from production to development instance

2014-06-26 Thread Himanshu Agrawal
From the data you have provided, I see that your bucket and keys for
development and production are different. Point your development
Elasticsearch instance to the same AWS account and bucket in which you are
storing the snapshot.
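In other words, the repository registered on the development cluster must
point at the bucket the snapshot was actually written to; otherwise the
restore gets the NoSuchKey error seen above. A sketch of the dev-side
registration body (bucket, region, and key names are placeholders from the
thread):

```python
import json

# Repository settings to PUT to /_snapshot/my_s3_repository on the
# DEVELOPMENT cluster: same bucket as production, dev credentials that
# have read access to it.
repo_settings = {
    "type": "s3",
    "settings": {
        "bucket": "productionBucketName",  # same bucket as production
        "region": "region",
        "access_key": "developmentAccessKey",
        "secret_key": "developmentSecretKey",
    },
}
print(json.dumps(repo_settings, indent=2))
```

Copying the snapshot files into a differently named bucket can work too, but
then every reference inside the snapshot metadata has to be edited to match,
as was attempted here.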
On Jun 26, 2014 9:15 PM, "Brian Lamb"  wrote:

> Thank you for your suggestion. I tried the stream2es library but I get a
> OutOfMemoryError when trying to use that.
>
> On Friday, June 6, 2014 5:13:19 PM UTC-4, Antonio Augusto Santos wrote:
>>
>> Take a look at stream2es https://github.com/elasticsearch/stream2es
>>
>> On Friday, June 6, 2014 2:13:06 PM UTC-3, Brian Lamb wrote:
>>>
>>> I should also point out that I had to edit a file in the
>>> metadata-snapshot file to change around the s3 keys and bucket name to
>>> match what development was expecting.
>>>
>>> On Friday, June 6, 2014 1:11:57 PM UTC-4, Brian Lamb wrote:

 Hi all,

 I want to do a one time copy of the data on my production elastic
 search instance to my development elastic search instance. Both are managed
 by AWS if that makes this easier. Here is what I tried:

 On production:

 curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{
 "type": "s3",
 "settings": {
 "access_key": "productionAccessKey",
 "bucket": "productionBucketName",
 "region": "region",
 "secret_key": "productionSecretKey"
 }
 }'
 curl -XPUT "http://localhost:9200/_snapshot/my_s3_repository/
 snapshot_2014_06_02"

 What this does is upload the instance to a production level s3 bucket.

 Then in the aws console, I copy all of it to a development level s3
 bucket.

 Next on development:

 curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{
 "type": "s3",
 "settings": {
 "access_key": "developmentAccessKey",
 "bucket": "developmentBucketName",
 "region": "region",
 "secret_key": "developmentSecretKey"
 }
 }'
 curl -XPOST "http://localhost:9200/_snapshot/my_s3_repository/
 snapshot_2014_06_02/_restore"

 This gives me the following message:

 $ curl -XPOST "http://localhost:9200/_snapshot/my_s3_repository/
 snapshot_2014_06_02/_restore?pretty=true"
 {
   "error" : "SnapshotException[[my_s3_repository:snapshot_2014_06_02]
 failed to get snapshots]; nested: IOException[Failed to get
 [snapshot-snapshot_2014_06_02]]; nested: AmazonS3Exception[Status
 Code: 404, AWS Service: Amazon S3, AWS Request ID: RequestId, AWS Error
 Code: NoSuchKey, AWS Error Message: The specified key does not exist.]; ",
   "status" : 500
 }

 Also, when I try to get the snapshots, I get the following:

 $ curl -XGET "localhost:9200/_snapshot/_status?pretty=true"
 {
   "snapshots" : [ ]
 }

 This leads me to believe that I am not connecting the snapshot
 correctly but I'm not sure what I am doing incorrectly. Regenerating the
 index on development is not really a possibility as it took a few months to
 generate the index the first time around. If there is a better way to do
 this, I'm all for it.

 Thanks,

 Brian Lamb




Re: Copy index from production to development instance

2014-06-26 Thread Brian Lamb
Thank you for your suggestion. I tried the stream2es library but I get a 
OutOfMemoryError when trying to use that.

On Friday, June 6, 2014 5:13:19 PM UTC-4, Antonio Augusto Santos wrote:
>
> Take a look at stream2es https://github.com/elasticsearch/stream2es
>
> On Friday, June 6, 2014 2:13:06 PM UTC-3, Brian Lamb wrote:
>>
>> I should also point out that I had to edit a file in the 
>> metadata-snapshot file to change around the s3 keys and bucket name to 
>> match what development was expecting.
>>
>> On Friday, June 6, 2014 1:11:57 PM UTC-4, Brian Lamb wrote:
>>>
>>> Hi all,
>>>
>>> I want to do a one time copy of the data on my production elastic search 
>>> instance to my development elastic search instance. Both are managed by AWS 
>>> if that makes this easier. Here is what I tried:
>>>
>>> On production:
>>>
>>> curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{
>>> "type": "s3",
>>> "settings": {
>>> "access_key": "productionAccessKey",
>>> "bucket": "productionBucketName",
>>> "region": "region",
>>> "secret_key": "productionSecretKey"
>>> }
>>> }'
>>> curl -XPUT "
>>> http://localhost:9200/_snapshot/my_s3_repository/snapshot_2014_06_02";
>>>
>>> What this does is upload the instance to a production level s3 bucket.
>>>
>>> Then in the aws console, I copy all of it to a development level s3 
>>> bucket.
>>>
>>> Next on development:
>>>
>>> curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{
>>> "type": "s3",
>>> "settings": {
>>> "access_key": "developmentAccessKey",
>>> "bucket": "developmentBucketName",
>>> "region": "region",
>>> "secret_key": "developmentSecretKey"
>>> }
>>> }'
>>> curl -XPOST "
>>> http://localhost:9200/_snapshot/my_s3_repository/snapshot_2014_06_02/_restore
>>> "
>>>
>>> This gives me the following message:
>>>
>>> $ curl -XPOST "
>>> http://localhost:9200/_snapshot/my_s3_repository/snapshot_2014_06_02/_restore?pretty=true
>>> "
>>> {
>>>   "error" : "SnapshotException[[my_s3_repository:snapshot_2014_06_02] 
>>> failed to get snapshots]; nested: IOException[Failed to get 
>>> [snapshot-snapshot_2014_06_02]]; nested: AmazonS3Exception[Status Code: 
>>> 404, AWS Service: Amazon S3, AWS Request ID: RequestId, AWS Error Code: 
>>> NoSuchKey, AWS Error Message: The specified key does not exist.]; ",
>>>   "status" : 500
>>> }
>>>
>>> Also, when I try to get the snapshots, I get the following:
>>>
>>> $ curl -XGET "localhost:9200/_snapshot/_status?pretty=true"
>>> {
>>>   "snapshots" : [ ]
>>> }
>>>
>>> This leads me to believe that I am not connecting the snapshot correctly 
>>> but I'm not sure what I am doing incorrectly. Regenerating the index on 
>>> development is not really a possibility as it took a few months to generate 
>>> the index the first time around. If there is a better way to do this, I'm 
>>> all for it. 
>>>
>>> Thanks,
>>>
>>> Brian Lamb
>>>
>>



Re: Copy index from production to development instance

2014-06-06 Thread Antonio Augusto Santos
Take a look at stream2es https://github.com/elasticsearch/stream2es

On Friday, June 6, 2014 2:13:06 PM UTC-3, Brian Lamb wrote:
>
> I should also point out that I had to edit a file in the metadata-snapshot 
> file to change around the s3 keys and bucket name to match what development 
> was expecting.
>
> On Friday, June 6, 2014 1:11:57 PM UTC-4, Brian Lamb wrote:
>>
>> Hi all,
>>
>> I want to do a one time copy of the data on my production elastic search 
>> instance to my development elastic search instance. Both are managed by AWS 
>> if that makes this easier. Here is what I tried:
>>
>> On production:
>>
>> curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{
>> "type": "s3",
>> "settings": {
>> "access_key": "productionAccessKey",
>> "bucket": "productionBucketName",
>> "region": "region",
>> "secret_key": "productionSecretKey"
>> }
>> }'
>> curl -XPUT "
>> http://localhost:9200/_snapshot/my_s3_repository/snapshot_2014_06_02";
>>
>> What this does is upload the instance to a production level s3 bucket.
>>
>> Then in the aws console, I copy all of it to a development level s3 
>> bucket.
>>
>> Next on development:
>>
>> curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{
>>   "type": "s3",
>>   "settings": {
>>     "access_key": "developmentAccessKey",
>>     "bucket": "developmentBucketName",
>>     "region": "region",
>>     "secret_key": "developmentSecretKey"
>>   }
>> }'
>> curl -XPOST "http://localhost:9200/_snapshot/my_s3_repository/snapshot_2014_06_02/_restore"
>>
>> This gives me the following message:
>>
>> $ curl -XPOST "http://localhost:9200/_snapshot/my_s3_repository/snapshot_2014_06_02/_restore?pretty=true"
>> {
>>   "error" : "SnapshotException[[my_s3_repository:snapshot_2014_06_02] 
>> failed to get snapshots]; nested: IOException[Failed to get 
>> [snapshot-snapshot_2014_06_02]]; nested: AmazonS3Exception[Status Code: 
>> 404, AWS Service: Amazon S3, AWS Request ID: RequestId, AWS Error Code: 
>> NoSuchKey, AWS Error Message: The specified key does not exist.]; ",
>>   "status" : 500
>> }
>>
>> Also, when I try to get the snapshots, I get the following:
>>
>> $ curl -XGET "localhost:9200/_snapshot/_status?pretty=true"
>> {
>>   "snapshots" : [ ]
>> }
>>
>> This leads me to believe that I am not registering the snapshot repository 
>> correctly, but I'm not sure what I'm doing wrong. Regenerating the index on 
>> development is not really an option, as it took a few months to build the 
>> index the first time around. If there is a better way to do this, I'm all 
>> for it.
>>
>> Thanks,
>>
>> Brian Lamb
>>
>

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/52168c96-30ea-4527-b287-676e757b1e6a%40googlegroups.com.


Re: Copy index from production to development instance

2014-06-06 Thread Brian Lamb
I should also point out that I had to edit the snapshot metadata file to 
change the S3 keys and bucket name to match what development was expecting.

On Friday, June 6, 2014 1:11:57 PM UTC-4, Brian Lamb wrote:
>
> Hi all,
>
> I want to do a one-time copy of the data on my production Elasticsearch 
> instance to my development Elasticsearch instance. Both are hosted on AWS, 
> if that makes this easier. Here is what I tried:
>
> On production:
>
> curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{
>   "type": "s3",
>   "settings": {
>     "access_key": "productionAccessKey",
>     "bucket": "productionBucketName",
>     "region": "region",
>     "secret_key": "productionSecretKey"
>   }
> }'
> curl -XPUT "http://localhost:9200/_snapshot/my_s3_repository/snapshot_2014_06_02"
>
> This uploads a snapshot of the cluster to a production-level S3 bucket.
>
> Then, in the AWS console, I copy all of it to a development-level S3 bucket.
>
> Next on development:
>
> curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{
>   "type": "s3",
>   "settings": {
>     "access_key": "developmentAccessKey",
>     "bucket": "developmentBucketName",
>     "region": "region",
>     "secret_key": "developmentSecretKey"
>   }
> }'
> curl -XPOST "http://localhost:9200/_snapshot/my_s3_repository/snapshot_2014_06_02/_restore"
>
> This gives me the following message:
>
> $ curl -XPOST "http://localhost:9200/_snapshot/my_s3_repository/snapshot_2014_06_02/_restore?pretty=true"
> {
>   "error" : "SnapshotException[[my_s3_repository:snapshot_2014_06_02] 
> failed to get snapshots]; nested: IOException[Failed to get 
> [snapshot-snapshot_2014_06_02]]; nested: AmazonS3Exception[Status Code: 
> 404, AWS Service: Amazon S3, AWS Request ID: RequestId, AWS Error Code: 
> NoSuchKey, AWS Error Message: The specified key does not exist.]; ",
>   "status" : 500
> }
>
> Also, when I try to get the snapshots, I get the following:
>
> $ curl -XGET "localhost:9200/_snapshot/_status?pretty=true"
> {
>   "snapshots" : [ ]
> }
>
> This leads me to believe that I am not registering the snapshot repository 
> correctly, but I'm not sure what I'm doing wrong. Regenerating the index on 
> development is not really an option, as it took a few months to build the 
> index the first time around. If there is a better way to do this, I'm all 
> for it.
>
> Thanks,
>
> Brian Lamb
>

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/09fd8162-b39d-4b8d-83bc-c011e4d8bf05%40googlegroups.com.


Copy index from production to development instance

2014-06-06 Thread Brian Lamb
Hi all,

I want to do a one-time copy of the data on my production Elasticsearch 
instance to my development Elasticsearch instance. Both are hosted on AWS, 
if that makes this easier. Here is what I tried:

On production:

curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{
  "type": "s3",
  "settings": {
    "access_key": "productionAccessKey",
    "bucket": "productionBucketName",
    "region": "region",
    "secret_key": "productionSecretKey"
  }
}'
curl -XPUT "http://localhost:9200/_snapshot/my_s3_repository/snapshot_2014_06_02"

This uploads a snapshot of the cluster to a production-level S3 bucket.
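(Aside, based on the snapshot API documentation rather than this thread: the
snapshot PUT above returns as soon as the snapshot is initiated, not when it
finishes, so the bucket could be copied before the snapshot is complete.
Adding wait_for_completion and listing the repository afterwards confirms
the snapshot exists and finished with "state": "SUCCESS":)

curl -XPUT "http://localhost:9200/_snapshot/my_s3_repository/snapshot_2014_06_02?wait_for_completion=true"
curl -XGET "http://localhost:9200/_snapshot/my_s3_repository/_all?pretty=true"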

Then, in the AWS console, I copy all of it to a development-level S3 bucket.

Next on development:

curl -XPUT 'http://localhost:9200/_snapshot/my_s3_repository' -d '{
  "type": "s3",
  "settings": {
    "access_key": "developmentAccessKey",
    "bucket": "developmentBucketName",
    "region": "region",
    "secret_key": "developmentSecretKey"
  }
}'
curl -XPOST "http://localhost:9200/_snapshot/my_s3_repository/snapshot_2014_06_02/_restore"

This gives me the following message:

$ curl -XPOST "http://localhost:9200/_snapshot/my_s3_repository/snapshot_2014_06_02/_restore?pretty=true"
{
  "error" : "SnapshotException[[my_s3_repository:snapshot_2014_06_02] 
failed to get snapshots]; nested: IOException[Failed to get 
[snapshot-snapshot_2014_06_02]]; nested: AmazonS3Exception[Status Code: 
404, AWS Service: Amazon S3, AWS Request ID: RequestId, AWS Error Code: 
NoSuchKey, AWS Error Message: The specified key does not exist.]; ",
  "status" : 500
}

Also, when I try to get the snapshots, I get the following:

$ curl -XGET "localhost:9200/_snapshot/_status?pretty=true"
{
  "snapshots" : [ ]
}
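(Aside, based on the snapshot API documentation rather than this thread:
_snapshot/_status with no snapshot name only lists snapshots that are
currently running, so an empty array is expected once a snapshot has
completed. To list completed snapshots, name the repository explicitly:)

curl -XGET "localhost:9200/_snapshot/my_s3_repository/_all?pretty=true"

If this also comes back empty on the development side, the repository is
pointing at a bucket or path that does not contain the copied snapshot
files, which would be consistent with the NoSuchKey error above.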

This leads me to believe that I am not registering the snapshot repository 
correctly, but I'm not sure what I'm doing wrong. Regenerating the index on 
development is not really an option, as it took a few months to build the 
index the first time around. If there is a better way to do this, I'm all 
for it.

Thanks,

Brian Lamb

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/b5266abf-2ff4-44b3-ba25-734b50d99e83%40googlegroups.com.