--
Honza Král
Python Engineer
honza.k...@elastic.co
--
You received this message because you are subscribed to the Google Groups
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email
to elasticsearch+unsubscr...@googlegroups.com
Happy to provide a more complete example.
Best regards,
Mike
On Monday, March 30, 2015 at 11:06:28 PM UTC+2, Honza Král wrote:
Hello,
you can access buckets already created using the ['name'] syntax; in your
case you can do (instead of the chaining):
s.aggs['xColor']['xMake']['xCity
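Whichever way you build it, the chained calls and the subscript access produce the same nested aggregation body. As plain dicts it looks roughly like this; the bucket names xColor/xMake/xCity come from the thread, but the field names are assumptions:

```python
# Sketch of the nested terms-aggregation body the DSL builds. Only the
# bucket names (xColor, xMake, xCity) come from the thread; the field
# names are made up for illustration.
aggs = {
    "xColor": {
        "terms": {"field": "color"},
        "aggs": {
            "xMake": {
                "terms": {"field": "make"},
                "aggs": {
                    "xCity": {"terms": {"field": "city"}},
                },
            }
        },
    }
}

# Re-accessing an already-created bucket by name reaches the same node:
inner = aggs["xColor"]["aggs"]["xMake"]["aggs"]["xCity"]
assert inner == {"terms": {"field": "city"}}
```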
ats 1 0 23180 7820 2.1mb
2.1mb
When we set it to 5000, docs.deleted is 7571.
Did we do something wrong?
thanks,
Cong
On Monday, February 23, 2015 at 11:00:12 AM UTC-8, Honza Král wrote:
This is definitely not a limitation of the bulk API, nor of the Python
library. Have you seen the Python bulk helper? It might solve some of
the issues for you.
http://elasticsearch-py.readthedocs.org/en/latest/helpers.html#elasticsearch.helpers.streaming_bulk
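The helper consumes an iterable of action dicts, so feeding it is usually just a generator. A minimal sketch (the index name and documents are made up; the helper call itself is left commented since it needs a live cluster):

```python
# Sketch: build the action dicts that the bulk helpers consume.
# "my-index" and the documents are assumptions for illustration.
def gen_actions(docs):
    for i, doc in enumerate(docs):
        yield {
            "_index": "my-index",
            "_type": "doc",
            "_id": i,
            "_source": doc,
        }

actions = list(gen_actions([{"field": "value"}, {"field": "other"}]))

# Real usage against a cluster would look roughly like:
# from elasticsearch import Elasticsearch
# from elasticsearch.helpers import streaming_bulk
# es = Elasticsearch()
# for ok, result in streaming_bulk(es, gen_actions(my_docs)):
#     ...
```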
On Mon, Feb 23, 2015 at 7:21 PM,
The python client has a reindex helper that can do just that - just
supply a client instance for the source and destination clusters.
http://elasticsearch-py.readthedocs.org/en/latest/helpers.html#elasticsearch.helpers.reindex
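Under the hood the helper just scans every document out of the source and bulk-indexes it into the target. A runnable sketch of that pattern, with plain dicts standing in for the two clusters (the real call, needing live clients, is shown in a comment):

```python
# Not the real helper - just the scan-then-bulk pattern that reindex()
# implements, with dicts standing in for the two clusters so the sketch
# runs offline.
source = {"1": {"field": "a"}, "2": {"field": "b"}}
target = {}

def scan(store):                      # stands in for helpers.scan(source_client, ...)
    for _id, doc in store.items():
        yield _id, doc

for _id, doc in scan(source):         # stands in for the bulk indexing step
    target[_id] = doc

assert target == source

# Real usage (two live clusters; names here are assumptions):
# from elasticsearch.helpers import reindex
# reindex(source_client, "src-index", "dst-index", target_client=dst_client)
```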
Hope this helps,
Honza
On Thu, Feb 19, 2015 at 11:44 PM, Amay Patil
Hi James,
you can index documents even without creating an index - it will be
created for you, but when searching the index must exist.
Hope this helps.
On Thu, Feb 5, 2015 at 1:04 PM, James m...@employ.com wrote:
Hi,
I'm trying to get elasticsearch working, querying it from python. However
Hi,
the connection is lazy so it will only be opened once you make a
request - just instantiating a client like this will not create any
connections.
Once a connection is created the python client will try to hold on to
it as long as possible (it uses urllib3 to do the connection pooling
itself)
Hi Antonio,
you might have to use:
F('range', **{'@timestamp': {'from': 1413815328968, 'to': 1413901728968}})
or
F({'range': {'@timestamp': {'from': 1413815328968, 'to': 1413901728968}}})
both of these should work.
On Tue, Oct 21, 2014 at 4:32 PM, Antonio Augusto Santos
mkha...@gmail.com wrote:
Hi,
I'm
, 2014 9:46:37 PM UTC+3, Honza Král wrote:
Hi Costya,
the code actually looks for parent and should move it around. Are you
sure you have your mappings set up correctly for the new index to
include the parent/child relationship?
Thanks
On Sun, Oct 12, 2014 at 5:30 PM, Costya Regev cos
Hi Bruno,
this definitely shouldn't be happening. Could you turn on your logging
for the python library to see what's going on? Just enabling the
`elasticsearch` logger in python logging module should do the trick.
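For reference, enabling that logger is a couple of lines of stdlib logging, along these lines (levels chosen here are assumptions):

```python
import logging

# Minimal sketch: give the root logger a handler, then turn the
# `elasticsearch` logger up to DEBUG so the library's request/response
# activity becomes visible.
logging.basicConfig(level=logging.WARNING)
logging.getLogger("elasticsearch").setLevel(logging.DEBUG)
```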
Thanks!
On Mon, Oct 6, 2014 at 7:21 PM, Bruno Ribeiro da Silva
Hello Henrik,
My guess would be that the timeouts you are seeing in python are
causing this - when the python client encounters a timeout, it retries
the request, thinking something went wrong. Thus if your timeout is
too small it can actually lead to spamming the cluster - when the
python client
what is the curl command you use to reach elasticsearch?
On Fri, Sep 12, 2014 at 11:51 AM, Nimit Jain online.ni...@gmail.com wrote:
With the same URL I am able to get the json from curl command.
Full url is http://10.xxx.66.xxx:6xxx8/ea/api/discovery.json but with
elasticsearch the Status is
now. Could you please tell me the way to
provide username and password using elasticsearch?
It would be very helpful.
Regards,
Nimit
On Friday, 12 September 2014 15:39:28 UTC+5:30, Honza Král wrote:
what is the curl command you use to reach elasticsearch?
On Fri, Sep 12, 2014 at 11:51 AM
Hi,
the code you have here should work, what do you get when you try:
from elasticsearch import Elasticsearch
es = Elasticsearch('10.120.xx.xxx:6xxx8')
print(es.info())
Thanks
On Thu, Sep 11, 2014 at 11:14 AM, Nimit Jain online.ni...@gmail.com wrote:
Hi All,
I need to call my server
Nice!
Have you looked at Warehouse [0]? It's a similar effort by the pypa
initiative, also using elasticsearch.
Honza
[0] - https://github.com/pypa/warehouse
On Fri, Jul 18, 2014 at 6:58 AM, Maciej Dziardziel fied...@gmail.com wrote:
Hi
Being frustrated with speed and inflexibility of pip
Hi Brian,
you seem to have hit an issue we have had with curator, there are some
solutions and workarounds on the github issue:
https://github.com/elasticsearch/curator/issues/77
hope this helps,
Honza
On Thu, Jul 17, 2014 at 6:22 AM, Brian brian.from...@gmail.com wrote:
No joy:
$ pip
Hi,
what method are you using in your python script? Have you looked at
the bulk and streaming_bulk helpers in elasticsearch-py?
http://elasticsearch-py.readthedocs.org/en/master/helpers.html
Hope this helps,
Honza
On Thu, May 22, 2014 at 11:09 AM, 潘飞 cnwe...@gmail.com wrote:
Hi all:
Now ,
Hi Brian,
that message you are seeing is not an error - it's a warning from the
python logging system that you don't have any logging configured. So
when elasticsearch tries to log something it cannot.
I'd suggest setting up your logging and trying again. To set up logging
just include:
import
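The snippet is cut off above; presumably it continued with a basic configuration along these lines, which gives every logger a handler and makes the warning go away (the exact continuation is an assumption):

```python
import logging

# A one-line stdlib logging configuration: attaches a handler to the
# root logger, so the "No handlers could be found" warning disappears.
logging.basicConfig(level=logging.INFO)
```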
and error those.
Axel
Am Samstag, 12. April 2014 02:03:24 UTC+2 schrieb Honza Král:
Hi axel,
If you are using python you can just use the python client
(elasticsearch-py) it will shield you from this. Just have a look at the
bulk and streaming_bulk helpers in the library.
Hope this helps,
Honza
Hi Pratik,
if you are using elasticsearch-py you can call any API by using:
es.transport.perform_request (0) directly, that will enable you to use
the carrot2 plugin:
es.transport.perform_request('POST',
'/test/test/_search_with_clustering', body={'search_request': ...})
Hope this helps,
Honza
0
Hi Matt,
that is curious, could you please try to enable trace logging for
elasticsearch-py and look at what exactly is being sent? My guess is that
there is something that needs to be escaped in python, though what that
might be eludes me for the time being.
To enable the logging just do:
import
Hi axel,
If you are using python you can just use the python client
(elasticsearch-py) it will shield you from this. Just have a look at the
bulk and streaming_bulk helpers in the library.
Hope this helps,
Honza
On Apr 11, 2014 7:52 PM, a...@mozilla.com wrote:
Thanks for the response.
My
Hello Josep,
you need to send the data as json, not as urlencoded, so:
values = {'doc': {'Parametros': PruebaParametros}}
postData = json.dumps(values)
req = urllib2.Request(query, postData)
response = urllib2.urlopen(req)
but unless you have very specific requirements I'd strongly suggest
Hi,
I am afraid the easiest solution here is to not use rivers but instead
do the loading yourself - using a dedicated python process to consume
the twitter stream, enriching the data and loading it into
elasticsearch using the streaming_bulk helper in the official python
client (0).
0 -
Hi Paulyne,
unfortunately with elasticsearch you need to update your documents
one-by-one using the Update API, I am not too familiar with pyes, but
it exposes the api as a method it seems:
http://pyes.readthedocs.org/en/latest/references/pyes.es.html?highlight=update#pyes.es.ES.update
The document you will put in will be merged with the document in
elasticsearch.
Honza
On Mar 18, 2014 6:06 PM, PAULINE BONNEAU pauline.bonneau...@gmail.com
wrote:
I'm sorry but I have just one more problem. I don't know what I must put
in the *document* parameter in the update function :
Hello Kent,
you can always access the raw transport and send any request you wish
for the unsupported APIs:
from elasticsearch import Elasticsearch
es = Elasticsearch()
data, status = es.transport.perform_request('PUT', '/_river/',
body={'type': 'fs',})
Hope this helps,
Honza Kral
On Thu,
Hi Ivan,
the python client should accept both strings and unicode though I
highly recommend you always use unicode to avoid encoding issues. When
it is sent to the server we always encode the data in UTF-8 so both
these strings will end up encoded the same.
To see exactly what's going on you can
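The equivalence is easy to demonstrate; both representations end up as the same UTF-8 bytes on the wire (Python 3 shown here; under Python 2 the same holds for str vs unicode):

```python
# A unicode string and its UTF-8 byte form round-trip to the same data,
# which is exactly what the client sends to the server.
text = u"Kr\u00e1l"
wire = text.encode("utf-8")

assert wire == b"Kr\xc3\xa1l"
assert wire.decode("utf-8") == text
```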
Hi Eric,
can you please have a look at which python you are using (which python) to
make sure you are using the system one and not one from virtualenv or
someplace else.
Also in python please do:
import sys
print sys.path
to verify if the directory
/usr/local/lib/python2.7/dist-packages
is
Hi,
the streaming_bulk function in elasticsearch-py is a helper that will
actually split the stream of documents into chunks and send them to
elasticsearch - it does not stream all documents to es as a single
request. It is impossible (due to the nature of bulk requests) for
elasticsearch to
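The chunking behaviour described above can be sketched in a few lines; this is not the helper's actual implementation, just the pattern (buffer actions into fixed-size chunks, send each chunk as one bulk request, with the send stubbed out here):

```python
# Minimal sketch of the chunking streaming_bulk performs: accumulate
# items into fixed-size chunks; each chunk would become one bulk request.
def chunked(iterable, chunk_size):
    chunk = []
    for item in iterable:
        chunk.append(item)
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:                 # flush the final, possibly short, chunk
        yield chunk

chunks = list(chunked(range(10), 4))
assert chunks == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```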
for
the request to complete?
On Sunday, March 2, 2014 5:51:15 PM UTC, Honza Král wrote:
Hi,
the streaming_bulk function in elasticsearch-py is a helper that will
actually split the stream of documents into chunks and send them to
elasticsearch - it does not stream all documents to es as a single
directly to reduce
the client side activity?
And actually, from a raw performance perspective, just call client.bulk?
Thanks
On Sunday, March 2, 2014 6:21:14 PM UTC, Honza Král wrote:
Well, you can just use any async http library to do it, but I wouldn't
recommend it since putting it all back
Hi Geza,
yes, this is correct behavior for elasticsearch now - since it wasn't
consistent before (sometimes you'd get back values, sometimes lists)
we changed it to be safe always (always return lists). This is not a
python-specific change, this is the difference in es output format.
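Concretely, the change means client code should unwrap values itself; a small sketch (the field name and value are made up):

```python
# The `fields` section of a hit now always wraps values in lists,
# even for single-valued fields, so unwrap explicitly.
old_style = {"supplier": "ACME"}     # what you could sometimes get before
new_style = {"supplier": ["ACME"]}   # what elasticsearch returns now

value = new_style["supplier"][0]
assert value == old_style["supplier"]
```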
If you're
: FALSE, supid:
01120041, ncontactphn: }
}
That's why I thought I was doing something wrong in Python.
Many thanks,
Geza
On Thursday, February 13, 2014 2:17:37 PM UTC, Honza Král wrote:
Hi Geza,
yes, this is correct behavior for elasticsearch now - since it wasn't
consistent before
Hi John,
elasticsearch-py supports python 2.6, I don't develop for it but I run
the tests under 2.6 and fix any bugs that occur. If you find any bugs,
I will be happy to fix them.
Thanks
On Mon, Jan 27, 2014 at 6:12 AM, John Stanford jxstanf...@gmail.com wrote:
Hi,
Just wanted to check on