On 4/26/12 4:04 AM, Alvaro Tejero Cantero wrote:
> Sure, but this is not in the spirit of a compressor adapted to the blocking
> technique (in the sense of [1]). For a compressor that works with
> blocks, you need to add some metainformation for each block, and that
> takes space. A ratio of 350 to ...
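A rough back-of-the-envelope sketch of that ceiling. All the per-chunk figures below are my own assumptions (they are not given in the thread); the point is only that metadata stored per block puts a hard floor under the compressed size, and hence a hard cap on the ratio:

    # assumed figures: 10**8 bytes of raw data, 64 KB chunks,
    # and ~16 bytes of per-chunk metadata (assumption, not measured)
    nbytes = 10**8
    chunk = 64 * 1024
    meta_per_chunk = 16
    nchunks = nbytes // chunk
    floor = nchunks * meta_per_chunk      # space used even if every chunk compresses to nothing
    print(nbytes / float(floor))          # upper bound on the achievable ratio (~4100 here)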
On Thu, Apr 26, 2012 at 04:07, Francesc Alted wrote:
> On 4/25/12 7:05 AM, Alvaro Tejero Cantero wrote:
>> Hi, a minor update on this thread
>>
* a bool array of 10**8 elements with True in two separate slices of
length 10**6 each compresses by ~350. Using .wheretrue to obtain
indices is faster by a factor of 2 to 3 than np.nonzero(normal numpy array). ...
On 4/25/12 7:05 AM, Alvaro Tejero Cantero wrote:
> Hi, a minor update on this thread
>
>>> * a bool array of 10**8 elements with True in two separate slices of
>>> length 10**6 each compresses by ~350. Using .wheretrue to obtain
>>> indices is faster by a factor of 2 to 3 than np.nonzero(normal numpy array). ...
Hi, a minor update on this thread
>> * a bool array of 10**8 elements with True in two separate slices of
>> length 10**6 each compresses by ~350. Using .wheretrue to obtain
>> indices is faster by a factor of 2 to 3 than np.nonzero(normal numpy
>> array). The resulting filesize is 248kb, still fa ...
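For concreteness, a sketch of how one might reproduce the measurement being described. The Blosc-compressed CArray reproduces the on-disk compression of the bool array; the wheretrue line assumes the carray package, which is my assumption since the excerpt does not name the container:

    import numpy as np
    import tables as tb

    a = np.zeros(10**8, dtype=bool)
    a[10**7:10**7 + 10**6] = True            # two separate True slices of length 10**6
    a[5 * 10**7:5 * 10**7 + 10**6] = True

    with tb.open_file("bools.h5", "w") as f:
        f.create_carray("/", "mask", obj=a,
                        filters=tb.Filters(complevel=5, complib="blosc"))

    idx_np = np.nonzero(a)[0]                # plain numpy route to the True indices

    # compressed-container route, assuming the carray package is what .wheretrue refers to:
    # import carray as ca
    # idx_ca = np.fromiter(ca.carray(a).wheretrue(), dtype=np.int64)

Timing the two index-extraction routes (e.g. with %timeit in IPython) is what would give the factor-of-2-to-3 difference quoted above.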
On 3/29/12 10:49 AM, Alvaro Tejero Cantero wrote:
>>>What is your advice on how to monitor the use of
>>> memory? (I need this until PyTables is second skin).
>> top?
> I had so far used it only in a very rudimentary way and found the man
> page quite intimidating. Would you care to share your tips for this particular scenario? ...
>> What is your advice on how to monitor the use of
>> memory? (I need this until PyTables is second skin).
>
> top?
I had so far used it only in a very rudimentary way and found the man
page quite intimidating. Would you care to share your tips for this
particular scenario? (e.g. how do you keep ...)
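Not from the thread, but one concrete way to watch the resident memory of the Python process itself while PyTables is working; this assumes the psutil package, which is not mentioned above:

    import os
    import psutil

    proc = psutil.Process(os.getpid())
    rss_mb = proc.memory_info().rss / (1024.0 * 1024.0)   # resident set size in MB
    print("resident memory: %.1f MB" % rss_mb)

Interactively, watching the RES column of `top -p <pid>` gives the same number without any code changes.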
On 3/28/12 10:15 AM, Alvaro Tejero Cantero wrote:
> That is a perfectly fine solution for me, as long as the arrays aren't
> copied in memory for the query.
No, the arrays are not copied in memory. They are just read from disk
block-by-block and then the output is directed to the iterator, or an ...
That is a perfectly fine solution for me, as long as the arrays aren't
copied in memory for the query.
Thank you!
Thinking that your proposed solution uses iterables to avoid it, I tried:
boolcond = pt.Expr('(exp(a)<0.9)&(a*b>0.7)|(b*sin(a)<0.1)')
indices = [i for i,v in boolcond if v]
(...) TypeError ...
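A guess at the fix, not text from the thread: if iterating a tables.Expr yields one boolean per row, the row index has to come from enumerate rather than from tuple unpacking, which would explain the TypeError. A self-contained sketch with stand-in operands:

    import numpy as np
    import tables as pt

    # stand-ins for the on-disk columns; Expr also accepts PyTables Array/CArray objects
    a = np.random.rand(1000)
    b = np.random.rand(1000)

    boolcond = pt.Expr('(exp(a) < 0.9) & (a*b > 0.7) | (b*sin(a) < 0.1)')
    indices = [i for i, v in enumerate(boolcond) if v]   # index supplied by enumerate

boolcond.eval() would give the same answer in one shot (followed by .nonzero()), but at the cost of materializing the full boolean array in memory, which is what the iterator route avoids.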
On 3/27/12 6:34 PM, Francesc Alted wrote:
> Another option that occurred to me recently is to save all your
> columns as unidimensional arrays (Array object, or, if you want
> compression, a CArray or EArray), and then use them as components of a
> boolean expression using the class `tables.Expr` ...
Another option that occurred to me recently is to save all your columns
as unidimensional arrays (Array object, or, if you want compression, a
CArray or EArray), and then use them as components of a boolean
expression using the class `tables.Expr`. For example, if a, b and c
are unidimensional ...
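A sketch of that layout under my own assumptions about names, sizes and the expression (and using the modern snake_case PyTables API): three Blosc-compressed unidimensional CArrays combined through tables.Expr:

    import numpy as np
    import tables as tb

    f = tb.open_file("columns.h5", "w")
    filters = tb.Filters(complevel=5, complib="blosc")
    n = 10**6
    for name in ("a", "b", "c"):
        f.create_carray("/", name, obj=np.random.rand(n), filters=filters)

    # bind the expression variables to the on-disk columns
    a, b, c = f.root.a, f.root.b, f.root.c
    expr = tb.Expr('(a > 0.5) & (b < 0.1) | (c > 0.9)')
    result = expr.eval()     # evaluated blockwise (via numexpr), returned as a numpy array
    f.close()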
On 3/27/12 2:20 AM, Alvaro Tejero Cantero wrote:
>>> (but how to grow it in columns without deleting & recreating?)
>> You can't (at least not cheaply). Maybe you want to create
>> additional tables and group them according to the columns you are
>> going to need for your queries.
> Sorry, it is not clear to me: ...
>> (but how to grow it in columns without deleting & recreating?)
>
> You can't (at least not cheaply). Maybe you want to create
> additional tables and group them according to the columns you are
> going to need for your queries.
Sorry, it is not clear to me: create new tables and (grouping ...
Hi Alvaro,
On 3/26/12 12:43 PM, Alvaro Tejero Cantero wrote:
> Would it be an option to have
>
> * raw data on one table
> * all imaginable columns used for query conditions in another table
Yes, that sounds like a good solution to me.
> (but how to grow it in columns without deleting & recreating?) ...
Would it be an option to have
* raw data on one table
* all imaginable columns used for query conditions in another table
(but how to grow it in columns without deleting & recreating?)
and fetch indexes for the first based on .whereList(condition) of the second?
Are there alternatives?
-á.
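A sketch of the two-table layout proposed above, with invented names and column types; the call for fetching matching row numbers in current PyTables is Table.get_where_list (getWhereList in the 2.x API), and read_coordinates then pulls those rows from the raw table:

    import tables as tb

    f = tb.open_file("two_tables.h5", "w")

    # table 1: raw data; table 2: the columns used in query conditions, row-aligned with table 1
    raw = f.create_table("/", "raw", {"waveform": tb.Float32Col(shape=(32,))})
    qry = f.create_table("/", "query", {"amplitude": tb.Float64Col(),
                                        "channel": tb.Int32Col()})

    # ... append the data so that row i of 'raw' corresponds to row i of 'query' ...

    coords = qry.get_where_list("(amplitude > 0.7) & (channel == 3)")   # matching row numbers
    rows = raw.read_coordinates(coords)                                 # fetch those rows from the raw table
    f.close()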
Hi there,
I am following advice by Anthony and giving a go at representing
different sensors in my dataset as columns in a Table, or in several
Tables. This is about in-kernel queries.
The documentation of condvars in Table.where [1] says "condvars should
consist of identifier-like strings pointing ...
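For what it's worth, a minimal condvars usage sketch; the file, table and column names are invented, but the pattern of mapping identifier-like names to Column instances or plain values is the documented one:

    import tables as tb

    f = tb.open_file("data.h5", "r")
    table = f.root.readings

    lim = 42.0
    rows = table.read_where("(p > lim) & (t < 300)",
                            condvars={"p": table.cols.pressure,    # Column instances...
                                      "t": table.cols.temperature,
                                      "lim": lim})                  # ...or plain values
    f.close()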