Thank you from a happy user :)))
On Sat, Jun 1, 2013 at 8:33 AM, Antonio Valentino <antonio.valent...@tiscali.it> wrote:
> ===
> Announcing PyTables 3.0.0
> ===
>
> We are happy to announce PyTables 3.0.0.
>
> PyTables 3.0.0 comes after about 5 [...]
[...] requests welcome!
>
> https://github.com/PyTables/PyTables/issues/225
>
> Be Well
> Anthony
>
>
> On Wed, Apr 10, 2013 at 1:02 PM, Dr. Louis Wicker wrote:
>
>> I am also interested in this capability, if it exists in some way...
>>
>> Lou
>>
Hi,
Is there a way to get the behavior of readWhere (i.e., specify a condition
and get the result quickly) while also using a CSIndex, so that the rows come
back sorted in a particular order?
I looked at readSorted(), but it is iterative and does not allow specifying a
condition.
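One way to sketch the combination (file and column names such as quotes.h5,
price, and timestamp are assumptions, not from this thread) is to let
readWhere() do the fast, indexed selection and then sort the resulting
structured array by the CSIndex'ed column with NumPy:

    import numpy as np
    import tables

    h5 = tables.openFile("quotes.h5", mode="r")   # hypothetical file
    table = h5.root.quotes                        # hypothetical table

    # readWhere() evaluates the condition quickly (using any index on "price")
    # and returns a NumPy structured array.
    rows = table.readWhere("(price > 30.0) & (price < 35.0)")

    # Sort the selected rows in memory by the indexed column.
    rows_sorted = np.sort(rows, order="timestamp")

    h5.close()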
Julio
--
Thanks again :)
On Wed, Apr 10, 2013 at 1:53 PM, Anthony Scopatz wrote:
> On Wed, Apr 10, 2013 at 11:40 AM, Julio Trevisan wrote:
>
>> Hi Anthony
>>
>> Thanks again. If it is a problem related to floating-point precision,
>> I might use an Int64Col instead, since I don't need the timestamp
>> milliseconds.
Hi Anthony
Thanks again. If it is a problem related to floating-point precision, I
might use an Int64Col instead, since I don't need the timestamp milliseconds.
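A minimal sketch of that Int64Col idea (the table layout, file name, and
price column are assumptions, not from the thread): storing whole-second
timestamps as integers makes the equality test in the condition exact.

    import time
    import tables

    # Hypothetical description: timestamps stored as whole epoch seconds.
    class Quote(tables.IsDescription):
        timestamp = tables.Int64Col()   # no fractional part needed
        price = tables.Float64Col()

    h5 = tables.openFile("quotes.h5", mode="w")
    table = h5.createTable(h5.root, "quotes", Quote)

    t = int(time.time())
    row = table.row
    row["timestamp"] = t
    row["price"] = 19.27
    row.append()
    table.flush()

    # Integer equality in the condition is exact, unlike a float comparison.
    matches = table.readWhere("timestamp == %d" % t)

    h5.close()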
Julio
On Wed, Apr 10, 2013 at 1:17 PM, Anthony Scopatz wrote:
> On Wed, Apr 10, 2013 at 7:44 AM, Julio Trevisan wrote:
Hi,
I am using a Time64Col called "timestamp" in a condition, and I noticed
that the condition does not work (i.e., no rows are selected) if I write
something like:
for row in node.where("timestamp == %f" % t):
...
However, I had the idea of dividing the values by, say, 1000, and that does
work:
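The exact expression isn't quoted here, so the scaled comparison below is
only a guess at its form; the interval test at the end is the more
conventional fix for comparing floats (file, table, and column names are
assumptions).

    import tables

    # Assumes an already-populated table with a Time64Col "timestamp"
    # and a Float64Col "price"; names are hypothetical.
    h5 = tables.openFile("quotes.h5", mode="r")
    node = h5.root.quotes
    t = node[0]["timestamp"]          # a timestamp known to be in the table

    # Exact float equality often fails because of rounding:
    exact = [r["price"] for r in node.where("timestamp == %f" % t)]

    # The scaled comparison described above (a guess at the exact form):
    scaled = [r["price"] for r in
              node.where("(timestamp/1000) == %f" % (t / 1000.0))]

    # Matching a small interval around t is the more robust way to do it:
    eps = 1e-3                        # tolerance in seconds
    close = [r["price"] for r in node.where(
        "(timestamp > %f) & (timestamp < %f)" % (t - eps, t + eps))]

    h5.close()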
: took 0.073058 seconds to do everything else
(database)DEBUG:BOVESPA.VISTA.PETR4: took 0.24 seconds to ZIP
On Fri, Mar 22, 2013 at 12:35 PM, Anthony Scopatz wrote:
> On Fri, Mar 22, 2013 at 7:11 AM, Julio Trevisan wrote:
>
>> Hi,
>>
>> I just joined this list,
Hi,
I just joined this list. I am using PyTables for my project, and it works
great and fast.
I am just trying to optimize some parts of the program and I noticed that
zipping the tuples to get one tuple per column takes much longer than
reading the data itself. The thing is that readWhere() retur
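For context, readWhere() returns a NumPy structured array, so per-column data
can be pulled out by field name rather than by zipping row tuples in Python;
a minimal sketch (file, table, and column names are assumptions):

    import tables

    h5 = tables.openFile("quotes.h5", mode="r")   # hypothetical file
    table = h5.root.quotes                        # hypothetical table

    rows = table.readWhere("price > 30.0")   # structured array of matches
    timestamps = rows["timestamp"]           # one contiguous array per column
    prices = rows["price"]

    # Same result as zip(*rows) column-wise, but done by NumPy in C
    # instead of building Python tuples.
    h5.close()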