Thx
I was debugging a colleague's installation and I hadn't noticed that the
lzo2.dll was 64-bit.
All working now.
-----Original Message-----
From: Francesc Alted [mailto:[email protected]]
Sent: Fri 7-Jan-2011 16:49
To: Discussion list for PyTables
Subject: Re: [Pytables-users] error in importing tables w
Hi
I get the following error when importing tables if I have lzo2.dll in my
path. Any ideas?
C:\Shared\Python26QV>path
PATH=c:\windows\system32
C:\Shared\Python26QV>python
Python 2.6.6 (r266:84297, Aug 24 2010, 18:46:32) [MSC v.1500 32 bit
(Intel)] on win32
Type "help", "copyright", "credits" or
So I am now keeping the summaries in a separate table (3 actually: 2 VLArrays for
the attributes and dtypes and 1 Table for the numeric data). It is 30 times
faster to load for a small set of tables (59 ms compared to 1.7 s), and 50 times
faster for a large set of tables (714 ms compared to 35.5 s).
Is
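A minimal sketch of that three-node summary layout, using the same PyTables 2.x
calls that appear later in the thread (the group name, columns and example
payloads below are illustrative assumptions, not the schema from the post):

import tables as pytables

# One row per summarised source table: two VLArrays hold the variable-length
# attribute/dtype strings, one Table holds the numeric summary values.
class SummaryRow(pytables.IsDescription):
    table_id = pytables.Int64Col()   # hypothetical key back to the source table
    mean = pytables.Float64Col()
    minimum = pytables.Float64Col()
    maximum = pytables.Float64Col()

f = pytables.openFile("c:\\data\\tsdb_summary.hd5", mode="a")
grp = f.createGroup("/", "summary", "Cached table summaries")
attrs_vla = f.createVLArray(grp, "attributes", pytables.VLStringAtom(),
                            "Encoded attribute set, one row per source table")
dtypes_vla = f.createVLArray(grp, "dtypes", pytables.VLStringAtom(),
                             "dtype description, one row per source table")
numbers = f.createTable(grp, "numbers", SummaryRow, "Numeric summary data")

attrs_vla.append("currency=GBP;source=csv")    # illustrative payloads
dtypes_vla.append("float64,float32")
row = numbers.row
row["table_id"], row["mean"], row["minimum"], row["maximum"] = 0, 1.5, 0.0, 3.0
row.append()
numbers.flush()
f.close()

The point of the layout is that a summary load reads three nodes instead of
touching every source table.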
Hi Francesc,
The challenge I face is getting it off my machine - I've ordered an encrypted
USB stick that I'll post to you in due course.
Currently, it appears that memory is consumed in the numpy loadtxt code (whilst
loading the csv) and not released until I do a PyTables file close() statement.
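One way to keep the peak footprint down (a sketch only: the file names, the
two-column record layout, chunk size and expectedrows are assumptions, not from
the original post) is to feed loadtxt the csv in slices and append each slice
to the destination Table as it arrives:

import itertools
import numpy
import tables as pytables

CHUNK_ROWS = 100000
record = numpy.dtype([("date", numpy.float64), ("value", numpy.float64)])

f = pytables.openFile("c:\\data\\tsdb.hd5", mode="a")
table = f.createTable("/", "prices", record, "Loaded from csv",
                      expectedrows=10 ** 7)

csv = open("c:\\data\\prices.csv")
while True:
    lines = list(itertools.islice(csv, CHUNK_ROWS))   # next slice of raw lines
    if not lines:
        break
    chunk = numpy.atleast_1d(numpy.loadtxt(lines, dtype=record, delimiter=","))
    table.append(chunk)       # only one chunk is ever held in memory
    table.flush()             # push it to disk before reading the next slice
csv.close()
f.close()

This avoids loadtxt materialising the whole csv as a single in-memory array
before anything reaches the HDF5 file.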
Hi Francesc
It's 1.2GByte so I'll have to burn a DVD and post it to you - first I need to
find out how to do that. I had a MemoryError loading the data in the first
place. Is there a way to check that the file format is good?
Basically the current file has about 1200 tables (float64, float32,
Hi
I'm using a negative number so I can search for the table using the getTables
function (as recommended in a previous post). The problem reproduces for 256,
-256 and 2000.
Pretty sure the MemoryError is on the PyTables side - below is a standalone
script that reproduces it.
Apols about the con
Hi
I have loaded around 2200 tables totalling 1.1 GB (I have a lot more to
load - the csv size is 12 GB). The load crashes with a memory error:
Script run-time error
-
Traceback (most recent call last):
File "c:\QuantView\App\a2\ubs-packages\qa\scripting.py", line 48, in
executeScrip
D'oh. I think I missed it because I was looking for the words
"documentation" or "document". I also looked in HelpContents which was
about the wiki and not PyTables. The search didn't help me either.
The PyTables side, once I figured out how to use it, was simple. I'm needing
to write a memory cache
Hi
I've found it useful to make the distinction between an index and an
offset in code and documentation, as the two terms are then unambiguous. Typically
C programmers think in terms of offsets but call them indexes, which means
the audience has to stop and work out what is meant.
It may be (as in I would
I hope I'm not missing something but I couldn't easily find the latest
pdf on the website at all. Maybe add a link to the front page or put it
in the FAQ, do you think?
Thanks
David
PS I have PyTables running off a RAID 10 15krpm SAS drive and it flies
Is this a bug in the documentation or have I missed something?
Cheers
David
Hi
Thx. It's working faster.
I'm opening a table like this:
import tables as pytables
tsdb = pytables.openFile("c:\\data\\tsdb.hd5", mode="a", NODE_CACHE_SLOTS=-1)
I then list all the attribute names thus:
def getAllAttributeNames(tsdb):
    answer = set()
    for node in tsdb:
        if isin
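Reading past the truncation, a minimal completion of that helper might look
like this (assuming the intent is simply to collect every user-set attribute
name from every leaf; _v_attrnamesuser is the list of user attributes on a
node's AttributeSet):

def getAllAttributeNames(tsdb):
    answer = set()
    for node in tsdb:
        if isinstance(node, pytables.Leaf):
            # collect only user attributes, not the system ones
            answer.update(node.attrs._v_attrnamesuser)
    return answer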
Hi
I currently have 160 or so tables with attributes on. I wish to find all
the tables with a certain set of attributes. This code is running a
little slow:
def getTables(tsdb, attributes):
    answer = []
    for node in tsdb:
        if isinstance(node, pytables.Leaf):
            matches =
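A possible completion of the matching logic (the preview cuts off at
"matches ="; the assumption here is that attributes is a dict of
{attribute name: required value}):

def getTables(tsdb, attributes):
    answer = []
    for node in tsdb:
        if isinstance(node, pytables.Leaf):
            names = node.attrs._v_attrnamesuser
            # a leaf matches only if every requested attribute is present
            # and holds the requested value
            matches = all(name in names and getattr(node.attrs, name) == value
                          for name, value in attributes.items())
            if matches:
                answer.append(node)
    return answer

Checking _v_attrnamesuser first keeps a missing attribute from raising an
error instead of simply not matching.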
Hi
I have some numeric data with missing points. Is there an equivalent of a
numpy masked array in PyTables, or do I need to add a mask field myself?
Thx
David
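As far as I know there is no direct masked-array counterpart in PyTables, so
the "add a mask field myself" route would look something like the sketch below
(the table and column names are illustrative only):

import numpy
import tables as pytables

# Store a boolean column next to the data and rebuild a numpy masked
# array on the way back out.
class MaskedSeries(pytables.IsDescription):
    value = pytables.Float64Col()
    masked = pytables.BoolCol()

f = pytables.openFile("c:\\data\\masked.hd5", mode="w")
t = f.createTable("/", "series", MaskedSeries)

data = numpy.ma.masked_invalid([1.0, numpy.nan, 3.0])
row = t.row
for value, masked in zip(data.filled(0.0), numpy.ma.getmaskarray(data)):
    row["value"] = value
    row["masked"] = masked
    row.append()
t.flush()

# Reading it back as a masked array:
rows = t.read()
restored = numpy.ma.masked_array(rows["value"], mask=rows["masked"])
f.close()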