users.html?highlight=query#creating-an-index
so that you can accelerate queries involving indexed columns.
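To illustrate what the linked section describes, here is a minimal sketch of indexing a column so queries on it are accelerated. This assumes PyTables 3.x spellings (`create_index`); the file and column names are made up for the example.

```python
# Sketch: create an index on a column, then query through it.
# Assumes PyTables 3.x; 'energy' and the file name are hypothetical.
import os
import tempfile

import numpy as np
import tables


class Particle(tables.IsDescription):
    energy = tables.Float64Col()


fname = os.path.join(tempfile.mkdtemp(), "demo.h5")
with tables.open_file(fname, "w") as h5:
    tbl = h5.create_table("/", "particles", Particle)
    tbl.append([(e,) for e in np.linspace(0.0, 1.0, 1000)])
    tbl.flush()
    tbl.cols.energy.create_index()  # queries on 'energy' can now use the index
    hits = [r["energy"] for r in tbl.where("energy > 0.5")]
    print(len(hits))
```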
--
Francesc Alted
pandas stores using PyTables
and embeds extra metadata in the attributes to enable deserialization to the
original pandas structure
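For illustration, the round-trip looks roughly like this. A sketch assuming pandas with its optional PyTables (`tables`) dependency installed; the path and key are made up.

```python
# Sketch: pandas persists a DataFrame through PyTables and stores extra
# metadata in HDF5 attributes so the original structure is restored on read.
import os
import tempfile

import pandas as pd

path = os.path.join(tempfile.mkdtemp(), "store.h5")
df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

# format='table' creates a queryable PyTables Table under the hood
df.to_hdf(path, key="mydf", format="table")
back = pd.read_hdf(path, "mydf")
print(back.equals(df))
```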
On Jul 28, 2013, at 11:23 AM, Francesc Alted wrote:
> On 7/28/13 10:21 AM, David Reed wrote:
>> maybe I wasn't aware of this, but has
han the former. PyTables is a standalone
library, but Pandas uses it as another storage backend.
--
Francesc Alted
helper_classes.html#the-filters-class
This stems from the fact that in HDF5 a compressor is just like another
data filter.
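Concretely, that means a compressor is chosen the same way as any other filter, through the `Filters` class. A minimal sketch, assuming PyTables 3.x with the Blosc filter available:

```python
# Sketch: compression is configured as just another HDF5 filter.
import os
import tempfile

import numpy as np
import tables

fname = os.path.join(tempfile.mkdtemp(), "filters.h5")
filters = tables.Filters(complevel=5, complib="blosc", shuffle=True)

with tables.open_file(fname, "w") as h5:
    h5.create_carray("/", "data", obj=np.arange(10000), filters=filters)

# Reopen and check that the filter settings round-tripped with the data
with tables.open_file(fname) as h5:
    node = h5.root.data
    complib_used, last = node.filters.complib, int(node[-1])
    print(complib_used, last)
```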
--
Francesc Alted
the
compression library (in case you are using compression) to work much
more efficiently for the table.
HTH,
-- Francesc Alted
> On Mon, Jun 10, 2013 at 4:37 PM, Francesc Alted <fal...@gmail.com> wrote:
>
> Hi Ed,
>
> After fixing the issue, has performance been enhanced? I'm
> the one
> who put the warning, so I'm curious whether this actually h
> ___
> Pytables-users mailing list
> Pytables-users@lists.sourceforge.net
> <mailto:Pytables-users@lists.sourceforge.net>
> https://lists.sourceforge.net/li
ks Tim,
>>
>> I adapted your example for my use case (I'm using the EArray class,
>> because I need to continuously update my database), and it works well.
>>
>> However, when I use this with my own data (but also creating the arrays
>> like you did), I'm running into errors like "Could not wait on barrier".
>> It seems like the HDF library is spawning several threads.
>>
>> Any idea what's going wrong? Can I somehow avoid HDF5 multithreading at
>> runtime?
> Update:
>
> When setting max_blosc_thread
My congrats for the hard effort too. I am very pleased to see the PyTables
project so healthy and well managed. Thanks to all the developers, and
especially Antonio and Anthony. You guys rock!
Francesc
On 02/06/2013 17:54, "Anthony Scopatz" wrote:
> Congratulations All!
>
> This is a huge
s (with the introduction of netcdf4-python this was removed).
But it would be interesting to have it around again. It would be nice
if you could contribute the PR, together with some docs (a small tutorial
would be really great
http://groups.google.es/group/blosc
Enjoy Data!
--
Francesc Alted
http://groups.google.es/group/blosc
Francesc Alted
http://code.google.com/p/numexpr/
You can get the packages from PyPI as well:
http://pypi.python.org/pypi/numexpr
Share your experience
=====================
Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.
En
On 27/04/2013 9:27, "Antonio Valentino" wrote:
>
> Hi Francesc,
>
> > On 26/04/2013 14:11, Francesc Alted wrote:
> > Hi Antonio,
> >
> > On 26/04/13 08:46, Antonio Valentino wrote:
> >> Hi Francesc,
> >>
> >>
Hi,
I'm happy to announce the availability of Blosc 1.2.1 RC1. This is
mainly a fix for a problem with multithreading on Windows platforms.
The fix was important enough for deserving the version bump. Thanks a
lot to Christian Gohlke for proposing the fix: it works really well.
It exists cu
Hi Antonio,
On 26/04/13 08:46, Antonio Valentino wrote:
> Hi Francesc,
>
> On 25/04/2013 23:06, Francesc Alted wrote:
>> Thanks. Will do!
> Thanks.
> For the record, patches 0002 and 0003 close issues [75] and [77].
> Also numexpr 2.1 closes [91] a
Thanks. Will do!
On 25/04/2013 21:02, "Antonio Valentino" wrote:
> Hi Francesc,
>
> On 14/04/2013 22:19, Francesc Alted wrote:
> >
> >Announcing Numexpr 2.1RC1
> >
> >
> >
On 4/22/13 8:11 AM, Antonio Valentino wrote:
> Hi Francesc,
>
> On 21/04/2013 21:46, Francesc Alted wrote:
>> Hi,
>>
>> I'm happy to announce the availability of Blosc 1.2.0 RC1. It exists
>> currently just as a tag in the github repo
>> (https:/
sc mailing list at:
bl...@googlegroups.com
http://groups.google.es/group/blosc
--
Francesc Alted
Uploaded numexpr 2.1 RC2 with your suggestions. Thanks!
Francesc
On 14/04/13 23:12, Christoph Gohlke wrote:
> On 4/14/2013 1:19 PM, Francesc Alted wrote:
>>
>> Announcing Numexpr 2.1RC1
>>
>>
>
On 14/04/13 23:12, Christoph Gohlke wrote:
> Hello,
>
> Looks good. All tests pass here on Python 2.6-3.3, 32&64 bit, numpy
> 1.7.1, VML/MKL 11.0.3, Windows 8. PyTables 2.4 also tests OK against the rc.
>
> Two small issues:
>
> 1) numexpr-2.1-rc1.tar.gz is missing the file missing_posix_
Let us know of any bugs, suggestions, gripes, kudos, etc. you may
have.
Enjoy!
--
Francesc Alted
Hi Jon and Anthony,
I can confirm that this is a package error of PyTables in Anaconda CE 64
for Windows. We have filed a ticket in Anaconda to fix this. Sorry
for the inconvenience.
Francesc Alted
On 2/15/13 4:56 PM, Anthony Scopatz wrote:
> Hi Jon,
>
> Unfortunately, I have
e length out of the first column
>
>
> /home/tejero/Local/Envs/test/lib/python2.7/site-packages/carray/ctable.pyc
> in read_meta_and_open(self)
> 40 # Initialize the cols by instantiating the carrays
>
> 41 for name, dir_ in data['dirs'].items
r/include/python2.7 -c carray/carrayExtension.c -o
> build/temp.linux-x86_64-2.7/carray/carrayExtension.o -msse2" failed
> with exit status 4
>
>
>
> -á.
>
>
>
> On 7 December 2012 12:47, Francesc Alted <fal...@gmail.com> wrote:
>
> On 1
m/PyTables/PyTables/issues/141#issuecomment-5018763
You already found the answer.
>
> * is/will it be possible to load PyTables carrays as in-memory carrays
> without decompression?
Actually, that has been my idea from the very beginning. The concept of
'flavor' for the re
cparams := cparams(clevel=5, shuffle=True)
rootdir := 'test'
[59 34 36 ..., 21 58 50]
In [30]: ca.set_nthreads(6)
Out[30]: 1
In [31]: timeit acd[:]
1 loops, best of 3: 317 ms per loop
In [32]: ca.set_nthreads(1)
Out[32]: 6
In [33]: timeit acd[:]
1 loops, best of 3: 361 ms per
ed/carray/blob/master/carray/carrayExtension.pyx#L651
It should not be too difficult to come up with an optimal implementation
using a chunk-based approach.
--
Francesc Alted
On 11/2/12 5:19 PM, Ben Elliston wrote:
> On Fri, Nov 02, 2012 at 04:56:55PM -0400, Francesc Alted wrote:
>
>> Hmm, that's strange. Using lzo or zlib works for you?
> Well, it seems that switching compression algorithms could be a
> nightmare (or can I do this with ptrepac
ssing.Pool, like so:
>
> if __name__ == '__main__':
>     pool = Pool(processes=2)  # start 2 worker processes
>     items = load_items()
>     pool.map(process_items, items)
>
Hmm, that's st
sc by
setting the MAX_BLOSC_THREADS parameter:
http://pytables.github.com/usersguide/parameter_files.html?#tables.parameters.MAX_BLOSC_THREADS
to 1.
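For reference, both ways of setting this look roughly like the sketch below; PyTables documents that any entry in `tables/parameters.py` can also be overridden as a keyword argument to `open_file` (the file path shown is made up).

```python
# Sketch: pinning Blosc to a single thread to avoid HDF5-side multithreading.
import tables

# Globally, before opening any file:
tables.parameters.MAX_BLOSC_THREADS = 1

# Or per file, as a parameter override at open time (hypothetical path):
# h5 = tables.open_file("data.h5", "r", MAX_BLOSC_THREADS=1)
print(tables.parameters.MAX_BLOSC_THREADS)
```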
HTH,
--
Francesc Alted
the
file using either ptrepack, PyTables' own tool, or the HDF5 native tool
called h5repack.
HTH,
--
Francesc Alted
On 10/31/12 4:05 PM, Francesc Alted wrote:
> On 10/31/12 4:02 PM, Francesc Alted wrote:
>> On 10/31/12 10:12 AM, Andrea Gavana wrote:
>>> Hi Francesc & All,
>>>
>>> On 31 October 2012 14:13, Francesc Alted wrote:
>>>> On 10/31/12 4:30 A
On 10/31/12 4:02 PM, Francesc Alted wrote:
> On 10/31/12 10:12 AM, Andrea Gavana wrote:
>> Hi Francesc & All,
>>
>> On 31 October 2012 14:13, Francesc Alted wrote:
>>> On 10/31/12 4:30 AM, Andrea Gavana wrote:
>>>> Thank you for all your suggestions
On 10/31/12 10:12 AM, Andrea Gavana wrote:
> Hi Francesc & All,
>
> On 31 October 2012 14:13, Francesc Alted wrote:
>> On 10/31/12 4:30 AM, Andrea Gavana wrote:
>>> Thank you for all your suggestions. I managed to slightly modify the
>>> script you atta
ion time: 7.652
Hmm, on my modest Core2 laptop I'm getting this:
H5 file creation time: 1.294
Also, by using compression with zlib level 1:
H5 file creation time: 1.900
And using blosc level 5:
H5 file creation time: 0.244
HTH,
--
Francesc Alted
ries, but you can always use a regular query for that, i.e.
something along the lines of:
np.fromiter((r for r in table if 'CLZ' in r['symbol']), dtype=table.dtype)
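The same substring-filter pattern works on a plain NumPy structured array, which may make the idiom clearer. The symbols below are made up, and the comparison uses bytes because NumPy `S` fields are byte strings under Python 3.

```python
# Sketch: keep records whose 'symbol' field contains a substring,
# rebuilding a structured array with np.fromiter.
import numpy as np

dtype = np.dtype([("symbol", "S8"), ("price", "f8")])
table = np.array([(b"CLZ12", 1.0), (b"ABC", 2.0), (b"XCLZ", 3.0)],
                 dtype=dtype)

hits = np.fromiter((tuple(r) for r in table if b"CLZ" in r["symbol"]),
                   dtype=dtype)
print(hits["symbol"])
```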
--
Francesc Alted
On 10/27/12 12:21 PM, Antonio Valentino wrote:
> Hi Francesc,
> congratulations!
>
> Il 27/10/2012 13:16, Francesc Alted ha scritto:
>> Hi,
>>
>> You may be interested in my IPython notebooks and slides for the conference:
>>
>> http://pytables.org/downloa
only 45 minutes for the presentation, so I have not
been able to show the PyTables file samples that some of you kindly
sent to me (but I'll keep them for the future, one never knows!).
--
Francesc Alted
is about 308 mb compressed and 610 mb uncompressed.
>
> Jason
>
> On Sun, Oct 21, 2012 at 1:01 PM, Andy Wilson
> <wilson.andre...@gmail.com> wrote:
>
> On Sun, Oct 21, 2012 at 10:41 AM, Francesc Alted
> <fal...@pytables.org> wrote:
>
> >
me for how open source projects
> /should/ be run!
>
> I'd also really like to thank Antonio for driving new features into
> the code base!
>
> If only we were all on the same continent, we could have a PyTables
> birthday
> party or something...
>
> Be Wel
oking for files
that are not very large (< 1GB), and that use the Table object
significantly. A small description of the data included will be more
than welcome too!
Thanks!
--
Francesc Alted
home page. Perhaps in the next few days.
Feedback welcome!
--
Francesc Alted PGP KeyID: 0x61C8C11F
Scientific applications developer
Public PGP key available: http://www.openlc.org/falted_at_openlc.asc
Key fingerprint = 1518 38FE 3A3D 8BE8 24A0 3E5B 1328 32CC 61C8 C11F
reach infinite
scalability is a bit audacious :) All the CArrays are datasets that
have to be saved internally by HDF5, and that requires quite a few
resources to keep track of them.
--
Francesc Alted
iling list at:
bl...@googlegroups.com
http://groups.google.es/group/blosc
--
Francesc Alted
ng the MIT license, see LICENSES/BLOSC.txt for
details.
Mailing list
There is an official mailing list for Blosc at:
bl...@googlegroups.com
http://groups.google.es/group/blosc
--
Francesc Alted
t was true until version 0.5, where disk persistence was
introduced. Now, carray supports both in-memory and on-disk objects,
and they work in exactly the same way.
--
Francesc Alted
tp://pytables.github.com/usersguide/optimization.html
but I'm sure you already know this.
Frankly, if you want to enhance the speed of column retrieval, you are
going to need an object that is stored in column-order. In this sense,
you may want to experiment with the ctable
blosc
**Enjoy data!**
--
Francesc Alted
ou created 30,000 groups in the same group?
Regarding the LRU cache, no, I don't think this is the problem, but
rather how HDF5 implements the 'inodes' (or whatever they call that).
This is a big issue in general (inodes in filesystems have similar
problems too), and what hurts perform
etected performance problems because of this. My experience is
that it is better to split the datasets across different groups, so that you
don't exceed, say, 1000 per group. But I might be wrong...
--
Francesc Alted
sure to let you know when the video goes up. I think that we
definitely had some PyTables / HDF5 converts today.
I should also note that Antonio put out the v2.4-rc /during/ my
tutorial ;0.
Enjoy data!
Anthony
1. https://github.com/scopatz/scipy2012/tree/master/hdf5
--
Francesc Alted
ython3
> * handle str/unicode issues
> * full support to unicode HDF5 object names
> * start working on a good setup for 2to3 (needs some investigation)
> * ...
>
> Please let me know if you think there are other points that are
> important for python3 suppor
y the 64 KB
limit. The problem is rather that having too many children hanging from
a single group affects performance quite negatively (the same happens
with regular filesystems having directories with too many files).
--
Francesc Alted
symbol table by timestamp.
The range of possibilities is really large, yes, but I'd try to avoid
sharding because it is normally harder to set up and manage, but you are
of course free to try whatever approaches you feel are best for you.
HTH,
--
Francesc Alted
Please note that
PyTables can only deal with HDF5 files. For HDF4 I'd rather use pyhdf:
http://pysclint.sourceforge.net/pyhdf/
--
Francesc Alted
FYI,
these attributes are a superset of the High Level HDF5 library:
http://www.hdfgroup.org/HDF5/hdf5_hl/
--
Francesc Alted
On 5/14/12 3:12 PM, Anthony Scopatz wrote:
On Mon, May 14, 2012 at 3:05 PM, Francesc Alted <fal...@pytables.org> wrote:
[snip]
However, do not expect to use all your cores at full speed in these
cases, as the reductions in numexpr can only make use of one
thread
of datasets and
> the available memory, PyTables could eventually decide whether to
> perform operations in memory or in kernel.
In-memory or in-kernel? You probably mean indexed or in-kernel, right?
Yes, that's certainly another nice place for further optimizations.
--
Francesc Alted
and asked a lot of questions, especially on the compression
(Blosc) and query features.
You can find the slides here:
http://www.pytables.org/docs/PUG-Austin-2012-v3.pdf
Cheers,
--
Francesc Alted
On 4/30/12 12:08 PM, Alvaro Tejero Cantero wrote:
> Hi all,
>
> I created a table:
>
> joins.createTable('/', 'spikes', {'t20k': pt.Int32Col(), 'tetrode': pt.UInt8Col(),
> 'unit': pt.UInt8Col()}, 'Spike times')
>
> I populated it:
>
> joins.root.spikes.append(zip(np.arange(100), np.zeros(100), 3*np.
I hope it gets the appreciation and support it
> deserves!
Thanks, I also think it can be useful in some situations. But before
it sees wider use, more work should be put into the range of operations
supported. Also, defining a C API and being able to use it straight
from C could help to spread pac
se confirm if you can reproduce the problem with blosc
level 9?
Thanks!
>
> -á.
>
>
> On Tue, Apr 24, 2012 at 04:39, Anthony Scopatz wrote:
>> On Mon, Apr 23, 2012 at 9:14 PM, Francesc Alted wrote:
>>> On 4/19/12 8:43 AM, Alvaro Tejero Cantero wrote:
>>>>
carray package is not as sophisticated as HDF5,
and it only blocks in the leading dimension. In this case, it is saying
that the block is a complete row. So this is the intended behaviour.
>
> The fact that both PyTables' CArray and carray.carray are named carray
> is a bit
nt16)
This smells like a bug, but I cannot reproduce it. Could you send a
self-contained example reproducing this behavior?
--
Francesc Alted
user 1.44 s, sys: 0.11 s, total: 1.55 s
Wall time: 1.53 s
So, the 'slow' times that you are seeing are a consequence of the
different data object creation and the internal data copies (for
building the final NumPy array). NumPy is much faster because all this
process is made
hat
> it says on the paragraph.
Yes, you are right. These small amendments to the docs are best handled if you
submit a PR. With GitHub this is easy to do, and it is also very
convenient for maintainers to keep track o
rt from
> enveloping the whole program in a try-except-finally block?
>
> On Mon, Apr 2, 2012 at 9:48 PM, Francesc Alted wrote:
>> On 4/2/12 12:38 PM, Alvaro Tejero Cantero wrote:
>>> Hi,
>>>
>>> should PyTables flush on __exit__ ?
>>> https://github.
)._f_close() promises only "On
> nodes with data, it may be flushed to disk."
> https://github.com/PyTables/PyTables/blob/master/tables/node.py#L512
Yup, it does flush. The message should be more explicit about this.
--
Francesc Alted
On 3/31/12 2:13 AM, Antonio Valentino wrote:
> Hi Daπid, hi Francesc,
>
> On 31/03/2012 03:08, Francesc Alted wrote:
>> On 3/30/12 7:57 PM, Daπid wrote:
>>> Hello,
>>>
>>> I have several different kinds of data tables, absolutely independent
; KeyError: 'no such column: dough'
>
>
> Of course, my approach is not correct. Is there a valid way of doing it?
Right, subclassing IsDescription is not supported. Sorry, but I think
that the only way is to do the repeti
Uh? You mean 1 byte as a blocksize? This is certainly a bug. Could
you detail a bit more how you achieve this result? Providing an example
would be very useful.
>
> * A quick way to know how well your data will compress in PyTables if
&g
at nobody bothered to implement
this. A patch for this would be more than welcome ;)
--
Francesc Alted
t might well be that this
> optimizes the use of the memory bus (at some processing cost). But I
> am not aware of a numpy container for this.
Maybe a compressed array? That would lead to using less than 1 bit per
element in many situations. If you are interested in this, look into:
https://github
On 3/27/12 6:34 PM, Francesc Alted wrote:
> Another option that occurred to me recently is to save all your
> columns as unidimensional arrays (Array object, or, if you want
> compression, a CArray or EArray), and then use them as components of a
> boolean expression usi
columns that I would like to use as
>> query conditions.
>>
>> What do you recommend in this scenario?
>>
>> -á.
>>
>> [1]
>> http://pytables.github.com/usersguide/libref.html?highlight=vlstring#tables.Table.where
ng
the `tables.Expr` class. More on this later.
--
Francesc Alted
t ``/usr/include``, library at ``/usr/local/lib``.
This is not serious, but do you have an explanation for this?
--
Francesc Alted
On 3/22/12 1:59 PM, Francesc Alted wrote:
On 3/22/12 12:48 PM, sreeaurovindh viswanathan wrote:
But... can I sort one column descending and the other ascending?
Say, if I have two columns, first I would like to sort one in
ascending order and then sort the second column based on the
gf1[::-1] # reverse sorted
prevval = r['f0']
gf1 = []
gf1.append(r['f1'])
if gf1:
gf1.sort()
print prevval, gf1[::-1] # reverse sorted
will print the following values:
f0-val0 [decreasing list of f1 v
order
2) Table.readSorted(): retrieves the complete sorted table as a
monolithic structured array
Both methods follow ascending order by default. Use step=-1 to get
a descending order.
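A sketch of the descending case, using the modern PyTables 3.x spellings (`read_sorted` / `itersorted` rather than `readSorted`) and assuming the sort column has a completely sorted (CSI) index; the file and column names are made up.

```python
# Sketch: retrieve a table sorted by a column, descending via step=-1.
import os
import tempfile

import tables


class Row(tables.IsDescription):
    x = tables.Int32Col()


fname = os.path.join(tempfile.mkdtemp(), "sorted.h5")
with tables.open_file(fname, "w") as h5:
    tbl = h5.create_table("/", "t", Row)
    tbl.append([(v,) for v in (3, 1, 2)])
    tbl.flush()
    tbl.cols.x.create_csindex()  # full (CSI) index is required for sorting
    desc = [int(v) for v in tbl.read_sorted("x", step=-1)["x"]]
    print(desc)
```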
-- Francesc Alted
ething simple I'm missing?
Inserting into PyTables objects is not supported. The reason is that they are
implemented on top of HDF5 datasets, which do not support this either. HDF5
is meant for dealing with large datasets, and implementing insertions (or
deletions) is not an efficient operation (
ata in the big array object. Example:
>>
>> indexes = tbl.getWhereList(my_condition)
>> my_values = arr[indexes, 1:4]
>
> Ok, this is really useful (numpy.where on steroids?), because I should
> be able to reduce my original 64 columns to a few processed ones that
>
technology called indexing:
http://en.wikipedia.org/wiki/Database_index
> If it sounds like dumb to you, then let me offer to write an
> explanatory note for users in a similar case to mine, once I have
> sorted it out.
Hope things are clearer now.
Hasta luego,
-- Francesc Alted
eed to worry about such loops on the
> numpy arrays that the PyTables objects return.
Anthony is very right here. If you have very large amounts of data, you
absolutely need to get used to the iterator concept, as this allows you to run
through your entire dataset
PyTables (although it
would be nice if we could have this implemented). In case you want this, then
storing (start, end) in other table/column/nested column, would solve the
problem.
-- Francesc Alted
pressing though.
> Is there a requirement to use the MPI version of the HDF5 libraries for blosc
> to be multithreaded?
No, only the pthreads library is required. Why are you so sure that PyTables
is not using several thread
c.c overwrites
> it...
>
> On Thu, Mar 8, 2012 at 9:31 AM, Francesc Alted wrote:
> Excellent! I still have to figure out why your system does not support posix
> threads barriers properly, but most probably the patch is a good workaround
> for your case. Please feel free t
ERS) && ( (_POSIX_BARRIERS - 20012L) >= 0 &&
> _POSIX_BARRIERS != 200112L)
>
> On Mar 7, 2012, at 8:25 PM, Francesc Alted wrote:
>
>> On Mar 7, 2012, at 6:05 PM, Chris Kees wrote:
>>> On Wed, Mar 7, 2012 at 5:18 PM, Francesc Alted wrote:
>>> On