On 09/29/2015 10:08 PM, Robert Hanson wrote:
> I have an idea. Let's call the "associative array" a "map" just so we
> don't have so many "array" words.
>
I would prefer "hash", because that term is used in other languages like 
Perl and I am used to it. I only used "associative array" to match the 
name used in the description of the new system.

> On Tue, Sep 29, 2015 at 12:44 PM, Rolf Huehne <rhue...@fli-leibniz.de
> <mailto:rhue...@fli-leibniz.de>> wrote:
>
>     On 09/28/2015 06:46 PM, Robert Hanson wrote:
>     The multi-level combinations C) to F) have in common that the key
>     restrictions from 1) act on the same level as the keys of the phrase
>     from 3). In contrast, in the multi-level combinations G) and H) the
>     levels are mixed. That doesn't seem very consistent to me; at least
>     it seems counterintuitive.
>
>
>   G and H simply look one level deeper into a map
>
>     Even if you don't agree, there are two missing cases where the levels
>     are not mixed. Assuming "consistent" behaviour, they would look
>     like this:
>
>     ---- Example Code ------------
>     snpInfo = {"rs1229984": [resno: 48,
>                                from: "R",
>                                to: "H"],
>                  "rs1041969": [resno: 57,
>                                from: "N",
>                                to: "K"]
>                  };
>
>     print "=== Case 1 ====";
>     print snpInfo.select("from,to WHEREIN to='H'").format("JSON");
>
>
> The keys are always for the top-level map. Here you are trying to bypass
> a map level with the keys. That's not in the schema and should not be
> intuitive. The closest you can come is
>
> print snpInfo.select("* WHEREIN to='H' ")
>
> and just accept that resno is still there.
>
>     print "=== Case 2 ====";
>     print snpInfo.select("(resno) WHEREIN to='H'");
>
>
> Same here -- you are trying to bypass a map level with (resno). No one
> has suggested doing that.
>
>
Bob, it seems that we have different perceptions of hashes at different 
levels.

For me, the first-level hash is usually just like a standard array. It 
doesn't contain the actual data directly; the data lives in second-level 
hashes. The only crucial difference from a standard array is that the 
index of an element is foreseeable (if the hash keys are designed 
properly). This makes it possible to avoid duplicates automatically, 
without any performance penalty that grows with array size, while the 
array is built or expanded. It also allows direct access to any dataset 
if the key properties are already known. So changing the data structure 
to a standard array is not an option for me.
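To make the point concrete, here is a small Python analogue of that structure (Python only for illustration; the names mirror the Jmol example above): a top-level hash keyed by SNP id, with the actual data in second-level hashes.

```python
# Top-level hash keyed by SNP id; actual data in second-level hashes.
snp_info = {}

def add_snp(rsid, resno, frm, to):
    # Re-adding the same rsid overwrites instead of duplicating,
    # with no scan of existing entries (constant-time insert).
    snp_info[rsid] = {"resno": resno, "from": frm, "to": to}

add_snp("rs1229984", 48, "R", "H")
add_snp("rs1041969", 57, "N", "K")
add_snp("rs1229984", 48, "R", "H")  # duplicate: silently merged

# Direct access by key, without searching through the "array":
print(snp_info["rs1041969"]["resno"])  # 57
print(len(snp_info))                   # 2, not 3
```

With a standard array, the same deduplication would require scanning all existing entries on every insert.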

So for me it is not "bypassing a map level". It is just the same as if 
the first level were a standard array.

A solution could be to implement an extra keyword that lets the user 
specify whether the first hash found should be treated like a hash or 
like a standard array.
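The intended semantics of such a keyword could be sketched like this (a hypothetical illustration in Python, not Jmol code): treating the top-level hash like a standard array means filtering and projecting over its *values*, so a query such as snpInfo.select("(resno) WHEREIN to='H'") would pick the resno fields of matching second-level hashes.

```python
snp_info = {
    "rs1229984": {"resno": 48, "from": "R", "to": "H"},
    "rs1041969": {"resno": 57, "from": "N", "to": "K"},
}

def select_as_array(info, field, **where):
    # Treat the top-level hash as a plain array of its values:
    # keep entries matching all WHEREIN-style conditions, then
    # project out the requested field.
    return [entry[field]
            for entry in info.values()
            if all(entry[k] == v for k, v in where.items())]

print(select_as_array(snp_info, "resno", to="H"))  # [48]
```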

Regards,
Rolf

-- 

Rolf Huehne
Postdoc

Leibniz Institute for Age Research - Fritz Lipmann Institute (FLI)
Beutenbergstrasse 11
07745 Jena, Germany

Phone:   +49 3641 65 6205
Fax:     +49 3641 65 6210
E-Mail:  rhue...@fli-leibniz.de
Website: http://www.fli-leibniz.de

           Scientific Director: Prof. Dr. K. Lenhard Rudolph
        Head of Administration: Dr. Daniele Barthel
Chairman of Board of Trustees: Dennys Klein

VAT No: DE 153 925 464
Register of Associations: No. 230296, Amtsgericht Jena
Tax Number: 162/141/08228


------------------------------------------------------------------------------
_______________________________________________
Jmol-users mailing list
Jmol-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/jmol-users
