Re: RDFStream to RDFConnection

2019-07-09 Thread Claude Warren
In my case one document is 2 million triples.  I set a default batch size
of 1000 (I think -- I don't have the code in front of me), but that is
overridable as a constructor parameter.  More work is needed to determine
what the proper default batch size is.

Internally I send the triples/quads to a dataset and after the batch size
is reached (or on finish()) send the dataset to the RDFConnection.  It is a
simplistic implementation but one that seems to work for my case.
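
Roughly, from memory, the shape of the thing (not the actual code; the class name
and the use of RDFConnection.loadDataset per batch are approximations):

    import org.apache.jena.graph.Triple;
    import org.apache.jena.query.Dataset;
    import org.apache.jena.query.DatasetFactory;
    import org.apache.jena.rdfconnection.RDFConnection;
    import org.apache.jena.riot.system.StreamRDFBase;
    import org.apache.jena.sparql.core.Quad;

    /** Illustrative sketch: buffer stream items in a dataset, flush every batchSize items. */
    public class BufferingStreamRDF extends StreamRDFBase {
        private final RDFConnection conn;
        private final int batchSize;
        private Dataset buffer = DatasetFactory.createTxnMem();
        private long count = 0;

        public BufferingStreamRDF(RDFConnection conn, int batchSize) {
            this.conn = conn;
            this.batchSize = batchSize;
        }

        @Override public void triple(Triple t) {
            buffer.asDatasetGraph().getDefaultGraph().add(t);   // triples go to the default graph
            maybeFlush();
        }

        @Override public void quad(Quad q) {
            buffer.asDatasetGraph().add(q);                     // quads go to their named graph
            maybeFlush();
        }

        @Override public void finish() { flush(); }             // push whatever is left

        private void maybeFlush() { if (++count >= batchSize) flush(); }

        private void flush() {
            if (count == 0) return;
            conn.loadDataset(buffer);                           // one request per batch
            buffer = DatasetFactory.createTxnMem();
            count = 0;
        }
    }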

Claude



On Tue, Jul 9, 2019 at 11:09 AM Andy Seaborne  wrote:

> Claude,
>
> How many triples does processing one XML document produce?  There seem
> to be several ways to get a batching/buffering effect including current
> code, e.g. send the StreamRDF to a graph, then send the graph over the
> RDFConnection?
>
> One of the nuisances of HTTP is the need to have payloads that are
> correct for both request and response.  Otherwise streaming direct to
> the Fuseki server would be nice but it needs to allow for request-side
> abort. In fact, if you do a GSP request and stream the body and the
> request has a parse error it will abort, but forcing a parse error
> because the request side found a higher-level condition that means it
> wants to stop (e.g. the user presses cancel) is pretty ugly.
>
> For SPARQL 1.2, I've suggested developing a websockets protocol so that
> interactions with the server can be more sophisticated but that's a long
> way off yet.
>
>  Andy
>
> On 08/07/2019 17:56, Claude Warren wrote:
> > The case I was trying to solve was reading a largish XML document and
> > converting it to an RDF graph.  After a few iterations I ended up
> writing a
> > custom SAX parser that calls the RDFStream triple/quad methods.  But I
> > wanted a way to update a Fuseki server so RDFConnection seemed like the
> > natural choice.
> >
> > In some recent work for my employer I found that I like the RDFConnection
> > as the same code can work against a local dataset or a remote one.
> >
> > Claude
> >
> > On Mon, Jul 8, 2019 at 4:34 PM ajs6f  wrote:
> >
> >> This "replay" buffer approach was the direction I first went in for TIM,
> >> until turning to MVCC (speaking of MVCC, that code is probably
> somewhere,
> >> since we don't squash when we merge). Looking back, one thing that
> helped
> >> me move on was the potential effect of very large transactions. But in a
> >> controlled situation like Claude's, that problem wouldn't arise.
> >>
> >> ajs6f
> >>
> >>> On Jul 8, 2019, at 11:07 AM, Andy Seaborne  wrote:
> >>>
> >>> Claude,
> >>>
> >>> Good timing!
> >>>
> >>> This is what RDF Delta does, and for updates rather than just StreamRDF
> >> additions, though it's not to an RDFConnection - it's to a patch service.
> >>>
> >>> With hindsight, I wonder if that would have been better as
> >> BufferingDatasetGraph - a DSG that keeps changes and makes the view of
> the
> >> buffer and underlying DatasetGraph behave correctly (find* works and has
> >> the right cardinality of results). It's a bit fiddly to get it all right
> >> but once it works it is a building block that has a lot of reusability.
> >>>
> >>> I came across this with the SHACL work for a BufferingGraph (with
> >> prefixes) to give "abort" of transactions to simple graphs which aren't
> >> transactional.
> >>>
> >>> But it occurs in Fuseki with complex dataset setups like rules.
> >>>
> >>> Andy
> >>>
> >>> On 08/07/2019 11:09, Claude Warren wrote:
>  I have written an RDFStream to RDFConnection with caching.  Basically,
> >> the
>  stream caches triples/quads until a limit is reached and then it
> writes
>  them to the RDFConnection.  At finish it writes any triples/quads in
> the
>  cache to the RDFConnection.
>  Internally I cache the stream in a dataset.  I write triples to the
> >> default
>  dataset and quads as appropriate.
>  I have a couple of questions:
>  1) In this arrangement what does the "base" tell me? I currently
> ignore
> >> it
>  and want to make sure I haven't missed something.
> >>>
> >>> The parser saw a BASE statement.
> >>>
> >>> Like PREFIX, in Turtle, it can happen mid-file (e.g. when files are
> >> concatenated).
> >>>
> >>> It's not necessary because the data stream should have resolved IRIs in
> >> it so base is used in a stream.
> >>>
>  2) I capture all the prefix calls in a PrefixMapping that is
> accessible
>  from the RDFConnectionStream class.  They are not passed into the
> >> dataset
>  in any way.  I didn't see any method to do so and don't really think
> it
> >> is
>  needed.  Does anyone see a problem with this?
>  3) Does anyone have a use for this class?  If so I am happy to
> >> contribute
>  it, though the next question becomes what module to put it in?
> Perhaps
> >> we
>  should have an extras package for RDFStream implementations?
>  Claude
> >>
> >>
> >
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: 

[jira] [Comment Edited] (JENA-1729) A minor initialization issue

2019-07-09 Thread ssz (JIRA)


[ 
https://issues.apache.org/jira/browse/JENA-1729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880243#comment-16880243
 ] 

ssz edited comment on JENA-1729 at 7/9/19 11:13 AM:


In my case (the project avicomp/ont-map) I use the Jena initialization subsystem 
to load library graphs from system resources in order to have them as a kind of 
singleton: they are used widely in the API, which should be as fast as possible. 
Maybe loading graphs during initialization is not a very good idea, and I have to 
think about changing it somehow. But for me this is a minor issue, and it is here 
mostly for the record; the appropriate way to use the API implies calling 
`JenaSystem.init()` explicitly.
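
(For illustration, roughly what that looks like in the main() of the program quoted 
below; the explicit call is the suggested usage, nothing else changes:)

{code:java}
public static void main(String... args) {
    JenaSystem.DEBUG_INIT = true;
    JenaSystem.init();   // explicit initialization before any other Jena call
    RDFNode r = ResourceFactory.createTypedLiteral("Y");
    System.out.println(r);
}
{code}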


was (Author: szz):
In my case (the project [avicomp/ont-map|https://github.com/avicomp/ont-map]) I 
use the Jena initialization subsystem to load library graphs from system resources 
in order to have them as a kind of singleton: they are used widely in the API, 
which should be as fast as possible. Maybe loading graphs during initialization is 
not a very good idea, and I have to think about changing it somehow. But for me 
this is a minor issue, and it is here mostly for the record; the appropriate way 
to use the API implies calling `JenaSystem.init()` explicitly.

> A minor initialization issue
> ---
>
> Key: JENA-1729
> URL: https://issues.apache.org/jira/browse/JENA-1729
> Project: Apache Jena
>  Issue Type: Bug
>  Components: Core
> Environment: java8(1.8.0_152), jena-arq:3.12.0
>Reporter: ssz
>Priority: Minor
> Fix For: Jena 3.12.0
>
>
> The following one-class program fails with an assertion error:
>  
> {code:java}
> package xx.yy;
>
> import org.apache.jena.rdf.model.RDFNode;
> import org.apache.jena.rdf.model.ResourceFactory;
> import org.apache.jena.sys.JenaSubsystemLifecycle;
> import org.apache.jena.sys.JenaSystem;
> import org.apache.jena.vocabulary.RDF;
>
> public class InitTest implements JenaSubsystemLifecycle {
>     @Override
>     public void start() {
>         if (JenaSystem.DEBUG_INIT)
>             System.err.println("InitTEST -- start");
>         assert RDF.type != null : "RDF#type is null => attempt to load a graph here will fail";
>     }
>
>     @Override
>     public void stop() {
>         if (JenaSystem.DEBUG_INIT)
>             System.err.println("InitTEST -- finish");
>     }
>
>     @Override
>     public int level() {
>         return 500;
>     }
>
>     public static void main(String... args) { // run VM option: -ea
>         JenaSystem.DEBUG_INIT = true;
>         //RDFNode r = ResourceFactory.createProperty("X"); // this works fine
>         RDFNode r = ResourceFactory.createTypedLiteral("Y"); // this causes a problem
>         System.out.println(r);
>     }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (JENA-1728) Fuseki Assembler ignore ja:rulesFrom on Error

2019-07-09 Thread Andy Seaborne (JIRA)


[ 
https://issues.apache.org/jira/browse/JENA-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16881126#comment-16881126
 ] 

Andy Seaborne commented on JENA-1728:
-

One step here might be to have Fuseki configuration processing skip a broken 
dataset configuration rather than throw a deep exception and refuse to start up.

Thoughts?


> Fuseki Assembler ignore ja:rulesFrom on Error
> -
>
> Key: JENA-1728
> URL: https://issues.apache.org/jira/browse/JENA-1728
> Project: Apache Jena
>  Issue Type: Improvement
>  Components: Fuseki
>Affects Versions: Jena 3.12.0
> Environment: GNU/Linux (Debian)
>Reporter: tdbrec
>Priority: Major
>  Labels: Assembly, fuseki2, inference, reasoner, ruleengine
>
> {code:java}
> :dataset a ja:InfModel ;
>     ja:baseModel <...> ;
>     ja:reasoner [
>         ja:reasonerURL <...> ;
>         ja:rulesFrom <...> ;
>         ja:rulesFrom <...> ;
>     ] .
> {code}
> If one of the ja:rulesFrom files contains syntax errors, Fuseki stops working. 
> It would be useful to have a way for "loading or ignoring" rules, for example 
> ja:rulesOrIgnoreFrom <...>
> My use case is that I'm accepting inference rules from users, and the only 
> way to update inference rules is by writing them to a file, appending a new 
> ja:rulesFrom in the configuration, and reloading Fuseki. Even though this 
> process is pretty cumbersome for updating rules, at least it's doable and I'm 
> OK with that. The major stopper is that there isn't a way to validate rules, 
> so when I ask Fuseki to load a broken file it will refuse to work until I fix 
> the file manually.
> A different option could be a new "ja:rulesFromDirectory" that would 
> automatically load all files inside a directory, ignoring any file that raises 
> an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (JENA-1728) Fuseki Assembler ignore ja:rulesFrom on Error

2019-07-09 Thread Andy Seaborne (JIRA)


[ 
https://issues.apache.org/jira/browse/JENA-1728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16881125#comment-16881125
 ] 

Andy Seaborne commented on JENA-1728:
-

FYI: "the only way to update inference rules  .. reload Fuseki"

In the "main" server version of Fuseki, the app can add and delete services and 
configured datasets while the server is running.

> Fuseki Assembler ignore ja:rulesFrom on Error
> -
>
> Key: JENA-1728
> URL: https://issues.apache.org/jira/browse/JENA-1728
> Project: Apache Jena
>  Issue Type: Improvement
>  Components: Fuseki
>Affects Versions: Jena 3.12.0
> Environment: GNU/Linux (Debian)
>Reporter: tdbrec
>Priority: Major
>  Labels: Assembly, fuseki2, inference, reasoner, ruleengine
>
> {code:java}
> :dataset a ja:InfModel ;
>     ja:baseModel <...> ;
>     ja:reasoner [
>         ja:reasonerURL <...> ;
>         ja:rulesFrom <...> ;
>         ja:rulesFrom <...> ;
>     ] .
> {code}
> If one of the ja:rulesFrom files contains syntax errors, Fuseki stops working. 
> It would be useful to have a way for "loading or ignoring" rules, for example 
> ja:rulesOrIgnoreFrom <...>
> My use case is that I'm accepting inference rules from users, and the only 
> way to update inference rules is by writing them to a file, appending a new 
> ja:rulesFrom in the configuration, and reloading Fuseki. Even though this 
> process is pretty cumbersome for updating rules, at least it's doable and I'm 
> OK with that. The major stopper is that there isn't a way to validate rules, 
> so when I ask Fuseki to load a broken file it will refuse to work until I fix 
> the file manually.
> A different option could be a new "ja:rulesFromDirectory" that would 
> automatically load all files inside a directory, ignoring any file that raises 
> an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: SHACL

2019-07-09 Thread Andy Seaborne
I'd like to offer it to Jena if the community wants it and will engage 
with it.  The Jena development cycle isn't a problem here.


https://github.com/afs/morph

The engine works on Graph/Triple/Node - mainly for efficiency reasons 
(not entirely proven).


I've ended up with a small library to get a little part of the 
navigation style that the Model API has. What I want to avoid is 
creating small intermediate Java objects - if that's on the inside of 
processing loops, it seems to impact performance (CPU cache issues, GC 
has to do some work even if it is only in the young generation, etc.). 
I want Java value types (data classes and sealed types; Project 
Valhalla).  http://cr.openjdk.java.net/~briangoetz/amber/datum.html



A library isn't ideal - in fine-grained work, operations like "get 
the object, given subject and predicate" can happen a lot.  Some storage 
does not keep Triple objects, and creating a triple to return S/P/O just 
to pull out the O, when S/P are already fixed, does seem a little crazy.


So the simplicity of all access being find(s,p,o) does have a cost which 
normally isn't important but for graph algorithms (in a general sense) 
every little cost can add up.


And that also implies streams are not always the way to go.  Creating a 
stream allocates a few Java objects, and when the operation is just "get 
a single value" that cost really can kick in. Think Graph.contains. Does 
your experience with CommonsRDF give any insight here?


I do think we should add stream(s,p,o) to Graph.

Maybe also some (a few - not going over the top) accessors like "getSP -> 
Object".
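
Something like that could be built over find() today - a rough sketch (the name 
getSP and the first-match behaviour are just for illustration):

    import org.apache.jena.graph.Graph;
    import org.apache.jena.graph.Node;
    import org.apache.jena.graph.Triple;
    import org.apache.jena.util.iterator.ExtendedIterator;

    /** Sketch of a "getSP -> Object" style accessor built on find(). */
    public final class GraphAccess {
        private GraphAccess() {}

        /** One object for (s, p), or null if there is none. */
        public static Node getSP(Graph graph, Node s, Node p) {
            ExtendedIterator<Triple> it = graph.find(s, p, Node.ANY);
            try {
                return it.hasNext() ? it.next().getObject() : null;
            } finally {
                it.close();   // release the iterator over the storage
            }
        }
    }

Of course that still creates the intermediate Triple, which is exactly the cost 
in question - a native accessor on Graph could avoid it.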


This is still not Model - that has a lot more, like the polymorphism.

Andy

On 08/07/2019 16:28, Aaron Coburn wrote:

Thanks, Andy. I will likely have some data+shape resources in the coming
weeks/months that I would like to test. Are there plans to add this code to
Jena itself, or do you anticipate that it will be part of a separate
repository?

Best,
Aaron

On Mon, 8 Jul 2019 at 10:58, Andy Seaborne  wrote:


I've got a SHACL validation engine working - it covers both the core and
SPARQL constraints of the W3C specification.

If anyone has data+shapes, I'll happily use them to run further tests.

Status: passes the WG test suite except for some in
std/sparql/pre-binding/. Optional $shapesGraph and $currentShape are not
supported (more below) and the "unsupported" tests in pre-binding (some
of the rules seem overly restrictive) aren't run.

AKA All valid shapes work, invalid shapes are "what you can get away
with".  This is for future flexibility :-)

None of the non-spec SHACL-AF is covered.

API:

As well as the operations to validate a graph using a given shapes graph
(command line or API), there is also a graph that rejects non-conforming
data in a graph transaction.

Datasets:

SHACL is defined to validate a single graph. To extend to validation of
a dataset, just one set of shapes for all graphs seems a little
restrictive.

Some ideas -- https://afs.github.io/shacl-datasets.html

$shapesGraph is for the case where data and shapes are in one dataset -
I'm not sure that's a very good idea because it imposes conditions on
extending SHACL to data datasets.

Opportunities:

There are possibilities for further work for deeper integration into
the dataset update path:

* Parallel execution - some shapes can be applied to an update stream
without reference to the data so can be done on a separate thread
outside the transaction.

* Restricting the validation work needed - for some shapes
(not all, but it is a static analysis of shapes to determine which)
the updates can be tracked to only validate changes. There are ways to
write shapes that (1) apply globally to the data or (2) have indirect
data changes where just looking at the data does not tell you if a shape
might now report violations.

There is some prototyping done but I got sidetracked by shacl-datasets.html

  Andy





Re: RDFStream to RDFConnection

2019-07-09 Thread Andy Seaborne

Claude,

How many triples does processing one XML document produce?  There seem 
to be several ways to get a batching/buffering effect including current 
code, e.g. send the StreamRDF to a graph, then send the graph over the 
RDFConnection?
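
For example, something along these lines (an untested sketch - the endpoint URL is
a placeholder, and whatever drives the StreamRDF goes where the comment is):

    import org.apache.jena.graph.Graph;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdfconnection.RDFConnection;
    import org.apache.jena.rdfconnection.RDFConnectionFactory;
    import org.apache.jena.riot.system.StreamRDF;
    import org.apache.jena.riot.system.StreamRDFLib;
    import org.apache.jena.sparql.graph.GraphFactory;

    public class GraphThenLoad {
        public static void main(String[] args) {
            Graph graph = GraphFactory.createDefaultGraph();
            StreamRDF sink = StreamRDFLib.graph(graph);   // stream events accumulate in the graph
            // ... drive the custom SAX parser (or any other producer) with 'sink' here ...
            try (RDFConnection conn = RDFConnectionFactory.connect("http://localhost:3030/ds")) {
                conn.load(ModelFactory.createModelForGraph(graph));   // one request for the whole graph
            }
        }
    }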


One of the nuisances of HTTP is the need to have payloads that are 
correct for both request and response.  Otherwise streaming direct to 
the Fuseki server would be nice but it needs to allow for request-side 
abort. In fact, if you do a GSP request and stream the body and the 
request has a parse error it will abort, but forcing a parse error 
because the request side found a higher-level condition that means it 
wants to stop (e.g. the user presses cancel) is pretty ugly.


For SPARQL 1.2, I've suggested developing a websockets protocol so that 
interactions with the server can be more sophisticated but that's a long 
way off yet.


Andy

On 08/07/2019 17:56, Claude Warren wrote:

The case I was trying to solve was reading a largish XML document and
converting it to an RDF graph.  After a few iterations I ended up writing a
custom SAX parser that calls the RDFStream triple/quad methods.  But I
wanted a way to update a Fuseki server so RDFConnection seemed like the
natural choice.

In some recent work for my employer I found that I like the RDFConnection
as the same code can work against a local dataset or a remote one.
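
For example (a sketch only - the endpoint URL and file name are placeholders):

    import org.apache.jena.query.Dataset;
    import org.apache.jena.query.DatasetFactory;
    import org.apache.jena.rdfconnection.RDFConnection;
    import org.apache.jena.rdfconnection.RDFConnectionFactory;

    public class LocalOrRemote {
        // The same code path works for both; only how the connection is made differs.
        static void loadSomething(RDFConnection conn) {
            conn.load("data.ttl");
        }

        public static void main(String[] args) {
            Dataset local = DatasetFactory.createTxnMem();
            try (RDFConnection conn = RDFConnectionFactory.connect(local)) {
                loadSomething(conn);                            // local, in-memory dataset
            }
            try (RDFConnection conn = RDFConnectionFactory.connect("http://localhost:3030/ds")) {
                loadSomething(conn);                            // remote Fuseki endpoint
            }
        }
    }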

Claude

On Mon, Jul 8, 2019 at 4:34 PM ajs6f  wrote:


This "replay" buffer approach was the direction I first went in for TIM,
until turning to MVCC (speaking of MVCC, that code is probably somewhere,
since we don't squash when we merge). Looking back, one thing that helped
me move on was the potential effect of very large transactions. But in a
controlled situation like Claude's, that problem wouldn't arise.

ajs6f


On Jul 8, 2019, at 11:07 AM, Andy Seaborne  wrote:

Claude,

Good timing!

This is what RDF Delta does, and for updates rather than just StreamRDF

additions, though it's not to an RDFConnection - it's to a patch service.


With hindsight, I wonder if that would have been better as

BufferingDatasetGraph - a DSG that keeps changes and makes the view of the
buffer and underlying DatasetGraph behave correctly (find* works and has
the right cardinality of results). It's a bit fiddly to get it all right
but once it works it is a building block that has a lot of reusability.


I came across this with the SHACL work for a BufferingGraph (with

prefixes) to give "abort" of transactions to simple graphs which aren't
transactional.


But it occurs in Fuseki with complex dataset setups like rules.

Andy

On 08/07/2019 11:09, Claude Warren wrote:

I have written an RDFStream to RDFConnection with caching.  Basically,

the

stream caches triples/quads until a limit is reached and then it writes
them to the RDFConnection.  At finish it writes any triples/quads in the
cache to the RDFConnection.
Internally I cache the stream in a dataset.  I write triples to the

default

dataset and quads as appropriate.
I have a couple of questions:
1) In this arrangement what does the "base" tell me? I currently ignore

it

and want to make sure I haven't missed something.


The parser saw a BASE statement.

Like PREFIX, in Turtle, it can happen mid-file (e.g. when files are

concatenated).


It's not necessary because the data stream should have resolved IRIs in

it so base is used in a stream.



2) I capture all the prefix calls in a PrefixMapping that is accessible
from the RDFConnectionStream class.  They are not passed into the

dataset

in any way.  I didn't see any method to do so and don't really think it

is

needed.  Does anyone see a problem with this?
3) Does anyone have a use for this class?  If so I am happy to

contribute

it, though the next question becomes what module to put it in?  Perhaps

we

should have an extras package for RDFStream implementations?
Claude







Re: RDFStream to RDFConnection

2019-07-09 Thread Claude Warren
So, the question is: should I go ahead and create a library of StreamRDF
implementations in the extras section?  I could see one that does serialization
over Kafka (or other queue implementations), for example.
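
As a rough sketch of the kind of thing I mean (class name, per-batch N-Triples
payload and topic handling are all just illustrative choices):

    import java.io.ByteArrayOutputStream;

    import org.apache.jena.graph.Graph;
    import org.apache.jena.graph.Triple;
    import org.apache.jena.riot.Lang;
    import org.apache.jena.riot.RDFDataMgr;
    import org.apache.jena.riot.system.StreamRDFBase;
    import org.apache.jena.sparql.graph.GraphFactory;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    /** Sketch: publish batches of triples to a Kafka topic as N-Triples payloads. */
    public class KafkaStreamRDF extends StreamRDFBase {
        private final Producer<String, byte[]> producer;
        private final String topic;
        private final int batchSize;
        private Graph batch = GraphFactory.createDefaultGraph();

        public KafkaStreamRDF(Producer<String, byte[]> producer, String topic, int batchSize) {
            this.producer = producer;
            this.topic = topic;
            this.batchSize = batchSize;
        }

        @Override public void triple(Triple triple) {
            batch.add(triple);
            if (batch.size() >= batchSize) send();
        }

        @Override public void finish() { send(); }   // flush the final partial batch

        private void send() {
            if (batch.isEmpty()) return;
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            RDFDataMgr.write(out, batch, Lang.NTRIPLES);            // serialize the batch
            producer.send(new ProducerRecord<>(topic, out.toByteArray()));
            batch = GraphFactory.createDefaultGraph();
        }
    }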

On Mon, Jul 8, 2019 at 5:56 PM Claude Warren  wrote:

> The case I was trying to solve was reading a largish XML document and
> converting it to an RDF graph.  After a few iterations I ended up writing a
> custom SAX parser that calls the RDFStream triple/quad methods.  But I
> wanted a way to update a Fuseki server so RDFConnection seemed like the
> natural choice.
>
> In some recent work for my employer I found that I like the RDFConnection
> as the same code can work against a local dataset or a remote one.
>
> Claude
>
> On Mon, Jul 8, 2019 at 4:34 PM ajs6f  wrote:
>
>> This "replay" buffer approach was the direction I first went in for TIM,
>> until turning to MVCC (speaking of MVCC, that code is probably somewhere,
>> since we don't squash when we merge). Looking back, one thing that helped
>> me move on was the potential effect of very large transactions. But in a
>> controlled situation like Claude's, that problem wouldn't arise.
>>
>> ajs6f
>>
>> > On Jul 8, 2019, at 11:07 AM, Andy Seaborne  wrote:
>> >
>> > Claude,
>> >
>> > Good timing!
>> >
>> > This is what RDF Delta does, and for updates rather than just StreamRDF
>> additions, though it's not to an RDFConnection - it's to a patch service.
>> >
>> > With hindsight, I wonder if that would have been better as
>> BufferingDatasetGraph - a DSG that keeps changes and makes the view of the
>> buffer and underlying DatasetGraph behave correctly (find* works and has
>> the right cardinality of results). It's a bit fiddly to get it all right
>> but once it works it is a building block that has a lot of reusability.
>> >
>> > I came across this with the SHACL work for a BufferingGraph (with
>> prefixes) to give "abort" of transactions to simple graphs which aren't
>> transactional.
>> >
>> > But it occurs in Fuseki with complex dataset setups like rules.
>> >
>> >Andy
>> >
>> > On 08/07/2019 11:09, Claude Warren wrote:
>> >> I have written an RDFStream to RDFConnection with caching.  Basically,
>> the
>> >> stream caches triples/quads until a limit is reached and then it writes
>> >> them to the RDFConnection.  At finish it writes any triples/quads in
>> the
>> >> cache to the RDFConnection.
>> >> Internally I cache the stream in a dataset.  I write triples to the
>> default
>> >> dataset and quads as appropriate.
>> >> I have a couple of questions:
>> >> 1) In this arrangement what does the "base" tell me? I currently
>> ignore it
>> >> and want to make sure I haven't missed something.
>> >
>> > The parser saw a BASE statement.
>> >
>> > Like PREFIX, in Turtle, it can happen mid-file (e.g. when files are
>> concatenated).
>> >
>> > It's not necessary because the data stream should have resolved IRIs in
>> it so base is used in a stream.
>> >
>> >> 2) I capture all the prefix calls in a PrefixMapping that is accessible
>> >> from the RDFConnectionStream class.  They are not passed into the
>> dataset
>> >> in any way.  I didn't see any method to do so and don't really think
>> it is
>> >> needed.  Does anyone see a problem with this?
>> >> 3) Does anyone have a use for this class?  If so I am happy to
>> contribute
>> >> it, though the next question becomes what module to put it in?
>> Perhaps we
>> >> should have an extras package for RDFStream implementations?
>> >> Claude
>>
>>
>
> --
> I like: Like Like - The likeliest place on the web
> 
> LinkedIn: http://www.linkedin.com/in/claudewarren
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren