Re: [MarkLogic Dev General] fn:current-dateTime()

2017-01-05 Thread Wayne Feick
If you're trying to figure out how long things take to run, you could 
use xdmp:elapsed-time().


   http://docs.marklogic.com/xdmp:elapsed-time

Be aware that lazy/delayed evaluation can lead to confusing results. 
You can avoid that with xdmp:eager().


   http://docs.marklogic.com/xdmp:eager
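
A minimal sketch of using the two together (the cts:search expression is hypothetical; xdmp:eager() forces evaluation so the elapsed time reflects work actually done rather than a deferred lazy value):

```xquery
(: Force evaluation of a possibly-lazy expression before reading the clock. :)
let $result := xdmp:eager(cts:search(fn:doc(), cts:word-query("example")))
(: Elapsed time (xs:dayTimeDuration) since this query started. :)
let $elapsed := xdmp:elapsed-time()
return ($elapsed, fn:count($result))
```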



On 01/05/2017 09:44 AM, David Lee wrote:


This is correct functionality.

Within a single XQuery statement fn:current-dateTime() will return the 
same value. This is defined in the W3C XQuery specifications.  The 
specifications do not cover the concept of multiple 'statements' or 
'transactions' or spawn/eval/invoke etc -- those are higher level 
vendor specific concepts.  The behavior you are seeing is exactly what 
should be expected.


Within any 'statement' fn:current-dateTime() is guaranteed to be the 
same; in different statements it may differ (depending on the precision 
of the time format and how fast the statements execute).


A minimal case to try is this:

- Single XQuery statement:

fn:current-dateTime(),

xdmp:sleep(1000),

fn:current-dateTime()

->>> This will produce the same value for both calls

vs

- Multiple XQuery statements (note ";" vs ","):

fn:current-dateTime();

xdmp:sleep(1000);

fn:current-dateTime();

-> The second time will be approx. 1 second greater than the first.

*From:*general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of *sweet frd

*Sent:* Thursday, January 05, 2017 2:00 AM
*To:* MarkLogic Developer Discussion 
*Subject:* Re: [MarkLogic Dev General] fn:current-dateTime()

Hi All,

Please see the code snippet below, in which I created an invoked 
module XQuery with the same fn:current-dateTime() logging and 
returning the result.


As a result I can see that the timestamp is different:

   1. when another XQuery module is called using invoke

   2. when another XQuery module is called using spawn

   3. when using the spawn function for both the local and the other 
module XQuery function


Sample:

import module namespace check = "http://marklogic.com/check" at 
"/a/common/check_now.xqy";

declare function local:dateTime(){
  (
    fn:current-dateTime(),
    xdmp:log(("local fun", fn:current-dateTime()))
  )
};

let $check := fn:current-dateTime()
let $time := cts:uri-match("*")
let $time := cts:search(doc(),())
let $check1 := xdmp:invoke("/a/common/now.xqy")
let $check11 := xdmp:spawn("/a/common/now.xqy")
let $check2 := fn:current-dateTime()
let $check3 := local:dateTime()
let $check4 := check:dateTime()
let $check5 := xdmp:spawn-function(function(){local:dateTime()})
let $check6 := xdmp:spawn-function(function(){check:dateTime()})
return
("Result",$check,"aaa",$check1,"aaa11",$check11,"bbb",$check2,"ccc",$check3,
 "ddd",$check4,"spawn eee",$check5,"fff",$check6)


Verify the result and the MarkLogic logs.

Regards,

N. Sumathi.

On Thu, Nov 17, 2016 at 6:53 AM, Florent Georges wrote:


Hi,

The current date and time is the same through the evaluation of the
entire query.  So if "somemodule.xqy" is a library module, imported in
the main module, that will give you the exact same value.

If it is not a library module, and you invoke, spawn, schedule a task,
eval or anything else, it depends on that "anything else"...

Regards,

--
Florent Georges
http://fgeorges.org/
http://h2o.consulting/ - New website!



On 17 November 2016 at 12:46, sweet frd wrote:
> Hi All,
>
> I have a module somemodule.xqy which has the following line for
logic
>
> fn:current-dateTime()
>
>
> Will there be a difference in output for the below scenarios
>
> (a) Invoke the above xquery (somemodule.xqy) in main module
>
> (b) Directly invoke the function fn:current-dateTime() in the
main module
>
>
> Regards,
> N. Sumathi.
>

> ___
> General mailing list
> General@developer.marklogic.com

> Manage your subscription at:
> http://developer.marklogic.com/mailman/listinfo/general
>


Re: [MarkLogic Dev General] dls version major vs minor

2016-02-10 Thread Wayne Feick
You can maintain other metadata in document properties, but DLS's notion
of versioning is linear (i.e. once you've transitioned 1.0->1.1->2.0,
you can't go back and create 1.2). I'm not sure whether that's in line
with what you're thinking.

Wayne.
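
If a major/minor label is only needed for display or search, one workaround is to track it yourself in document properties alongside DLS's linear history. A rough sketch, assuming a made-up property name and document URI:

```xquery
declare namespace my = "http://example.com/versioning";

(: Attach a human-facing version label to the document's properties.
   DLS itself still numbers versions linearly (1.0 -> 1.1 -> 2.0). :)
xdmp:document-set-property("/docs/contract.xml",
  <my:version-label>1.2</my:version-label>)
```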


On 02/10/2016 04:43 AM, vikas.sin...@cognizant.com wrote:
>
> Hi ,
>
>  
>
> I am planning to use dls library for versioning  the document .
>
> Is it possible to  create major and minor version of document.
>
>  
>
> Example: Current version of document is 1.0
>
> If I do minor version of document it should create 1.1
>
> If I do major version of document it should create 2.0
>
>  
>
> Regards,
>
> Vikas Singh
>
>  
>
> This e-mail and any files transmitted with it are for the sole use of
> the intended recipient(s) and may contain confidential and privileged
> information. If you are not the intended recipient(s), please reply to
> the sender and destroy all copies of the original message. Any
> unauthorized review, use, disclosure, dissemination, forwarding,
> printing or copying of this email, and/or any action taken in reliance
> on the contents of this e-mail is strictly prohibited and may be
> unlawful. Where permitted by applicable law, this e-mail and other
> e-mail communications sent to and from Cognizant e-mail addresses may
> be monitored.
>
>

-- 
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com

This e-mail and any accompanying attachments are confidential. The information
is intended solely for the use of the individual to whom it is addressed. Any
review, disclosure, copying, distribution, or use of this e-mail communication
by others is strictly prohibited. If you are not the intended recipient, please
notify us immediately by returning this message to the sender and delete all
copies. Thank you for your cooperation.



Re: [MarkLogic Dev General] Marklogic hosting options?

2016-01-07 Thread Wayne Feick
 1. I wouldn't expect a private data center cluster to perform well with
remote AWS storage. You wouldn't be able to use EBS, so you'd either
need to use S3 (with the drawback that you can't journal), or NFS.
Today, NFS is only possible with EC2 so you'd have instance costs
anyway. Soon they will be offering a managed NFS service, but I'm
not sure if their managed service will be accessible from elsewhere.
 2. A 3-node MarkLogic cluster running against a single SAN is a normal
configuration. The SAN should provide high performance, and be
configured for high-availability as much as possible to reduce the
risk of a single point of failure.
 3. We expect high-bandwidth and low-latency between hosts in a cluster.
In the case of AWS availability zones, best practice is to spread a
cluster across 3 zones, and Amazon places those zones in different
physical locations. You could build multi-site private
infrastructure similar to what Amazon has built, but when you start
talking about spreading those sites across different states I would
be concerned about achieving sufficiently high-bandwidth and
low-latency between them. I would not proceed down that path unless
you are a black-belt who is experienced at running with knives, and
even then I would think long and hard about alternatives. An
alternative architecture would be to have a separate cluster at each
location, and use database replication to mirror state to the other
clusters.




On 01/07/2016 01:58 PM, Dennis Garlick wrote:
> Hi,
>
> Without going through all of the time and expense to test various
> options, I’m wondering what are the possible drawbacks (or even
> feasibility) of using the following to host a Marklogic environment:
>
> •   Is it feasible to use Amazon Web Services just for storage,
> while the server is on premises (as opposed to having the server in the
> cloud as well)? I’m guessing this is possible, but would it really hurt
> performance?
> •   If you have a 3-server Marklogic cluster, does it make sense for
> them to connect to a single SAN storage, or should they each have their
> own SAN storage?
> •   Is it feasible to have a cluster where nodes in the cluster are
> located in different locations such as different states (assuming that
> data on one node will not be replicated on the other nodes)? Or would
> performance demands mean that the servers of a cluster should ideally
> (or preferably) reside in the same data center?
>
> Thanks,
>
> Dennis



Re: [MarkLogic Dev General] Re-index a replica database

2015-12-01 Thread Wayne Feick
No, if you deconfigure replication, reindex the replica, and then
reconfigure replication, the master will resynchronize the replica and in
doing so discard all the reindexed fragments and replace them with what
it had in the first place.



On 12/01/2015 02:07 PM, David Gorbet wrote:
> Hmm, someone can correct me if I'm wrong, but if you disable replication, you 
> should then be able to reindex the erstwhile replica cluster, then 
> re-enable replication. I just don't think you can reindex it while it's 
> acting as replica cluster. I defer to Wayne of course on this if I'm wrong...
>
> -Original Message-
> From: general-boun...@developer.marklogic.com 
> [mailto:general-boun...@developer.marklogic.com] On Behalf Of Whitby, Rob
> Sent: Tuesday, December 01, 2015 2:03 PM
> To: MarkLogic Developer Discussion 
> Subject: Re: [MarkLogic Dev General] Re-index a replica database
>
> Our setup consists of a master cluster and 2 slave clusters. We got 
> replacement hardware for the slaves, and configured them for db replication, 
> so had 4 slaves temporarily. We then realised that the timezone on the new 
> slaves was wrong, so updated it and restarted MarkLogic, but the indexes had 
> already been created with the wrong timezone causing incorrect results from 
> date range queries.
>
> I was after a way to reindex just the new slaves rather than having to 
> reindex the master cluster which I think (?) would have caused a complete 
> replication to all 4 slaves. In the end I deleted the replication config on 
> the new slaves, cleared the db, then added the replication config back.
>
> Also as it's possible to have different index config on master and slaves, I 
> thought reindexing a slave would be possible (the admin interface doesn't 
> help here btw - you can still click reindex but nothing happens).
>
> Cheers
> Rob
>
>
> 
> From: general-boun...@developer.marklogic.com 
> [general-boun...@developer.marklogic.com] on behalf of David Gorbet 
> [david.gor...@marklogic.com]
> Sent: 01 December 2015 19:43
> To: MarkLogic Developer Discussion
> Subject: Re: [MarkLogic Dev General] Re-index a replica database
>
> Out of interest, what's the scenario here? Thx.
>
> From: general-boun...@developer.marklogic.com 
> [mailto:general-boun...@developer.marklogic.com] On Behalf Of Wayne Feick
> Sent: Tuesday, December 01, 2015 11:06 AM
> To: general@developer.marklogic.com
> Subject: Re: [MarkLogic Dev General] Re-index a replica database
>
> No, there is no way to reindex a replica database.
>
> Wayne.
>
> On 11/25/2015 04:40 AM, Whitby, Rob wrote:
> Hi,
>
> Is it possible to force re-index a replica database? Ideally I'd like to do 
> it without re-indexing the master db and having it replicate all the content 
> over.
>
> Thanks
> Rob
>
>
>
>


Re: [MarkLogic Dev General] Re-index a replica database

2015-12-01 Thread Wayne Feick
No, there is no way to reindex a replica database.

Wayne.


On 11/25/2015 04:40 AM, Whitby, Rob wrote:
> Hi,
>
> Is it possible to force re-index a replica database? Ideally I’d like
> to do it without re-indexing the master db and having it replicate all
> the content over.
>
> Thanks
> Rob
>
>


Re: [MarkLogic Dev General] Problem with verifying SSL certificates

2015-10-28 Thread Wayne Feick
Did you place the certificate authority that signed the certificate into
your security database as a trusted authority? If not, you can do that
through the Admin UI.

Wayne.
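
For scripted setups, the same thing can likely be done with the pki library rather than the Admin UI. A sketch, assuming the CA is a PEM file on disk; check the exact pki:insert-trusted-certificates signature against the docs, and note it must run against the Security database:

```xquery
xquery version "1.0-ml";
import module namespace pki = "http://marklogic.com/xdmp/pki"
    at "/MarkLogic/pki.xqy";

(: Read a PEM-encoded CA certificate and register it as trusted. :)
pki:insert-trusted-certificates(
  xdmp:document-get("/tmp/ca.pem",
    <options xmlns="xdmp:document-get"><format>text</format></options>))
```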


On 10/28/2015 09:40 AM, Short, Brendan wrote:
> Hello all,
>
> I’m attempting to make an API call using xdmp:http-get and having a
> problem with the SSL certificate verification. Here’s the code I’m using:
>
> let $accountInfo := xdmp:http-get($baseURI,
>   <options xmlns="xdmp:http">
>     <authentication method="basic">
>       <username>username</username>
>       <password>password</password>
>     </authentication>
>     <verify-cert>true</verify-cert>
>   </options>)
>
> And here’s the error message I get:
>
> System ID: /Users/bshort/Documents/Digital objects/api
> testing/get_account_test.xquery
> Severity: error
> Description: SVC-SOCCONN:
> xdmp:http-get("https://api-publisher.mirror-image.com:443/v5/",
> <options xmlns="xdmp:http"><authentication method="basic"><username>nejm</username>...</authentication></options>) --
> Socket connect error: SSL_connect
> 172.16.12.139:52345-104.131.189.21:443: certificate verify failed
> (0x14090086)
> Start location: 13:0
>
> We’ve verified that the certificates are valid and that the call isn’t
> being blocked by our firewall (the trace showed the connection
> reaching from the firewall to the IP address). In addition, when I
> tried to replicate the call through other tools (including Postman and
> Python’s Requests module), I am able to make the call without errors.
> Finally, if I set verify-cert to false, the call goes through. We’re
> at the point where we think this might be a MarkLogic bug of some
> kind. Has anyone out there run into a problem like this before?
>
> Thanks very much,
> -Brendan Short
>
> Brendan Short  |  Team Leader, Content Systems  |  NEJM Group
> 860 Winter Street, Waltham, MA 02451  |  781-434-7166  |  bsh...@mms.org
>
>
>
> This email message is a private communication. The information
> transmitted, including attachments, is intended only for the person or
> entity to which it is addressed and may contain confidential,
> privileged, and/or proprietary material. Any review, duplication,
> retransmission, distribution, or other use of, or taking of any action
> in reliance upon, this information by persons or entities other than
> the intended recipient is unauthorized by the sender and is
> prohibited. If you have received this message in error, please contact
> the sender immediately by return email and delete the original message
> from all computer systems. Thank you.
>
>


Re: [MarkLogic Dev General] xdmp:forest-clear

2015-10-02 Thread Wayne Feick
When you clear a forest in a database you're running against, the clear
operation is run asynchronously. Your query of test.xml is running prior
to the clear actually happening.

Wayne.
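
A way to see the asynchrony in the original example is to put the check in a separate statement (which runs at a new timestamp) and give the clear time to complete. A sketch; the wait needed is not guaranteed:

```xquery
(: Statement 1: request the clear; it returns before the forests are
   actually cleared. :)
xdmp:forest-clear(xdmp:database-forests(xdmp:database()));

(: Statement 2: runs at a new timestamp. Give the asynchronous clear a
   moment, then re-check. :)
xdmp:sleep(2000);
fn:doc("test.xml")
```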


On 10/02/2015 08:35 AM, Andreas Hubmer wrote:
> Hello,
>
> I've found an issue with xdmp:forest-clear. 
>
> xdmp:document-insert("test.xml", );
> xdmp:forest-clear(xdmp:database-forests(xdmp:database()));
> doc("test.xml") (: should be empty :)
>
> When executing the above multi-statement transaction, I would expect
> an empty result, but instead the document is returned.
> When I execute doc("test.xml") manually some moments later, the
> expected empty result is returned.
>
> As a workaround I could use xdmp:document-delete(cts:uris()) but my
> assumption is that xdmp:forest-clear is much faster. Is that true?
>
> Is xdmp:forest-clear somehow asynchronous?
> Or is it maybe a visibility problem with deleted data? I run the code
> as admin user (just for testing).
>
> Regards,
> Andreas
>
> -- 
> Andreas Hubmer
> IT Consultant
>
> EBCONT enterprise technologies GmbH 
> Millennium Tower
> Handelskai 94-96
> A-1200 Vienna
>
> Web: http://www.ebcont.com
>
> OUR TEAM IS YOUR SUCCESS
>
> UID-Nr. ATU68135644
> HG St.Pölten - FN 399978 d
>
>


Re: [MarkLogic Dev General] Mount Db error while ML upgrade from 6 to 8

2015-06-04 Thread Wayne Feick
Hi Indy,

It sounds like you've hit a short-lived bug where the indexing
information for a serialized query was improperly encoded.

Are you under a support contract? If so, that's the best way to resolve
the issue. If not, we can discuss privately, as the fix involves a tweak
to the underlying files.

Wayne.


On 06/04/2015 12:35 AM, Indrajeet Verma wrote:
> Hi,
>
> Please help me if somebody has faced below issue while upgrading ML
> from 6 to 8.
>
> This forest is unavailable due to the following error:
> XDMP-FORESTERR: Error in startup of forest FilingsTool-Content:
> XDMP-RECOVERY: Recovery error on forest FilingsTool-Content after
> 27697 redo records -- {{fsn=631207, chksum=0xacd80f40, words=1837},
> op=insert, time=1430744401, mfor=4421201772788570268,
> mtim=1416568023807, mfsn=631207, fmcl=17422642808301536264,
> fmf=4421201772788570268, fmt=1416568023807, fmfsn=631207,
> sk=3735405548654489927} XDMP-FORESTNOT: Forest FilingsTool-Content not
> available: XDMP-FORESTERR: Error in merge of forest
> FilingsTool-Content: XDMP-BAD: Bad expr index
>
> Regards,
> Indy
>
>


Re: [MarkLogic Dev General] Forest Warning

2015-05-11 Thread Wayne Feick
Creating more forests won't help, because they'll all have the same
in-memory limits and the rebalancer will run into the same issue as the
reindexer if it tries to move the document to a different forest.

Wayne.


On 05/11/2015 11:18 AM, Indrajeet Verma wrote:
> Shashi,
>
> I am sure, this file size must be large. 
>
> It is not good that you have created only one forest. There should
> be more than that; I would recommend around 8-10 forests.
>
> However, these should be based on the number of CPU cores: one forest
> per 2 cores. But if your server has 32 cores, 8-10 forests should be
> sufficient. Somebody please correct me if I am wrong.
>
> As per my understanding you should to do following things to solve
> your problem,
>
> 1. Delete this large file. I would not recommend increasing the
> memory size just for the heck of it. Your performance will be degraded.
>
> 2. Create more forests and attach them to the database. If you are
> using 7+, this automatically re-balances the data.
>
> 3. After re-balancing the content, you might perform a manual merge to
> reclaim disk space immediately.
>
> 4. Split your files into smaller sizes
>
> Regards,
> Indy
>
>
> On Mon, May 11, 2015 at 11:30 PM, Wayne Feick
> <wayne.fe...@marklogic.com> wrote:
>
> Looping in some additional information from private email. Since
> your list size is already configured to the maximum (32768), you
> could try to identify some index settings that you don't actually
> need and turn them off.
>
> If that isn't an option, you could try breaking it up into
> multiple documents, and then deleting the original document with
> xdmp:document-delete().
>
> Wayne.
>
>
>
> On 05/11/2015 10:52 AM, Wayne Feick wrote:
>> Hi Shashidhar,
>>
>> It sounds like the document was close to the limit when it was
>> originally ingested, and that turning on additional index setting
>> put it over the top.
>>
>> The error message says that your in-memory list storage is full,
>> so if you go to the Admin UI and look at the database settings,
>> you'll see an entry for "in memory list size". Configure a larger
>> value there and you should be able to finish your reindex.
>>
>> Wayne.
>>
>>
>> On 04/23/2015 01:28 AM, Shashidhar Rao wrote:
>>> Hi,
>>>
>>> Can somebody help me how to fix this issue
>>>
>>> There is currently an XDMP-FORESTERR: Error in reindex of forest
>>> PROD_DB_1: XDMP-REINDEX: Error reindexing
>>> fn:doc("/home/data/Folder2/US07625699-20091201-T2.XML"):
>>> XDMP-FRAGTOOLARGE: Fragment of
>>> /home/data/Folder2/US07625699-20091201-T2.XML too large for
>>> in-memory storage: XDMP-INMMLISTFULL: In-memory list storage
>>> full; list: table=100%, wordsused=50%, wordsfree=25%,
>>> overhead=25%; tree: table=0%, wordsused=6%, wordsfree=94%,
>>> overhead=0% exception. Information on this page may be missing.
>>>
>>> It says US07625699-20091201-T2.XML too large.
>>> what are the other options any suggestions would be helpful.
>>>
>>> Is deleting this file an option as the last resort?
>>>
>>> Thanks
>>>
>>>
>>>

Re: [MarkLogic Dev General] Forest Warning

2015-05-11 Thread Wayne Feick
Hi Shashidhar,

It sounds like the document was close to the limit when it was
originally ingested, and that turning on additional index settings put it
over the top.

The error message says that your in-memory list storage is full, so if
you go to the Admin UI and look at the database settings, you'll see an
entry for "in memory list size". Configure a larger value there and you
should be able to finish your reindex.

Wayne.
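
The same change can be made with the Admin API instead of the Admin UI. A sketch, with a hypothetical database name; the size is in megabytes, and (per the follow-up below) 32768 is the maximum:

```xquery
xquery version "1.0-ml";
import module namespace admin = "http://marklogic.com/xdmp/admin"
    at "/MarkLogic/admin.xqy";

(: Raise the in-memory list size (MB) for one database and save. :)
let $config := admin:get-configuration()
let $db     := xdmp:database("PROD_DB")  (: hypothetical database name :)
let $config := admin:database-set-in-memory-list-size($config, $db, 1024)
return admin:save-configuration($config)
```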


On 04/23/2015 01:28 AM, Shashidhar Rao wrote:
> Hi,
>
> Can somebody help me how to fix this issue
>
> There is currently an XDMP-FORESTERR: Error in reindex of forest
> PROD_DB_1: XDMP-REINDEX: Error reindexing
> fn:doc("/home/data/Folder2/US07625699-20091201-T2.XML"):
> XDMP-FRAGTOOLARGE: Fragment of
> /home/data/Folder2/US07625699-20091201-T2.XML too large for
> in-memory storage: XDMP-INMMLISTFULL: In-memory list storage full;
> list: table=100%, wordsused=50%, wordsfree=25%, overhead=25%; tree:
> table=0%, wordsused=6%, wordsfree=94%, overhead=0% exception.
> Information on this page may be missing.
>
> It says US07625699-20091201-T2.XML too large.
> what are the other options any suggestions would be helpful.
>
> Is deleting this file an option as the last resort?
>
> Thanks
>
>
>


Re: [MarkLogic Dev General] Forest Warning

2015-05-11 Thread Wayne Feick
Looping in some additional information from private email. Since your
list size is already configured to the maximum (32768), you could try to
identify some index settings that you don't actually need and turn them off.

If that isn't an option, you could try breaking it up into multiple
documents, and then deleting the original document with
xdmp:document-delete().

Wayne.
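
A rough sketch of the split-then-delete idea (the URI is the one from the error; the split point and generated URIs are illustrative only, and the right granularity depends on the document):

```xquery
(: Split a too-large document into one document per second-level element,
   then delete the original, all in one transaction. :)
let $uri := "/home/data/Folder2/US07625699-20091201-T2.XML"
let $doc := fn:doc($uri)
return (
  for $child at $i in $doc/*/*
  return xdmp:document-insert($uri || "-part-" || $i, document { $child }),
  xdmp:document-delete($uri)
)
```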


On 05/11/2015 10:52 AM, Wayne Feick wrote:
> Hi Shashidhar,
>
> It sounds like the document was close to the limit when it was
> originally ingested, and that turning on additional index setting put
> it over the top.
>
> The error message says that your in-memory list storage is full, so if
> you go to the Admin UI and look at the database settings, you'll see
> an entry for "in memory list size". Configure a larger value there and
> you should be able to finish your reindex.
>
> Wayne.
>
>
> On 04/23/2015 01:28 AM, Shashidhar Rao wrote:
>> Hi,
>>
>> Can somebody help me how to fix this issue
>>
>> There is currently an XDMP-FORESTERR: Error in reindex of forest
>> PROD_DB_1: XDMP-REINDEX: Error reindexing
>> fn:doc("/home/data/Folder2/US07625699-20091201-T2.XML"):
>> XDMP-FRAGTOOLARGE: Fragment of
>> /home/data/Folder2/US07625699-20091201-T2.XML too large for
>> in-memory storage: XDMP-INMMLISTFULL: In-memory list storage full;
>> list: table=100%, wordsused=50%, wordsfree=25%, overhead=25%; tree:
>> table=0%, wordsused=6%, wordsfree=94%, overhead=0% exception.
>> Information on this page may be missing.
>>
>> It says US07625699-20091201-T2.XML too large.
>> what are the other options any suggestions would be helpful.
>>
>> Is deleting this file an option as the last resort?
>>
>> Thanks
>>
>>
>>


Re: [MarkLogic Dev General] XDMP-TOOMANYSTANDS

2015-05-01 Thread Wayne Feick
Shashidhar,

It's not necessary to limit to 1 forest to avoid exceeding your
hardware. Forests are just partitions of your database, much like stands
are partitions of a forest. For a given number of documents in your
database, your disk footprint will be comparable, whether you have 1
forest or 6.

If you add additional forests to your database, the rebalancer will
spread documents across all of the forests. This will, however, consume
a fair bit of I/O since the rebalancer is transactional and will create
new stands in the new forests, which will then merge into large stands
in those forests. As documents are rebalanced out of your current
forest, merges will also occur there to reclaim disk space by removing
the deleted documents.

An additional benefit of adding more forests is that you will be able to
ingest new documents at a higher rate due to increased parallelism
across the forests.
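For reference, adding and attaching a forest can be done in a single Admin API transaction. This is a minimal sketch under assumed names ("Documents" database, "Documents-2" forest); it is not production code:

```xquery
xquery version "1.0-ml";
import module namespace admin = "http://marklogic.com/xdmp/admin"
    at "/MarkLogic/admin.xqy";

let $config := admin:get-configuration()
(: create a new forest on this host; () = default data directory :)
let $config := admin:forest-create($config, "Documents-2", xdmp:host(), ())
(: look up the pending forest's id from the modified configuration :)
let $fid    := admin:forest-get-id($config, "Documents-2")
(: attach it to the database; the rebalancer then starts moving documents :)
let $config := admin:database-attach-forest(
                 $config, xdmp:database("Documents"), $fid)
return admin:save-configuration($config)
```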

Wayne.



On 05/01/2015 06:17 PM, Shashidhar Rao wrote:
> Hi David,
>
> Due to hardware limitations and disk space I had to stick to 1
> forest. I will certainly take note of your suggestions once this becomes a
> full project; right now it is just a POC with limited hardware.
>
> Thanks
>
> On Sat, May 2, 2015 at 6:18 AM, Shashidhar Rao
> <raoshashidhar...@gmail.com> wrote:
>
> Hi ,
>
> Thanks all for your suggestions, now setting to 64GB MAX MERGE
> instead of 32 GB has improved my situation. Merging is constantly
> happening.
>
> But then I had configured too many features (2- and 1-character
> searches, and field positions and values), which the docs say produce
> large indexes; disk usage had already run to almost 70% before I
> figured out that I don't need these features.
>
> Now MarkLogic has started deleting those indexes and reclaiming the
> space, and free space has increased.
>
> Now it has gone from 62 stands down to 50, and I have only 1 forest.
>
> Thanks
>
> On Fri, May 1, 2015 at 6:06 PM, Shashidhar Rao
> <raoshashidhar...@gmail.com> wrote:
>
> Hi ,
>
> Based on the below link:
> https://docs.marklogic.com/8.0/messages/XDMP-en/XDMP-TOOMANYSTANDS
>
> for the above too many stands I have set the
>
> MAX-MERGE-SIZE to 64GB instead of 32 GB
>
> Can someone tell me whether this 64GB is OK?
>
> Thanks
>
>
>
>
>
>
> _______
> General mailing list
> General@developer.marklogic.com
> Manage your subscription at: 
> http://developer.marklogic.com/mailman/listinfo/general



Re: [MarkLogic Dev General] cluster quorum

2014-12-19 Thread Wayne Feick
There are configurations and sequences of events that can lead to
forests remaining online when there are N/2 or fewer hosts online.

In the simplest case, if you have a forest that is not configured for
either local or shared disk failover, as long as the forest's host is up
the forest will be available regardless of any quorum issues. In this
case, as long as a database's forests are available, the database will
be available.

For local disk failover, forests that are "sync replicating" will
transition to "open" in response to a host failure that makes the "open"
forest inaccessible, but an "open" forest will not go offline in
response to some other host failing. However, once you lose quorum, no
forests will failover anymore if you lose another host.

Given that, and depending on how your forests are distributed and the
order of host failures, it's possible that you can remain online even
though enough hosts have failed that you no longer have quorum. You just
can't rely on staying online with that many failed hosts.
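One way to see which of these states your forests are currently in is to query forest status directly. A small sketch (run against the database in question; the `*:state` wildcard avoids binding the status namespace):

```xquery
(: List each forest of the current database, including replicas,
   with its state, e.g. "open" or "sync replicating". :)
for $f in xdmp:database-forests(xdmp:database(), fn:true())
return fn:concat(
  xdmp:forest-name($f), ": ",
  fn:string(xdmp:forest-status($f)/*:state))
```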

Databases with many forests spread across many hosts typically can't
stay online if you lose quorum because some forest(s) will become
unavailable.

What sort of failover do you have configured?

Wayne.



On 12/19/2014 08:00 AM, Whitby, Rob wrote:
> thanks Danny,
>
> so what triggers the voting? Not having all the forests available?
>
>
>
> From: Danny Sinang <d.sin...@gmail.com>
> Reply-To: MarkLogic Developer Discussion
> <general@developer.marklogic.com>
> Date: Friday, 19 December 2014 15:44
> To: MarkLogic Developer Discussion <general@developer.marklogic.com>
> Subject: Re: [MarkLogic Dev General] cluster quorum
>
> The downed hosts probably did not contain the master forests, so no
> quorum vote was called for.
>
> We have the same setup. We can even go to just having one host alive
> and still serve data.
>
> Regards,
> Danny
>
> Sent from my iPhone
>
> On Dec 19, 2014, at 10:39 AM, Whitby, Rob <rob.whi...@springer.com> wrote:
>
>> Hi,
>>
>> I have a cluster with 6 hosts. I stopped 3 of the hosts and the
>> remaining 3 are still accepting reads/writes.
>>
>> Why is this? I was expecting the quorum not to be reached because we
>> don’t have >50% of hosts available.
>>
>> https://help.marklogic.com/knowledgebase/article/View/119/0/start-up-quorum-and-forest-level-failover
>>
>> Cheers
>> Rob
>>
>> ___
>> General mailing list
>> General@developer.marklogic.com <mailto:General@developer.marklogic.com>
>> http://developer.marklogic.com/mailman/listinfo/general
>
>
> ___
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general



Re: [MarkLogic Dev General] Is there any way to restrict Marklogic search on specific version of the document.?

2014-12-02 Thread Wayne Feick
In shipping versions, the dls:latest element is used. Beginning in 8,
there is a collection to avoid a property join; this improves
performance when there is a large number of old versions of documents.
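With the Search API, the dls:latest property can be applied through the additional-query option. A sketch, assuming dls-managed documents; the query text is hypothetical, and the exact value stored in dls:latest should be confirmed against a document's properties fragment:

```xquery
xquery version "1.0-ml";
import module namespace search = "http://marklogic.com/appservices/search"
    at "/MarkLogic/appservices/search/search.xqy";
declare namespace dls = "http://marklogic.com/xdmp/dls";

(: Constrain results to the latest version of each managed document
   via a properties-fragment query on dls:latest. :)
search:search("cat",
  <options xmlns="http://marklogic.com/appservices/search">
    <additional-query>{
      cts:properties-fragment-query(
        cts:element-value-query(xs:QName("dls:latest"), "true"))
    }</additional-query>
  </options>)
```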

An early access version of 8 is available.

Wayne.


On 12/02/2014 07:44 AM, David Ennis wrote:
> HI.
>
> I think the answer is not too difficult if you keep in mind that the
> magic of DLS is partially exposed by way of properties on your documents:
>
> Off the top of my head: in your search, also search the
> properties fragment for dls data that you can use to isolate what
> you want.
>
> Look at the properties of one of your latest versions and see exactly
> what you can isolate on, but I believe the dls:latest element will be
> your friend here.
>
> Kind Regards,
> David Ennis
>
>
>
>
> David Ennis
> *Content Engineer*
>
> HintTech  <http://www.hinttech.com/>
> Mastering the value of content
> creative | technology | content
>
> Delftechpark 37i
> 2628 XJ Delft
> The Netherlands
> T: +31 88 268 25 00
> M: +31 63 091 72 80 
>
> http://www.hinttech.com
>
> On 2 December 2014 at 16:28, shruti kapoor <shrutikapoor@gmail.com> wrote:
>
> Hi all,
>
>
>
> I store managed documents into a specific collection in marklogic
> through dls library. I want search to include only current version
> documents. Is there any way to do it using search:search API?
>
> I know how to do it using cts:search(). I know one option is to
> push old versions into different collection and current version
> into different. Search only on collection with current version.
> For some reasons I don't want to do this. Are there any other ways
> of doing it?
>
>
> -- 
>
> Regards, 
> *Shruti Kapoor*
>
>
>
> ___
> General mailing list
> General@developer.marklogic.com
> <mailto:General@developer.marklogic.com>
>     http://developer.marklogic.com/mailman/listinfo/general
>
>
>
>
> ___
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general



Re: [MarkLogic Dev General] Geo-spatial API - Roadside Distance

2014-11-12 Thread Wayne Feick
No, that would require base maps to have knowledge of the road system,
which is not part of the product.

MarkLogic provides a lower level API to search for content based on
geospatial input (e.g. circle, polygon, distance from point), as well as
an alerting system that can trigger when new content is inserted that
matches geospatial (and/or other non-geospatial) queries.

You can learn more by looking over the documentation.

http://docs.marklogic.com/guide/search-dev/geospatial

However, you could store base map information in MarkLogic, and then
build an application that uses geospatial queries to retrieve the
relevant road information that would be used by an application to do
distance calculations.
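A minimal sketch of such a lower-level geospatial query; the element name and coordinates are made up for illustration:

```xquery
(: Find documents whose <location> point element falls within a
   10-unit radius (miles by default) of a given point. :)
cts:search(
  fn:collection(),
  cts:element-geospatial-query(
    xs:QName("location"),
    cts:circle(10.0, cts:point(37.45, -122.16))))
```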

Wayne.


On 11/12/2014 08:25 PM, abhishek.srivas...@cognizant.com wrote:
> Does MarkLogic Geo-spatial APIs provide road distance between two pair of 
> latitude and longitude.
>
> Please suggest...
> Thanks
> Abhishek
> This e-mail and any files transmitted with it are for the sole use of the 
> intended recipient(s) and may contain confidential and privileged 
> information. If you are not the intended recipient(s), please reply to the 
> sender and destroy all copies of the original message. Any unauthorized 
> review, use, disclosure, dissemination, forwarding, printing or copying of 
> this email, and/or any action taken in reliance on the contents of this 
> e-mail is strictly prohibited and may be unlawful. Where permitted by 
> applicable law, this e-mail and other e-mail communications sent to and from 
> Cognizant e-mail addresses may be monitored.
> ___
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general



Re: [MarkLogic Dev General] Adding new index to replicated ML servers

2014-09-29 Thread Wayne Feick
A refinement to that. The replica database will generate index
information only when it receives fragments from the master database,
and does not do any reindexing on its own.

That said, the advice of changing the replica first and then the master
is correct. When the master does its reindexing, it will replicate the
reindexed fragments to the replica database, which will then also
generate new index information.

If you change the master's settings before the replica's, you're in
danger of having improperly indexed fragments on the replica. As a
result, you may not be able to use some of the range indexes in queries
on the replica. If/when you deconfigure database replication, the
replica database would then reindex as needed.

Wayne.



On 09/28/2014 11:20 PM, Michael Blakeley wrote:
> Change the replica first, then the master. Each will index independently.
>
> -- Mike
>
> On Sep 28, 2014, at 23:04, vi...@tilaton.fi wrote:
>
>> I tried to search for the answer, but couldn't find it, I would
>> appreciate if any of you could direct me to the correct chunk of
>> documentation to find this out:
>>  
>> We have two node ML7 setup where one of the nodes is master and other
>> slave using database replication matched by the database name. (This
>> is NOT flexible replication)
>>  
>> We execute queries to both hosts 24/7 but obviously write only to the
>> primary one.
>>  
>> We often need to add new indexes to the system and the (multi
>> part) question is:
>>  
>> 1. Does ML replicate the index definitions between hosts?
>> 2. Does ML replicate the indexes themselves between the hosts?
>> 3. What's the correct procedure to add a new index in this setup?
>>  
>> We currently add the indexes by hand, starting from the slave. We
>> noticed that the slave does not initiate automatic reindexing, even
>> though we have "reindexer enable" set to true and figured it's better
>> to add the new index to slave first so that it will be there when the
>> primary starts reindexing.
>>  
>> So, is it required at all to add the indexes first to slave, or will
>> ML take care of the configuration changes via the replication also or
>> does that include only data?
>>  
>> Ville
>> ___
>> General mailing list
>> General@developer.marklogic.com <mailto:General@developer.marklogic.com>
>> http://developer.marklogic.com/mailman/listinfo/general
>
>
> ___
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general



Re: [MarkLogic Dev General] unable to merge all deleted fragments out of the database

2014-09-26 Thread Wayne Feick
If you pass an explicit timestamp to xdmp:merge() (e.g. the return value
of xdmp:request-timestamp()) the server will discard all deleted fragments.
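A sketch of forcing that from a query; the options element is in the xdmp:merge namespace:

```xquery
(: Merge with an explicit merge timestamp so deleted fragments up to
   the current request timestamp are discarded. :)
xdmp:merge(
  <options xmlns="xdmp:merge">
    <merge-timestamp>{xdmp:request-timestamp()}</merge-timestamp>
  </options>)
```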

The hour window was added to avoid XDMP-OLDSTAMP errors that had cropped
up in some of our stress testing, most commonly for replica databases,
but also causing transaction retries for non-replica databases.

We've done some tuning of the change since then (e.g. not holding on to
the last hour of deleted fragments after a reindex), and we may do some
further tuning so this is less surprising to people.

Wayne.



On 09/26/2014 09:41 AM, Michael Blakeley wrote:
> From what I've seen there now seems to be an offset from the merge
> timestamp. I think it's about an hour. No idea why this was introduced, but
> try waiting an hour and then merging again.
>
> -- Mike
>
> On 26 Sep 2014, at 08:33 , Brent Hartwig  wrote:
>
>> Hello, and Happy Friday!
>>  
>> While attempting to calculate an expansion ratio, a client and I initiated 
>> manual merges of a ML 7.0-3 database, then recorded the size.  We did see 
>> the database size decrease, as well as merge-related entries in the log; 
>> however, the number of deleted fragments never reached zero.  This is 
>> different than I remember from previous versions.  A scan of release notes 
>> for versions 5, 6, and 7 didn’t turn up anything.
>>  
>> The merge timestamp is zero, and the system would have been relatively quiet 
>> when merge was requested.
>>  
>> The number of remaining deleted fragments was relatively low, and did 
>> decrease after some merges, just never to zero.  Given the size of our 
>> documents, these few deleted fragments would not have significantly skewed 
>> the results.  Nonetheless, curiosity has the better of me.
>>  
>> After the second test of the day, there were ~65K documents taking up ~150 
>> MB.  There were 879 deleted fragments after a merge.
>>  
>> Might this be a display bug, or does the merge process ultimately decide 
>> just how far to go?
>>  
>> Thanks much.
>>  
>> -Brent
>>  
>> Brent Hartwig, Solutions Architect | RSI Content Solutions
>> ___
>> General mailing list
>> General@developer.marklogic.com
>> http://developer.marklogic.com/mailman/listinfo/general
> ___
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general



Re: [MarkLogic Dev General] LDAP MarkLogic - Admin access

2014-08-28 Thread Wayne Feick
The fix will be in 7.0-4, which should be out in the next month or so
(with the usual disclaimer that we don't preannounce release dates,
plans can change, etc.).

Wayne.


On 08/27/2014 10:27 PM, Wayne Feick wrote:
> Hi Abhishek,
>
> That looks like a bug in xdmp:user-last-login(). I've filed it (bug
> 28591), and assigned to the appropriate developer.
>
> I don't have a good work around at the moment, other than to use an
> internal user on the Admin UI.
>
> Thanks for finding this.
>
> Wayne.
>
>
> On 08/27/2014 07:38 PM, abhishek.srivas...@cognizant.com wrote:
>> Hi All,
>>
>> Trying to connect to the Admin UI over a new port with external LDAP security,
>> but getting the below exception. The app server setup is exactly the same as
>> 8001 (the Admin app server) except that internal-security is false and
>> external-security points to the specified external security configuration.
>>
>> 500 Internal Server Error
>>
>> SEC-USERDNE: xdmp:user-last-login() User does not exist
>>
>>
>> Thanks
>> Abhishek
>> ___
>> General mailing list
>> General@developer.marklogic.com
>> http://developer.marklogic.com/mailman/listinfo/general



Re: [MarkLogic Dev General] LDAP MarkLogic - Admin access

2014-08-27 Thread Wayne Feick
Hi Abhishek,

That looks like a bug in xdmp:user-last-login(). I've filed it (bug
28591), and assigned to the appropriate developer.

I don't have a good work around at the moment, other than to use an
internal user on the Admin UI.

Thanks for finding this.

Wayne.


On 08/27/2014 07:38 PM, abhishek.srivas...@cognizant.com wrote:
> Hi All,
>
> Trying to connect to the Admin UI over a new port with external LDAP security,
> but getting the below exception. The app server setup is exactly the same as
> 8001 (the Admin app server) except that internal-security is false and
> external-security points to the specified external security configuration.
>
> 500 Internal Server Error
>
> SEC-USERDNE: xdmp:user-last-login() User does not exist
>
>
> Thanks
> Abhishek
> ___
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general



Re: [MarkLogic Dev General] XDMP-REVIDXBADQRY: Reverse index bad query: and-query

2014-07-07 Thread Wayne Feick
With fast-reverse-searches=false, all we store in the index is the fact 
that there was a serialized query and any use of cts:reverse-query() 
will run entirely in the filter. Obviously, this will not perform well.


Re: how you can get around ill-formed queries with
fast-reverse-searches=true, your approach of changing the namespace is
what I was going to suggest. When you're ready to fill in the term, rewrite
the document in the cts namespace and the query will go into the reverse
index.


Wayne.


On 07/04/2014 02:23 PM, William Sawyer wrote:


That's what I was thinking as well.  I ended up just removing the cts 
namespace and then add it back when evaulating the placeholders.


Thanks,
-Will

On Jul 4, 2014 2:32 PM, "Gavin Haydon"
<gavin.hay...@pressassociation.com> wrote:


Hi

I suspect that the fast reverse index is attempting to index all
serialised cts query elements that are saved into the database.
This way it can support those fast reverse queries you might want
to perform, and find the fragments containing the matching cts query.

Your serialised cts query has been modified to contain
placeholders, and as such is no longer valid against the schema
for cts query, or the expectations of the fast index. Could be
that you simply can't fast-index customised cts queries?

The standard reverse index seems to be more tolerant (and is
slower), relying heavily upon filtering to remove the false
positives. It may not run the same checks upon a save.

Hope this helps.
Regards
Gavin Haydon

Sent from my iPad

On 4 Jul 2014, at 20:30, "William Sawyer" <wilby.saw...@gmail.com> wrote:


I am trying to save a serialized cts:query with placeholder
values into a document and it keeps throwing an XDMP-REVIDXBADQRY
error.

Example.


http://marklogic.com/cts";>

node
type

exact


http://test.com/meta";>meta:meta
id

exact





I found the document will save if I remove my placeholder
elements.  Also found if I turn off "fast reverse searches" index
then it saves. I want to be able to quickly replace any
placeholder values and then pass it into cts:query().  Is there a
way to keep that index on and keep my placeholder elements?

ML: 7.0-2.3

Thanks,
-Will
___
General mailing list
General@developer.marklogic.com
<mailto:General@developer.marklogic.com>
http://developer.marklogic.com/mailman/listinfo/general


This email is from the Press Association. For more information,
see www.pressassociation.com <http://www.pressassociation.com>.
This email may contain confidential information. Only the
addressee is permitted to read, copy, distribute or otherwise use
this email or any attachments. If you have received it in error,
please contact the sender immediately. Any opinion expressed in
this email is personal to the sender and may not reflect the
opinion of the Press Association. Any email reply to this address
may be subject to interception or monitoring for operational
reasons or for lawful business practices.

___
General mailing list
General@developer.marklogic.com
<mailto:General@developer.marklogic.com>
http://developer.marklogic.com/mailman/listinfo/general



___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general




Re: [MarkLogic Dev General] New Feature Request: Unique Value Range Indexes

2014-06-04 Thread Wayne Feick
Fair points, Ron. We have RFE 2322 filed back in Feb 2012 to track this. 
I'll add a note indicating your interest as well.


Wayne.


On 06/04/2014 03:00 PM, Ron Hitchens wrote:


Wayne,

   Thanks for this.  It's a useful code pattern for this sort of thing 
and I will probably use it for the specific requirement I have at the 
moment (I was planning to do something similar anyway).


   But this code, or any user-level code, does not fully implement the 
uniqueness guarantee I'd like to have and that I think a specialized 
range index could easily provide.  This will work, but as you say it 
would be necessary to always use this code convention.  It would not 
prevent creation of duplicate values by code that doesn't follow the 
convention.  If uniqueness were enforced by the index, then I could be 
confident that uniqueness is absolutely guaranteed and I don't need to 
trust anyone (including my future self) to always follow the same 
locking protocol.


---
Ron Hitchens {r...@overstory.co.uk} +44 7879 358212


On Jun 4, 2014, at 9:19 PM, Wayne Feick <wayne.fe...@marklogic.com> wrote:


The simplest is to have the document URI correspond to the element 
value, and if you can use a random value it's good for concurrency.


If you can't do that, but you want to ensure only one document can 
have a particular value for an element, I think it's pretty easy 
using xdmp:lock-for-update() on an URI that corresponds to the 
element value. You don't actually need to create a document at that 
URI, just use it to serialize transactions. Here's one way to do it.


declare function lock-element-value($qn as xs:QName, $v as item())
{
   xdmp:lock-for-update(
     "http://acme.com/"
     || xdmp:hash64(fn:namespace-uri-from-QName($qn))
     || "/"
     || xdmp:hash64(fn:local-name-from-QName($qn))
     || "/"
     || xdmp:hash64(fn:string($v)))
};

You'd then do something like the following.

let $lock := lock-element-value($qn, $v)
let $existing :=
   cts:search(fn:collection(),
     cts:element-range-query($qn, "=", $v, "unfiltered"))
return
   if (fn:exists($existing))
   then ... do whatever you need to do with the existing document
   else ... create a new document, safe from a race with another transaction

You'd want to use lock-element-value() in any updates that could 
affect a change in the element value (insert, update, delete). I 
think you could get away with ignoring deletes since those would 
automatically serialize with any transaction that would modify the 
existing document.


We use this sort of pattern internally to ensure uniqueness of IDs.

Wayne.


On 06/04/2014 12:49 PM, Whitby, Rob wrote:

I thought 2 simultaneous transactions would both get read locks on the uri, 
then one would get a write lock and the other would fail and retry. Maybe I'm 
missing something though.

But anyway, I agree unique indexes would be a handy feature. e.g. our docs have 
a DOI element which *should* be unique but occasionally aren't, would be nice 
to enforce that rather than have to code defensively.

Rob

From: general-boun...@developer.marklogic.com
[general-boun...@developer.marklogic.com] on behalf of Ron Hitchens
[r...@ronsoft.com]
Sent: 04 June 2014 19:31
To: MarkLogic Developer Discussion
Subject: Re: [MarkLogic Dev General] New Feature Request: Unique Value Range
Indexes

Rob,

I believe there is a race condition here.  A document may not exist as-of 
the timestamp when this request starts running, but some other request could 
create one while it's running.  This request would then over-write that 
document.

I'm actually more concerned about element values inside documents than 
generating unique document URIs.  It's easy to generate document URIs with 
64-bit random numbers that are very unlikely to collide.  But I want to 
guarantee that some meaningful value inside a document is unique across all 
documents.

In my case, the naming space is actually quite small because I want the IDs to be 
meaningful but unique.  For example "images:cats:fluffy:XX.png", where XX can 
increment or be set randomly until the ID is unique.  One way to check for uniqueness is 
to make the document URI from this ID, then test for an existing document.
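That probe could be sketched like this (a rough sketch with a hypothetical URI prefix and element name; fn:doc-available() does the existence test, and the check and insert must run in the same transaction, ideally under a lock, to be safe):

```xquery
(: sketch: derive the URI from the meaningful ID, probe, then insert.
   "/idregistry/" and the <id> element are made up for illustration. :)
let $id  := "images:cats:fluffy:" || xs:string(xdmp:random(99)) || ".png"
let $uri := "/idregistry/" || $id
return
  if (fn:doc-available($uri))
  then fn:error(xs:QName("ID-COLLISION"), $id)  (: caller retries with a new suffix :)
  else (xdmp:document-insert($uri, <id>{ $id }</id>), $id)
```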

But this doesn't solve the general problem.  I could conceivably have 
multiple elements in the document that I want to be unique.  To check for 
unique element values it's necessary to run a cts query against the element(s). 
 And I'm not sure if you can completely close the race window between checking 
for an existing instance and inserting a new one if the query comes back empty.

Someone from ML pointed out privately that checking for uniqueness in the 
index would require cross-cluster communication.

Re: [MarkLogic Dev General] New Feature Request: Unique Value Range Indexes

2014-06-04 Thread Wayne Feick
I want to ensure that no two documents with the same ID wind up in the 
database.  I know how to accomplish this using locks (I'm pretty sure) but any such 
implementation is awkward and prone to subtle edge case errors, and can be 
difficult to test.

   It seems to me that this is something that MarkLogic could do much more 
reliably and quickly than any user-level code.  The thought that occurred to me 
is a variation on range indexes which only allow a single instance of any given 
value.

   Conventional range indexes work by creating term lists that look like this 
(see Jason Hunter's ML Architecture paper), where each term list contains an 
element (or attribute) value and a list of fragment IDs where that term exists.

aardvark | 23, 135, 469, 611
ant  | 23, 469, 558, 611, 750
baboon   | 53, 97, 469, 621
etc...

   By making a range index like this but which only allows a single fragment ID 
in the list, that would ensure that no two documents in the database contain a 
given element with the same value.  That is, attempting to add a second 
document with the same element or attribute value would cause an exception.  
And being a range index, it would provide a fast lexicon of all the current 
unique values in the DB.

   Such an index would look something like this:

abc3vk34 | 17
bkx46lkd | 52
bz1d34nm | 37
etc...

   Usage could be something like this:

declare function create-new-id-doc ($id-root as xs:string) as xs:string
{
  try {
    let $id := $id-root || "-" || mylib:random-string(8)
    let $uri := "/idregistry/id-" || $id
    let $_ :=
      xdmp:document-insert ($uri,
        (: element names were lost in the archive; reconstructed for illustration :)
        <id-doc>
          <id>{ $id }</id>
          <created>{ fn:current-dateTime() }</created>
        </id-doc>)
    return $id
  } catch ($e) {
    create-new-id-doc ($id-root)
  }
};

   This doesn't require that I write any (possibly buggy) mutual exclusion code 
and I can be confident that once the xdmp:document-insert succeeds that the ID 
is unique in the database and that the type (as configured for the range index) 
is correct.

   Any love for Unique Value Range Indexes in the next version of MarkLogic?

---
Ron Hitchens {r...@overstory.co.uk}  +44 7879 358212

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com

This e-mail and any accompanying attachments are confidential. The information 
is intended solely for the use of the individual to whom it is addressed. Any 
review, disclosure, copying, distribution, or use of this e-mail communication 
by others is strictly prohibited. If you are not the intended recipient, please 
notify us immediately by returning this message to the sender and delete all 
copies. Thank you for your cooperation.

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Backup question

2014-05-31 Thread Wayne Feick
The problem with trying to do forest backups is that you won't have a 
consistent view of your database at a single point in time.

How large of a database is this?

Where are you backing up to? You could send to some sort of shared 
storage like a NAS, or Amazon S3. The latter would remove any concerns 
of having enough space.

Wayne.



On 05/31/2014 11:54 AM, Tim wrote:
> What about doing forest backups instead? Would I need to stagger them so they
> are not overlapping?
>
> -Original Message-
> From: general-boun...@developer.marklogic.com
> [mailto:general-boun...@developer.marklogic.com] On Behalf Of Wayne Feick
> Sent: Saturday, May 31, 2014 2:43 PM
> To: general@developer.marklogic.com
> Subject: Re: [MarkLogic Dev General] Backup question
>
> If you are doing a database backup, changing the number of forests shouldn't
> change anything.
>
> The size of a backup is the size of the database itself; our storage format is
> already very compressed, so no additional compression can be done as part of 
> the
> backup. The one difference is that we don't back up the journal files
> themselves, so there is a little bit of space saving there.
>
> You don't mention whether you're doing a one-off backup or a scheduled 
> backup.
> For a scheduled backup, you configure the number of backups you want to 
> retain,
> and need space for N+1 backups because we won't remove the oldest one until 
> the
> current one completes.
>
> Wayne.
>
>
> On 05/31/2014 07:26 AM, Tim wrote:
>> Hi Folks,
>>
>> I have question about backups. We have a fairly large database (which
>> is currently using only one forest) and are running out of space when
>> performing backups.  I do not understand the backup process very well
>> and I don't know if it is just a matter of running out of space or if
>> it has to do with performing concurrent backups and the size of any given
> forest that is being backed up.
>> Could splitting the database up into multiple smaller forests fix the 
>> problem?
>>
>> The error log merely tells me that there is insufficient space
>> available, but I don't know how to measure that - in other words is
>> there a rule for calculating the necessary disk space based on the
>> database size and any degree of compression? Is it impacted by concurrent
> backups?
>> Thanks for any help with this!
>>
>> Tim Meagher
>>
>> ___
>> General mailing list
>> General@developer.marklogic.com
>> http://developer.marklogic.com/mailman/listinfo/general
> --
> Wayne Feick
> Principal Engineer
> MarkLogic Corporation
> wayne.fe...@marklogic.com
> Phone: +1 650 655 2378
> www.marklogic.com
>
> This e-mail and any accompanying attachments are confidential. The information
> is intended solely for the use of the individual to whom it is addressed. Any
> review, disclosure, copying, distribution, or use of this e-mail communication
> by others is strictly prohibited. If you are not the intended recipient, 
> please
> notify us immediately by returning this message to the sender and delete all
> copies. Thank you for your cooperation.
>
> _______
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general
>
> ___
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general

-- 
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com

This e-mail and any accompanying attachments are confidential. The information 
is intended solely for the use of the individual to whom it is addressed. Any 
review, disclosure, copying, distribution, or use of this e-mail communication 
by others is strictly prohibited. If you are not the intended recipient, please 
notify us immediately by returning this message to the sender and delete all 
copies. Thank you for your cooperation.

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Backup question

2014-05-31 Thread Wayne Feick
If you are doing a database backup, changing the number of forests 
shouldn't change anything.

The size of a backup is the size of the database itself; our storage 
format is already very compressed, so no additional compression can be 
done as part of the backup. The one difference is that we don't back 
up the journal files themselves, so there is a little bit of space 
saving there.

You don't mention whether you're doing a one-off backup or a scheduled 
backup. For a scheduled backup, you configure the number of backups you 
want to retain, and need space for N+1 backups because we won't remove 
the oldest one until the current one completes.

Wayne.


On 05/31/2014 07:26 AM, Tim wrote:
> Hi Folks,
>
> I have question about backups. We have a fairly large database (which is
> currently using only one forest) and are running out of space when performing
> backups.  I do not understand the backup process very well and I don't know if
> it is just a matter of running out of space or if it has to do with performing
> concurrent backups and the size of any given forest that is being backed up.
> Could splitting the database up into multiple smaller forests fix the problem?
>
> The error log merely tells me that there is insufficient space available, but 
> I
> don't know how to measure that - in other words is there a rule for calculating
> the necessary disk space based on the database size and any degree of
> compression? Is it impacted by concurrent backups?
>
> Thanks for any help with this!
>
> Tim Meagher
>
> ___
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general

-- 
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com

This e-mail and any accompanying attachments are confidential. The information 
is intended solely for the use of the individual to whom it is addressed. Any 
review, disclosure, copying, distribution, or use of this e-mail communication 
by others is strictly prohibited. If you are not the intended recipient, please 
notify us immediately by returning this message to the sender and delete all 
copies. Thank you for your cooperation.

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] OpenSSL Heartbleed

2014-04-10 Thread Wayne Feick
I can confirm that we are aware of the issue and working on a patch. 
We'll announce here once it's available.


On 04/10/2014 11:20 AM, Michael Blakeley wrote:
> The OpenSSL build version is pretty easy to find:
>
> $ strings ~/Library/MarkLogic/lib/libssl.* | grep -i openssl | head -1
> SSLv2 part of OpenSSL 1.0.1e-fips 11 Feb 2013
>
> That's 7.0-2.2 for OSX but I imagine all current releases are the same. With 
> SSL enabled on a test server, https://github.com/titanous/heartbleeder says:
>
> $ heartbleeder localhost:8443
> VULNERABLE(localhost:8443) - has the heartbeat extension enabled and is 
> vulnerable to CVE-2014-0160
>
> No doubt MarkLogic is working on a new release.
>
> -- Mike
>
> On 10 Apr 2014, at 10:31 , Sergio Restrepo  
> wrote:
>
>> Hello,
>>
>> I have gotten a couple of requests from some of our customers to check on 
>> heartbleed (http://heartbleed.com/)  vulnerability in several of our 
>> applications.
>>
>> While we do not use HTTPS in most of our services, the documentation 
>> (http://docs.marklogic.com/guide/admin/SSL#id_58562) does state MarkLogic 
>> uses OpenSSL to implement SSL/TLS.
>>
>> Do you have any insight as to what version of OpenSSL is embedded in 
>> MarkLogic and if that is vulnerable to heartbleed?
>>
>> Thanks
>>
>> SERGIO RESTREPO VP, Architecture
>> Yuxi Pacific LLC, 4393 Digital Way Mason, OH 45040
>> sergio.restr...@yuxipacific.com
>> Office:  484-598-3729
>> Skype: yuxi-sergio
>>
>>
>>   
>>
>> 
>>
>> ___
>> General mailing list
>> General@developer.marklogic.com
>> http://developer.marklogic.com/mailman/listinfo/general
> ___
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general

-- 
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com

This e-mail and any accompanying attachments are confidential. The information 
is intended solely for the use of the individual to whom it is addressed. Any 
review, disclosure, copying, distribution, or use of this e-mail communication 
by others is strictly prohibited. If you are not the intended recipient, please 
notify us immediately by returning this message to the sender and delete all 
copies. Thank you for your cooperation.

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] ML 4.2 backup restore to ML 7.x

2014-03-28 Thread Wayne Feick

Hi Gene,

The "missing journal file" warning won't stop the restore. I'd recommend 
filing a support case and working things through that channel.


Wayne.



On 03/28/2014 02:21 PM, Danny Sokolsky wrote:


Was this done from a MarkLogic database backup (that is what my 
assumption was)?  Is it just that forest that is having trouble 
mounting?  How big is this backup (size and number of forests)?  Are 
you restoring into the exact same topology as you backed up from (that 
is what restore expects)?


-Danny

*From:*Gene Thomas [mailto:thomg...@att.net]
*Sent:* Friday, March 28, 2014 2:16 PM
*To:* Danny Sokolsky; MarkLogic Developer Discussion
*Subject:* Re: [MarkLogic Dev General] ML 4.2 backup restore to ML 7.x

Thank you Danny,

That is what I am trying to do.

Here is the text of an email I sent internally about it.

We have an NAS mount connected to both clusters.  On the ML 4.2 side 
it is read/write.


On the ML 7.x side it is mounted read only.

I have not been able to complete a restore from that location either 
by database restore or forest restore.


The access to the /Marklogic_temp/ mount may need to be read write. 
 When the actual restore process gets to 00:00:00 estimated time left, 
it just sits there with the forest state saying "recovering"


I see this message in the logs at that point:

Mar 28 16:01:02  MarkLogic: Missing journal file 
/mldata02/RestoredForests/dodge-daily-forest01/Journals/Journal1-1970010100-0-0-0


I have waited 30 minutes past the time it said it had finished copying 
data and still no completion of the restore.


Any help would be appreciated as we are supposed to perform the actual 
process from ML 4.2 QA to ML 7.x QA over the weekend.


*Gene*



*From:*Danny Sokolsky 
*To:* Gene Thomas ; MarkLogic Developer Discussion 


*Sent:* Friday, March 28, 2014 2:02 PM
*Subject:* RE: [MarkLogic Dev General] ML 4.2 backup restore to ML 7.x

I can't say that I have done this, but I would expect it to work.  I 
would expect that, as soon as it mounted the db in 7, that it would 
start reindexing (if reindexing is enabled).


I would test it first, but I think it will work.

-Danny

*From:*general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of *Gene 
Thomas

*Sent:* Friday, March 28, 2014 1:59 PM
*To:* general@developer.marklogic.com
*Subject:* [MarkLogic Dev General] ML 4.2 backup restore to ML 7.x

Has anyone successfully backed up a database on ML 4.2-7 and restored 
it in a new environment with a fresh ML 7.0-2.1 install?


*Gene*



___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com

This e-mail and any accompanying attachments are confidential. The information 
is intended solely for the use of the individual to whom it is addressed. Any 
review, disclosure, copying, distribution, or use of this e-mail communication 
by others is strictly prohibited. If you are not the intended recipient, please 
notify us immediately by returning this message to the sender and delete all 
copies. Thank you for your cooperation.

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Regarding updating index in Master replica DB

2014-03-20 Thread Wayne Feick

Hi Guys,

As of release 7, there is an enable flag in the replication configuration 
on the master side. If you set it to false, replication will cease until 
you set it back to true.


If you do a large reindex with replication disabled, you will likely end 
up doing a bulk resynchronization when you enable replication again 
(i.e. your replica database will not be able to serve queries until it 
resynchronizes).


Wayne.


On 03/20/2014 02:32 PM, Gene Thomas wrote:

Raghu,

If you are using database replication you should apply your index 
changes in the Replica Database before applying them in the Master 
database.
There is no way that I know of to turn off database replication or delay 
it, apart from deleting the replication entries on the Master.

Gene
MarkLogic DBA Team Lead - Atos

*From:* Raghu 
*To:* General MarkLogic Developer Discussion 


*Sent:* Thursday, March 20, 2014 11:55 AM
*Subject:* [MarkLogic Dev General] Regarding updating index in Master 
replica DB


Hi All,

   My situation is that my customers are using the master DB and I 
have to add an index without taking down the DB (which has a foreign 
replica).


If I add an element range index in a MarkLogic master DB with 
replication turned on with a foreign replica DB, will the master DB 
start reindexing, refragmenting, merging and replicating 
simultaneously? Is there a way to delay the replication till the 
reindexing and merging are complete, without turning the 
replication off?


Thanks
Raghu

___
General mailing list
General@developer.marklogic.com <mailto:General@developer.marklogic.com>
http://developer.marklogic.com/mailman/listinfo/general




___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com

This e-mail and any accompanying attachments are confidential. The information 
is intended solely for the use of the individual to whom it is addressed. Any 
review, disclosure, copying, distribution, or use of this e-mail communication 
by others is strictly prohibited. If you are not the intended recipient, please 
notify us immediately by returning this message to the sender and delete all 
copies. Thank you for your cooperation.

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Failed over Security Database

2014-03-18 Thread Wayne Feick

Agreed. Something simple would be better than nothing at all.


On 03/18/2014 02:47 PM, Geert Josten wrote:


A pretty dumb mechanism would already help a lot. Always better to 
have at least hosts 2 and 3 act as fail-over, than none.. ;-)


If you want to allow users to change it anyhow, a crude mechanism 
would serve as jump start, and basic fail-over..


Cheers

*Van:*general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] *Namens *Wayne Feick

*Verzonden:* dinsdag 18 maart 2014 22:27
*Aan:* general@developer.marklogic.com
*Onderwerp:* Re: [MarkLogic Dev General] Failed over Security Database

Hi Geert,

Doing something more intelligent and automatic to improve HA is 
something that has come up internally as well, not just for Security, 
but also the other auxiliary / default databases. It's something we'd 
like to do better on.


There is a little bit of complexity around which hosts handle the 
replica forests in different deployment situations. In a private data 
center, it might not matter which host handles the replica forests 
since all the hosts are in the same rack. For AWS, you want to put 
them in a different availability zone.


We added a "zone" descriptor in ML7, and that seems like a good input 
for us to use in deciding which hosts should handle the replica 
forests. On AWS, it gets set automatically for you. In a private data 
center, you'd need to manually set it according to your topology (e.g. 
which rack it's in).


Ideally, I'd like these replica forests to just quietly happen when a 
cluster is initially formed, and users can adjust to their liking 
later if desired.


Wayne.


On 03/18/2014 02:04 PM, Geert Josten wrote:

Yes and no..

By default installation, all hosts start with their own Security
database (, and Modules, and Schemas, and Triggers, and Apps,
etc). But as soon as they join an existing cluster, they seem to
forget they have forests, and databases of their own.

The issue is of course that you cannot have two databases with the
same name, so there is a logical explanation for why it currently
is this way. But I am just wondering whether it is possible to
make it slightly smarter..

I am only thinking about the Security database, just because it is
such a vital database..

Cheers

*Van:*general-boun...@developer.marklogic.com
<mailto:general-boun...@developer.marklogic.com>
[mailto:general-boun...@developer.marklogic.com] *Namens *David Lee
*Verzonden:* dinsdag 18 maart 2014 21:31
*Aan:* MarkLogic Developer Discussion
*Onderwerp:* Re: [MarkLogic Dev General] Failed over Security Database

" Unless someone deliberately messed up, all hosts have a Security
database."

Unless you explicitly set it up differently, the Security database
that all hosts have

in a cluster is *the same database* (A single forest on the first
host installed in the cluster).

**

**

*From:*general-boun...@developer.marklogic.com
<mailto:general-boun...@developer.marklogic.com>
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of
*Geert Josten
*Sent:* Tuesday, March 18, 2014 4:19 PM
*To:* 'MarkLogic Developer Discussion'
*Subject:* Re: [MarkLogic Dev General] Failed over Security Database

Nice.. :)

I'm a little surprised though that Security needs explicit measure
to make it fail over in a cluster. It is pretty vital. Unless
someone deliberately messed up, all hosts have a Security
database. Couldn't it replicate that database across all other
hosts by default, just like all other server configs are being
shared automatically?

Cheers

*Van:*general-boun...@developer.marklogic.com
<mailto:general-boun...@developer.marklogic.com>
[mailto:general-boun...@developer.marklogic.com] *Namens *Danny
Sokolsky
*Verzonden:* dinsdag 18 maart 2014 20:07
*Aan:* MarkLogic Developer Discussion
*Onderwerp:* Re: [MarkLogic Dev General] Failed over Security Database

And as my colleague Dave pointed out to me, if you are using 7,
you can do this same task easier by adding a new forest (with a
public data directory) and retiring the old:

http://docs.marklogic.com/guide/admin/database-rebalancing#id_23094

Similarly, you can use tiered storage to migrate the forest.

-Danny

*From:*general-boun...@developer.marklogic.com
<mailto:general-boun...@developer.marklogic.com>
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of
*Danny Sokolsky
*Sent:* Monday, March 17, 2014 11:01 PM
*To:* MarkLogic Developer Discussion
*Subject:* Re: [MarkLogic Dev General] Failed over Security Database

In order to set up failover on a forest, it must be in a directory
other than the default directory.  The default directory
(/opt/Mar

Re: [MarkLogic Dev General] Failed over Security Database

2014-03-18 Thread Wayne Feick
I am trying to configure failover for the "Security" 
database and am getting the exception


Invalid input: Failover is not allowed for private forest: Security


Please advise
Abhishek Srivastav
Tata Consultancy Services
Mailto: abhishek5...@tcs.com <mailto:abhishek5...@tcs.com>
Website: http://www.tcs.com

Experience certainty. IT Services
Business Solutions
Consulting


=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you



___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com

This e-mail and any accompanying attachments are confidential. The information 
is intended solely for the use of the individual to whom it is addressed. Any 
review, disclosure, copying, distribution, or use of this e-mail communication 
by others is strictly prohibited. If you are not the intended recipient, please 
notify us immediately by returning this message to the sender and delete all 
copies. Thank you for your cooperation.

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Database status

2014-01-20 Thread Wayne Feick
If you're using local-disk replication and calling xdmp:forest-status() 
or xdmp:forest-counts() directly, you'll also want to make sure you're 
querying the appropriate replica forest when a failover happens.


   xdmp:forest-status(
     xdmp:forest-open-replica(
       xdmp:database-forests($database-id)))

For normal development APIs that make use of forest IDs (e.g. 
xdmp:document-insert(), cts:search()), you specify the master forest IDs 
and we'll map them under the covers as needed when failed over.
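A minimal illustration of that (a sketch; the trailing argument of cts:search() is the sequence of master forest IDs):

```xquery
(: pass the master forest IDs; MarkLogic opens the replica forests as needed :)
cts:search(fn:collection(), cts:word-query("example"),
  (), (), xdmp:database-forests(xdmp:database()))
```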


For administrative functions (xdmp:forest-status(), 
xdmp:forest-counts(), xdmp:forest-restart()) you specify the exact 
forest ID that you're accessing.


Wayne.


On 01/20/2014 04:44 AM, Geert J. wrote:


Hi Manoj,

You could do xdmp:estimate(collection()) if you are only interested in 
number of documents. If you want to know more, then I'm afraid you 
will need to accumulate that yourself, by iterating through 
xdmp:database-forests() (http://docs.marklogic.com/xdmp:database-forests).
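That accumulation might look like the following (a sketch, assuming the current database and that an index-based xdmp:estimate() over fn:collection() is enough for the document count):

```xquery
(: total documents via the indexes, plus per-forest status :)
let $total := xdmp:estimate(fn:collection())
let $statuses :=
  for $forest in xdmp:database-forests(xdmp:database())
  return xdmp:forest-status($forest)
return ($total, $statuses)
```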


Kind regards,

Geert

*Van:*general-boun...@developer.marklogic.com 
<mailto:general-boun...@developer.marklogic.com> 
[mailto:general-boun...@developer.marklogic.com 
<mailto:general-boun...@developer.marklogic.com>] *Namens *manoj 
viswanadha

*Verzonden:* maandag 20 januari 2014 13:31
*Aan:* MarkLogic Developer Discussion
*Onderwerp:* Re: [MarkLogic Dev General] Database status

Hi Geert,

Thanks for your quick reply. I have tried xdmp:forest-status() but I 
could not find the count of documents stored in a particular database.


Is there any other function which gives that info?

Thanks,

Manoj.

On Mon, Jan 20, 2014 at 5:23 PM, Geert J. <geert.jos...@dayon.nl> wrote:


Hi Manoj,

I think you are looking for 
xdmp:forest-status(): http://docs.marklogic.com/xdmp:forest-status


Kind regards,

Geert

*Van:*general-boun...@developer.marklogic.com 
<mailto:general-boun...@developer.marklogic.com> 
[mailto:general-boun...@developer.marklogic.com 
<mailto:general-boun...@developer.marklogic.com>] *Namens *manoj 
viswanadha

*Verzonden:* maandag 20 januari 2014 12:10
*Aan:* MarkLogic Developer Discussion
*Onderwerp:* [MarkLogic Dev General] Database status

Hi Guys,

Can anyone help me in getting the database status in MarkLogic?

I can view the status from the admin screen, where I can see document size, 
number of fragments, and other properties.


Is there any function in ML where I can get all the properties?

Thanks,

Manoj Viswanadha.


___
General mailing list
General@developer.marklogic.com <mailto:General@developer.marklogic.com>
http://developer.marklogic.com/mailman/listinfo/general



___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com

This e-mail and any accompanying attachments are confidential. The information 
is intended solely for the use of the individual to whom it is addressed. Any 
review, disclosure, copying, distribution, or use of this e-mail communication 
by others is strictly prohibited. If you are not the intended recipient, please 
notify us immediately by returning this message to the sender and delete all 
copies. Thank you for your cooperation.

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Document Level Authorization (Roles and Users)

2013-12-11 Thread Wayne Feick
The performance of large updates is improved in both ML6 (I forget 
exactly which maintenance release), and ML7. We pass around much less 
information for distributed deadlock detection than earlier releases.

Wayne.



On 12/11/2013 11:37 AM, Christopher Cieslinski wrote:
> We do have some experience with updating millions of documents, 
> though, we have had to do that numerous times. We don’t have 
> super-huge data, so the biggest updates we have run have only been on 
> around 60 million documents. We had to do the updates in batches of 
> around 1500 documents each (we just spawned tasks to the task server). 
> We had 16 threads configured that were processing this on an e-node (8 
> cores, 32GB of RAM, I think). This process took around 9 hours to 
> complete. We could have potentially had another e-node or two also 
> processing, as long as our d-nodes could have kept up, and that could 
> have cut the time down. 

-- 
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com

This e-mail and any accompanying attachments are confidential. The information 
is intended solely for the use of the individual to whom it is addressed. Any 
review, disclosure, copying, distribution, or use of this e-mail communication 
by others is strictly prohibited. If you are not the intended recipient, please 
notify us immediately by returning this message to the sender and delete all 
copies. Thank you for your cooperation.

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] dls version

2013-10-01 Thread Wayne Feick

Hi Rushabh,

Check out the documentation for dls:retention-rule(), and 
dls:retention-rule-insert().


Older versions are retained if there is a retention rule that matches 
them. Since you don't have any retention rules, nothing is being retained.


The docs have some simple examples that you can customize to meet your 
needs.
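For instance, a keep-everything rule along the lines of the documented examples (a sketch; adjust the rule name and document query to your needs):

```xquery
import module namespace dls = "http://marklogic.com/xdmp/dls"
  at "/MarkLogic/dls.xqy";

(: retain every version of every managed document :)
dls:retention-rule-insert(
  dls:retention-rule(
    "All Versions",   (: rule name :)
    (),               (: comments :)
    (),               (: no limit on number of versions :)
    (),               (: no duration limit :)
    "Retain all versions of all documents",
    cts:and-query(())))   (: matches all documents :)
```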


Wayne.



On 10/01/2013 12:31 AM, Metha, Rushabh wrote:

Hi All,

I am doing some testing on managed doc.
/test/abc.doc
/test/abc_doc_versions/1-abc.doc


When doing dls:document-update I observe that a new version gets 
created but the old version is getting deleted
(although I am able to see the URI of the old version, I cannot access 
it).


/test/abc.doc
/test/abc_doc_versions/1-abc.doc
/test/abc_doc_versions/2-abc.doc

Now I am not able to access /test/abc_doc_versions/1-abc.doc.

I am doing dls:document-update with retain-history as true and have no 
retention-rules as of now.

Could you please advise what the issue is?

Thanks,
Rushabh Mehta
Call:(W) +91 40 66674084





Re: [MarkLogic Dev General] Programmatically determining forest failover

2013-04-16 Thread Wayne Feick

Take a look at xdmp:forest-open-replica().

   http://docs.marklogic.com/xdmp:forest-open-replica

It's a lower-overhead way to map master forest ids to failed-over 
replica forest ids when a failover has occurred.
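A hedged sketch of using it to report failed-over forests in the current database:

```xquery
xquery version "1.0-ml";

(: For each forest of the current database, find the forest that is
   actually open; where they differ, the master has failed over. :)
for $f in xdmp:database-forests(xdmp:database())
let $open := xdmp:forest-open-replica($f)
where $open ne $f
return fn:concat(xdmp:forest-name($f), " -> ", xdmp:forest-name($open))
```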


Wayne.


On 04/16/2013 06:40 AM, Whitby, Rob wrote:

I use this function to check for failed-over forests

declare namespace forest = "http://marklogic.com/xdmp/status/forest";

declare private function active-replica-forests() as xs:unsignedLong*
{
  for $f in xdmp:forests()
  let $status := xdmp:forest-status($f)
  where
    $f eq xs:unsignedLong($status/forest:master-forest) and
    $f ne xs:unsignedLong($status/forest:current-master-forest)
  return xs:unsignedLong($status/forest:current-master-forest)
};



On 16 Apr 2013, at 14:24, Danny Sinang  wrote:


Hi,

Is there a way to programmatically determine if a forest has failed over to a 
replica ?

I see that xdmp:forest-status returns a current-master-forest element, but I 
haven't tested it yet in a failover situation.

Perhaps someone here has actual experience with it or an alternative means.

Regards,
Danny






Re: [MarkLogic Dev General] Speeding-up forest replica rebuilding

2013-04-03 Thread Wayne Feick
If you do a backup of the master and restore to the replica, bulk 
synchronization will go much faster since it compares the set of 
fragments in each forest and only replicates the differences.


Wayne.



On 04/02/2013 06:35 PM, Danny Sinang wrote:

Hello,

Is there an ML config setting I can tweak to speed up the rebuilding / 
catch-up of forest replicas?


Regards,
Danny





Re: [MarkLogic Dev General] Alerting system

2013-01-17 Thread Wayne Feick
Hi Gurbeer,

Both the reverse index and the XQuery alerting API do check for the 
license option, so there isn't much headway you can make without a 
proper license. If you talk to sales or support, they should be able to 
help you get unblocked so you can experiment with alerting.
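Once licensed, a hedged sketch of the XQuery alerting API (the config URI, names, user, and the simple word query are illustrative only; each statement runs separately so earlier inserts are visible to later ones):

```xquery
xquery version "1.0-ml";
import module namespace alert = "http://marklogic.com/xdmp/alert"
  at "/MarkLogic/alert.xqy";

(: Create the alerting configuration. :)
alert:config-insert(
  alert:make-config("my-alerts", "demo", "demo alerting config",
    <alert:options/>))
;
xquery version "1.0-ml";
import module namespace alert = "http://marklogic.com/xdmp/alert"
  at "/MarkLogic/alert.xqy";

(: Register the built-in log action, then a rule that fires on "CAR". :)
alert:action-insert("my-alerts", alert:make-log-action()),
alert:rule-insert("my-alerts",
  alert:make-rule("car-rule", "notify on CAR documents",
    xdmp:get-current-userid(),
    cts:word-query("CAR"),
    "log",              (: assumes the log action registered above :)
    <alert:options/>))
```

A CPF pipeline or trigger would then call alert:invoke-matching-actions() (or the equivalent) on document ingest.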

Wayne.


On 01/17/2013 09:42 AM, Michael Blakeley wrote:
> You don't need an extra license option for alerting, but you need an extra 
> license option for reverse-index. So the functionality will work, but the 
> performance may suffer. From reading your use-case I think you would benefit 
> from the reverse-index.
>
> However you could start development without it, storing your alerts as proper 
> cts:query documents in the database. You could then relicense the server and 
> activate the reverse-index whenever you have the budget and/or feel the need.
>
> -- Mike
>
> On 17 Jan 2013, at 09:17 , "Singh, Gurbeer"  
> wrote:
>
>> Hi,
>>
>> I want to use the alerting system.
>>
>> The requirement is: a user subscribes to the text "CAR". Whenever a
>> CAR-related document is submitted, the user gets a notification about
>> the new submission, with a link to it.
>>
>> How I plan to do this
>> The simple way is to create an XML file with the user's email ID and the
>> subscribed text. Whenever a document is ingested into the ML database,
>> fire a query with the subscribed text against the database; if the
>> result contains the ingested document, send an email to the user with a
>> link to the document.
>>
>> Issue
>> If 1000 users subscribe to 1000 different texts, running this operation
>> after every ingestion becomes a performance problem.
>>
>> I found that ML supports an alerting system. We have a licensed version
>> of MarkLogic (5.0), but the reverse query is still disabled. While
>> reading the Search User Guide section "Creating Alerting Applications",
>> I found this first point:
>> "Alerting applications require a valid alerting license key. The license
>> key is required to use the reverse index and to use the Alerting API."
>> Do we need an extra license to enable alerting?
>>
>> ~Gurbeer



Re: [MarkLogic Dev General] MarkLogic in AWS Cloud

2013-01-08 Thread Wayne Feick
I don't have a lot of experience with it, but EBS volumes have limited 
bandwidth. Some people have had success striping across multiple EBS volumes 
from within Linux instances. You could also look at the more recent guaranteed 
IOPs capability Amazon now offers.

Wayne

Ron Hitchens  wrote:


   Has anyone had any experience configuring and running non-trivial
MarkLogic clusters in the cloud?  Specifically Amazon EC2 VMs?

   I've got a test cluster of three nodes setup in AWS and am trying
to figure out the best configuration for it.  The system seems to be
quite slow at some things, but reasonably fast at others.  Bumping
the VM up to bigger instances (more ram, more cores) doesn't seem to
have a significant impact on speed or throughput.

   I suspect I/O bandwidth may be the culprit, but that's just a
hunch.  Does anyone have any experience with tuning EC2 VMs?

   The test environment I'm working with now is three m2.xlarge
instances (32gb RAM, 4 cores, "high" network speed).  The OS is
Windows (groan, I don't have a choice there).  Production cluster(s)
are likely to be similar, but probably six nodes or so.

   Any advice//war stories/dire warnings greatly appreciated.

   Thanks.

---
Ron Hitchens {mailto:r...@ronsoft.com}   Ronsoft Technologies
 +44 7879 358 212 (voice)  http://www.ronsoft.com
 +1 707 924 3878 (fax)  Bit Twiddling At Its Finest
"No amount of belief establishes any fact." -Unknown






Re: [MarkLogic Dev General] Option to not hit cache

2012-08-22 Thread Wayne Feick
No, there isn't. Your best bet would be to stop the server, clear OS caching 
from memory (e.g. unmount and remount the filesystem), and then restart the 
server.

Wayne

Danny Sinang  wrote:



Hi,

Is there an option in XQuery, cts:search, or search:search telling ML not to 
fetch results from the cache?

We need to test the performance of some queries, and we want to see their 
actual speed rather than results served from the cache.

Regards,
Danny



Re: [MarkLogic Dev General] Local-disk forest failover

2012-08-16 Thread Wayne Feick
A slight refinement: replicas are equivalent copies rather than exact copies. 
You'll have the same fragments, but they'll likely be organized differently 
into stands.

Wayne


Michael Blakeley  wrote:


No on both questions. Forest replication is just like RAID-1. You set up the 
mirrors, and they are exact copies.

This underscores the importance of monitoring. You want to find out about a 
forest failure immediately - not weeks later, when the replica fails and the 
whole database goes offline.

-- Mike

On 16 Aug 2012, at 08:00 , Danny Sinang wrote:

> Hi,
>
> When a local-disk failover happens, will the replica forest (which just 
> became the primary forest) need to reindex ?
>
> Also, will the surviving cluster node try to replicate the new forest to some 
> other cluster members  ?
>
> Regards,
> Danny
>
>


Re: [MarkLogic Dev General] XDMP-RMRECORD : Journal record too large

2012-08-10 Thread Wayne Feick
A change in the journal size doesn't take effect until you reboot the 
server.
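A hedged sketch of changing it through the admin API (the database name is an assumption, the value is in MB, and a server restart is still required before it takes effect):

```xquery
xquery version "1.0-ml";
import module namespace admin = "http://marklogic.com/xdmp/admin"
  at "/MarkLogic/admin.xqy";

(: Raise the journal size for one database to 1024MB. :)
let $config := admin:get-configuration()
let $config :=
  admin:database-set-journal-size($config, xdmp:database("Documents"), 1024)
return admin:save-configuration($config)
```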


On 08/10/2012 02:15 PM, Danny Sinang wrote:
I tried raising the journal size to 512 for the Fab, Documents, and my 
other database. I'm still getting:


 XDMP-RMRECORD: xdmp:invoke("/Xplana/queue.xqy", (fn:QName("", 
"flowName"), 
fn:doc("/validations/1371.xml")/validation/flowName/text(), 
fn:QName("", "fileBinary"), ...)) -- Journal record too large: 
journalSize=256MB, recordSize=402MB


Regards,
Danny

On Fri, Aug 10, 2012 at 4:31 PM, Danny Sokolsky 
mailto:danny.sokol...@marklogic.com>> 
wrote:


I think it means you are trying to insert something that is larger
than your journal size, which seems to be 256MB.  You can try
making your journals bigger.  That will take up more disk space
but allows larger records to fit.  You might try raising it
to 512 or 1024 MB.

-Danny

*From:*general-boun...@developer.marklogic.com
<mailto:general-boun...@developer.marklogic.com>
[mailto:general-boun...@developer.marklogic.com
<mailto:general-boun...@developer.marklogic.com>] *On Behalf Of
*Danny Sinang
*Sent:* Friday, August 10, 2012 1:25 PM
*To:* general
*Subject:* [MarkLogic Dev General] XDMP-RMRECORD : Journal record
too large

Hi,

I'm running into a "Journal record too large" error. See below.

Any idea what that means and what I should do ?

Regards,

Danny

TaskServer: *XDMP-RMRECORD*:
*xdmp:invoke*("/Xplana/queue.xqy", (fn:QName("", "flowName"),
fn:doc("/validations/1371.xml")/validation/flowName/text(),
fn:QName("", "fileBinary"), ...)) -- Journal record too large:
journalSize=256MB, recordSize=402MB








Re: [MarkLogic Dev General] Restore backup to newer version?

2012-07-24 Thread Wayne Feick
Yes, generally speaking we support restoring from an older release to a 
newer release.


On 07/24/2012 02:07 PM, Gary Larsen wrote:


Hi,

I need to move a 4.2 ML installation to a new server and will copy the 
database using a backup.  Does anyone know if a 4.2 backup can be 
restored to a 5.0 database?  Both would be configured the same.


Thanks,

Gary





Re: [MarkLogic Dev General] US-Europe Cluster Configuration

2012-07-19 Thread Wayne Feick
To be clear, you're planning on building a single cluster (i.e. all 
machines share the exact same configuration files, security database, 
etc.) that spans multiple physical locations on different continents? I 
had thought you were talking about database replication before, which 
was built to replicate between two different clusters (i.e. different 
config files, security databases, etc. for each cluster).


I would recommend engaging our professional services if you're thinking 
of going down that road. I don't know of anyone who has built such a 
beast, and there are real drawbacks I can think of off the top of my head.


1. Host failover will not happen unless a host is part of a surviving
   quorum of N/2+1 hosts. If you have two datacenters with the exact
   same number of hosts in each, neither can tell if it is part of a
   quorum so no failover would happen. You would need at least one host
   somewhere else, say in a 3rd datacenter, to achieve a quorum if a
   data center goes down.
2. Local-disk replication is synchronous, which I expect would greatly
   slow your update rate due to waiting around for a reply from a far
   away cluster. No prepare or commit operation would proceed until
   there has been a complete intercontinental round trip.
3. Queries would always be to whichever forest is currently open as a
   master. If you have a master plus two replicas, the ordering in the
   config file controls which forest would open first. So for example,
   ForestA-US-Master would first list ForestA-US-Replica and then
   ForestA-UK-Replica if you want the US replica to take over before
   the UK replica.
4. If you lose connectivity between continents, whichever data center
   hosts your security database would survive, but the other data
   center(s) would go offline due to the security database being
   unavailable.

I think you'd have a more resilient system if you have a separate 
cluster in each data center, and use database replication between them. 
You can replicate your security database from one cluster to the other 
if you like, and the replica cluster will query it at a slight lag.


Wayne.


On 07/19/2012 11:28 AM, Danny Sinang wrote:

Hi Wayne,

I was planning to use local-disk failover also between the US and EU 
servers.


Question is, since all US and EU servers will be in the same cluster, 
which failover host will automatically take over first ?


I'm hoping a US-based failover host will take over first if a US-based 
 forest goes down.


Regards,
Danny

On Thu, Jul 19, 2012 at 1:52 PM, Wayne Feick 
mailto:wayne.fe...@marklogic.com>> wrote:


With local-disk failover, replica forests will automatically take
over when hosts in the same cluster fail.

With database replication, there is no automatic failover when the
entire master cluster goes down. If you want automatic failover,
you have to build that up yourself since it involves more than
just your MarkLogic instance (i.e. a failed data center likely
means non-MarkLogic servers are also failing over).

The documentation discusses the issues you need to take into
consideration, but I'll provide a quick overview here.

Database replication is performed independently and asynchronously
between each master/replica forest pair (as opposed to
synchronously for local-disk failover that you are also using). As
a result, when an active master cluster fails you potentially have
a situation where one participant forest has replicated its
portion of a committed transaction but another participant forest
has not.

MarkLogic server runs queries at a particular point in time (i.e
commit timestamp), and when running against a replica database it
automatically determines the most current timestamp a query can
use based on how up to date each forest is. Queries see a
consistent replica database at a slight lag behind the master
database.

If you're just going to query the replica database when the master
cluster goes down, you don't need to do anything. The server will
automatically run queries at the most recent commit timestamp it
can until the master cluster becomes available again.

If you decide to do updates on the replica database, you run into
the issue of partially replicated transactions. You can use
xdmp:forest-status() to see how up to date each forest is, choose
the minimum commit timestamp across all forests, deconfigure
replication on the replica database, and do a rollback to that
timestamp. This ensures all forests in the database are consistent
to that point in time, but potentially drops a few transactions
that had been committed on the master but not yet fully replicated.

Later, when your failed master database comes back up, you
configure it as a replica of the database you promoted to master
and just the differences will be repl

Re: [MarkLogic Dev General] US-Europe Cluster Configuration

2012-07-19 Thread Wayne Feick
With local-disk failover, replica forests will automatically take over 
when hosts in the same cluster fail.


With database replication, there is no automatic failover when the 
entire master cluster goes down. If you want automatic failover, you 
have to build that up yourself since it involves more than just your 
MarkLogic instance (i.e. a failed data center likely means non-MarkLogic 
servers are also failing over).


The documentation discusses the issues you need to take into 
consideration, but I'll provide a quick overview here.


Database replication is performed independently and asynchronously 
between each master/replica forest pair (as opposed to synchronously for 
local-disk failover that you are also using). As a result, when an 
active master cluster fails you potentially have a situation where one 
participant forest has replicated its portion of a committed transaction 
but another participant forest has not.


MarkLogic server runs queries at a particular point in time (i.e commit 
timestamp), and when running against a replica database it automatically 
determines the most current timestamp a query can use based on how up to 
date each forest is. Queries see a consistent replica database at a 
slight lag behind the master database.


If you're just going to query the replica database when the master 
cluster goes down, you don't need to do anything. The server will 
automatically run queries at the most recent commit timestamp it can 
until the master cluster becomes available again.


If you decide to do updates on the replica database, you run into the 
issue of partially replicated transactions. You can use 
xdmp:forest-status() to see how up to date each forest is, choose the 
minimum commit timestamp across all forests, deconfigure replication on 
the replica database, and do a rollback to that timestamp. This ensures 
all forests in the database are consistent to that point in time, but 
potentially drops a few transactions that had been committed on the 
master but not yet fully replicated.


Later, when your failed master database comes back up, you configure it 
as a replica of the database you promoted to master and just the 
differences will be replicated back.
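A hedged sketch of that promotion step, run against the replica database after deconfiguring replication (this assumes the forest-status `nonblocking-timestamp` element reflects how up to date each forest is; rolled-back transactions are lost, as described above):

```xquery
xquery version "1.0-ml";
declare namespace forest = "http://marklogic.com/xdmp/status/forest";

(: Roll every forest of the current database back to the minimum
   timestamp they have all reached, so the database is consistent. :)
let $forests := xdmp:database-forests(xdmp:database())
let $min-ts :=
  fn:min(
    for $f in $forests
    return
      xs:unsignedLong(xdmp:forest-status($f)/forest:nonblocking-timestamp))
return xdmp:forest-rollback($forests, $min-ts)
```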


Wayne.




On 07/19/2012 09:54 AM, Danny Sinang wrote:

Hi Wayne,

If I am replicating the forests of US1 to US2 and EU2, what happens 
when US1 goes down ?


Which server will take over and server the US1 forests ?  US2 or EU2 ?

Regards,
Danny

On Wed, Jul 18, 2012 at 7:16 PM, Wayne Feick 
mailto:wayne.fe...@marklogic.com>> wrote:


Yes, this should work fine.

You'll have a master US-Database in the US, and a master
UK-Database in the UK, with each replicating to the site.
Local-disk failover works fine at both ends for both master and
replica databases.

Extending to a third site should also be fine, just keep in mind
that a network brown out to either of the replica sites will
degrade your foreground performance in order to enforce the lag limit.

Wayne.


On 07/18/2012 12:40 PM, Danny Sinang wrote:

Hi,

We currently have a 3-node ML cluster here in the US (let's call
them US1, US2, US3), with forest replication and failover enabled.

Should we need to expand to Europe, would the setup below achieve :

1. Traffic Localization (during normal operations)
2. Continued ML availability (in the event we ever need to
bring one cluster down for hardware or software upgrades / fixes)


?

*Draft EU Expansion Plan*

1. Set up another 3-node ML cluster in Europe (EU1, EU2,
EU3), with forest replication and failover enabled.

2. Also replicate the forests of the US cluster to the EU
clusters and vice versa

3. Direct US customers (via some geo DNS) to US webservers
which use US1, US2, and US3
 - This would save US customer data to the forests on
US1, US2, and US3

4. Direct EU customers to EU webservers which use EU1, EU2,
and EU3
 - This would save EU customer data to the forests on
EU1, EU2, and EU3

5. Should the US ML Cluster ever go down, point the US
websevers to the EU ML Cluster
 - I'm hoping this would activate and make available
the data from US1, US2, and US3 on EU1, EU2, and EU3. Am I
correct ?
 - Same thing should happen the other way around
(i.e. if EU ML cluster goes down, point EU webservers to US
ML cluster)


Do you think this would work ?

Is there a better way to achieve our goals ?

How do we extend this model should the time come for us to expand
to Asia ?

Regards,
    Danny






Re: [MarkLogic Dev General] US-Europe Cluster Configuration

2012-07-18 Thread Wayne Feick

Yes, this should work fine.

You'll have a master US-Database in the US, and a master UK-Database in 
the UK, with each replicating to the other site. Local-disk failover works 
fine at both ends for both master and replica databases.


Extending to a third site should also be fine, just keep in mind that a 
network brown out to either of the replica sites will degrade your 
foreground performance in order to enforce the lag limit.


Wayne.


On 07/18/2012 12:40 PM, Danny Sinang wrote:

Hi,

We currently have a 3-node ML cluster here in the US (let's call them 
US1, US2, US3), with forest replication and failover enabled.


Should we need to expand to Europe, would the setup below achieve :

1. Traffic Localization (during normal operations)
2. Continued ML availability (in the event we ever need to bring
one cluster down for hardware or software upgrades / fixes)


?

*Draft EU Expansion Plan*

1. Set up another 3-node ML cluster in Europe (EU1, EU2, EU3),
with forest replication and failover enabled.

2. Also replicate the forests of the US cluster to the EU clusters
and vice versa

3. Direct US customers (via some geo DNS) to US webservers which
use US1, US2, and US3
 - This would save US customer data to the forests on US1,
US2, and US3

4. Direct EU customers to EU webservers which use EU1, EU2, and EU3
 - This would save EU customer data to the forests on EU1,
EU2, and EU3

5. Should the US ML Cluster ever go down, point the US websevers
to the EU ML Cluster
 - I'm hoping this would activate and make available the
data from US1, US2, and US3 on EU1, EU2, and EU3. Am I correct ?
 - Same thing should happen the other way around (i.e. if
EU ML cluster goes down, point EU webservers to US ML cluster)


Do you think this would work ?

Is there a better way to achieve our goals ?

How do we extend this model should the time come for us to expand to 
Asia ?


Regards,
Danny







Re: [MarkLogic Dev General] Possible to coordinate an update across DBs to commit at the same time?

2012-04-18 Thread Wayne Feick

Hi Ryan,

First off, I'll assume that A, B, and Modules can't be combined into a 
single database since that would be the simplest answer.


Are you using 5.0? You might try using XA from Java to do the updates so 
that you can commit them as a single distributed transaction. They won't 
all get the same commit timestamp, but they'll be very close together.


Re: running at a particular timestamp, that only works for queries, not 
updates.


Re: swapping around the modules root, the config changes aren't 
synchronous across the cluster so the change will take effect at 
slightly different points in time on each host. I'm not sure if that 
would be an issue.


You could use a URL rewriter to detect an in-progress upgrade and stall 
until it's complete. You could use the normal URI locking system to 
enforce the stall by having the rewriter run as an update and take a 
read lock on some upgrade document. You'd then just need to have the 
upgrade take a write lock on the document so that any rewriter instances 
stall until the upgrade is complete.
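A hedged sketch of the rewriter side of that locking scheme (the lock-document URI is an assumption; reading a document inside an update transaction takes a read lock on its URI):

```xquery
xquery version "1.0-ml";

(: Force the rewriter to run as an update so URI locking applies. :)
declare option xdmp:update "true";

(: Reading the lock document takes a read lock, which blocks while an
   upgrade transaction holds the write lock on the same URI. :)
let $_ := fn:doc("/admin/upgrade-lock.xml")
return xdmp:get-request-url()   (: identity rewrite once unblocked :)
```

The upgrade transaction would call `xdmp:lock-for-update("/admin/upgrade-lock.xml")` before touching the databases, so in-flight requests stall until it commits.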


This approach wouldn't work if you're using triggers, CPF, eval/invoke 
in a separate transaction, or spawn.


Wayne.


On 04/18/2012 07:36 PM, seme...@hotmail.com wrote:
Say I have three databases: A, B, and Modules. I want to be able to 
deploy changes to all three and have the changes all commit (or become 
active) at the same time.


I may have new config files that only work with the new code going 
into Modules so I don't want to put the config files in A and then 
afterwards insert the new code in Modules because there may be a gap 
in time where the config files and the code are not the right versions 
for each other. I don't want to have any downtime so stopping the 
server is not really any option. How can I insert my new files across 
the DBs and have the changes take place simultaneously?


Perhaps I can have the app run at a particular timestamp and then 
insert all the new files and then remove the run-at timestamp. I'm not 
sure this works for the Modules DB though.


Another option may be to put version info in the XML files in database 
A, then write the new code in the Modules DB under a new directory and 
write the code to only use the files in A with the new version, then 
change the modules root of the app server to the newer code dir (thus 
automatically only using the new version of files in A).


I'll take "within a few microseconds" if "simultaneous" isn't really 
possible. Anyone know how to do this?


Thanks!

-Ryan




Re: [MarkLogic Dev General] Secure Application-to-Application communication

2012-03-18 Thread Wayne Feick

Hi Ryan,

It's fairly easy to do what you want (at least from the point of view of 
the guy who did all the SSL work in our server ;-) ).


1. You'll need to have the certificate authority's certificate in the 
security database so that it is trusted. You can import it via the Admin 
UI at Security -> Certificate Authorities -> Import. You can also use 
pki:insert-trusted-certificates() running against the security database.
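
A sketch of the pki route, evaluated against the security database. The certificate file path is a placeholder, and passing the PEM text as a plain string to pki:insert-trusted-certificates() is an assumption:

```xquery
xquery version "1.0-ml";
import module namespace pki = "http://marklogic.com/xdmp/pki"
    at "/MarkLogic/pki.xqy";

(: read the CA certificate as text; the path is a placeholder :)
let $ca-pem := fn:string(
  xdmp:document-get("/space/certs/my-ca.pem",
    <options xmlns="xdmp:document-get"><format>text</format></options>))
return
  (: this must run against the security database; if the current app
     server targets another database, wrap the call in xdmp:eval with
     a <database> option instead :)
  pki:insert-trusted-certificates($ca-pem)
```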


2. Configure the app server to use SSL. You said you already can do 
this, so I'll not say too much about this other than the important bit 
for this use case.


You're already familiar with the selection of an SSL certificate 
template at the bottom of the app server configuration page in the Admin 
UI. Below that there is true/false configuration for "ssl require client 
certificate". Make sure it's set to true. Just below that is "ssl client 
certificate authorities". Click on "Show" and you'll see all the trusted 
certificate authority organizations. Find the one you just added, click 
on it, and select the authority you added.


This causes the app server to request a client certificate signed by the 
selected authority (you can select multiple authorities if you wish) and 
the "ssl require client certificate = true" setting means it will reject 
any requests that don't provide a client certificate.


You can also use the admin API for this: 
admin:appserver-set-ssl-require-client-certificate(), 
admin:appserver-set-ssl-client-certificate-authorities(), using 
information from pki:get-trusted-certificate-ids() and 
pki:get-certificates().
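
A hedged sketch of that Admin API route. The app server name "my-ssl-app" is hypothetical, and the certificate-authority id is a placeholder you would look up via pki:get-trusted-certificate-ids() and pki:get-certificates():

```xquery
xquery version "1.0-ml";
import module namespace admin = "http://marklogic.com/xdmp/admin"
    at "/MarkLogic/admin.xqy";

let $config := admin:get-configuration()
let $appserver := admin:appserver-get-id($config, xdmp:group(), "my-ssl-app")
(: placeholder id; obtain the real one from the pki functions above :)
let $ca-id := xs:unsignedLong("1234567890123456789")
let $config :=
  admin:appserver-set-ssl-require-client-certificate($config, $appserver, fn:true())
let $config :=
  admin:appserver-set-ssl-client-certificate-authorities($config, $appserver, $ca-id)
return admin:save-configuration($config)
```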


3. The XQuery client requests need to specify their client certificates. 
This is done through the options node described in the xdmp:http-get() 
documentation.


   xdmp:http-get(
      "https://srvr.acme.com/",
      <options xmlns="xdmp:http">
        <client-cert>{$pem-encoded-client-certificate}</client-cert>
        <client-key>{$pem-encoded-client-certificate-private-key}</client-key>
        <pass-phrase>{$pass-phrase-for-client-key-if-needed}</pass-phrase>
      </options>)

4. On the server side, you may want to inspect the client certificate 
and validate it. I usually do this in a URL rewriter so it is applied to 
every page.


   let $pem-encoded-client-certificate as xs:string? :=
      xdmp:get-request-client-certificate()
   let $xml-form-of-client-certificate :=
      xdmp:x509-certificate-extract($pem-encoded-client-certificate)

5. If you revoke any certificates for the authority, you can use 
pki:insert-certificate-revocation-list() against your security database. 
It stores either a PEM or DER encoded certificate revocation list into 
the database corresponding to the URL.


We've also used this pattern to automatically log people in based on a 
client certificate; you just need to match information from the 
certificate to a user in the security database.
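
As a sketch of that auto-login pattern in a URL rewriter: the path to commonName in the extracted certificate, and using the CN directly as a MarkLogic user name, are assumptions for illustration:

```xquery
xquery version "1.0-ml";
let $pem := xdmp:get-request-client-certificate()
let $cn :=
  if (fn:exists($pem))
  then fn:string((xdmp:x509-certificate-extract($pem)//*:commonName)[1])
  else ""
return
  (: xdmp:login with no password is a privileged operation, so this
     rewriter would need to run amped appropriately :)
  if ($cn ne "" and xdmp:login($cn))
  then xdmp:get-request-url()  (: pass the originally requested URL through :)
  else fn:error((), "Unauthorized: no user matches the client certificate")
```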


Let me know if you have any other questions.

Wayne.


On 03/17/2012 05:42 PM, seme...@hotmail.com wrote:
I am looking to set up web services on an app server in one MarkLogic 
cluster that will be called by another app server in a different 
MarkLogic cluster. I would like to set it up so that the servers are 
configured to only accept connections from each other.


The connections will not be ad hoc so I would prefer to install certs 
or public keys for all apps on all the clusters. I would rather not 
have to log into the remote cluster all the time, but let the servers 
trust the connections to the other servers, and let each server handle 
its own user authentication, yet have trusted connections to 
remote servers.


The communication will be going "out in the wild" so I can't secure 
the networking connection (as with a VPN) between the servers so I'll 
need to use SSL for the protocol. This does not need to be an 
extremely fast connection because it's more of a command and control 
scenario, and each cluster will operate independently from each other 
and just periodically pass data and commands back and forth. The web 
service is what exposes the interaction between them, and not anything 
lower level like data replication.


So my questions are:

1. How do I set up one App Server (listening for web service requests) 
to only accept requests from previously configured remote clients and 
which are using the correct certs\keys?


2. How do I code the client side call in XQuery to pass the 
appropriate certs\key info to the other server and reject the 
connection if the server has the wrong certs\keys?


I know how to set up SSL on a server when a browser is involved, but 
I'm not really clear on how to do this when another MarkLogic app server 
is involved as the client. I tried setting something up, but both the 
server and client seem to accept any connection and any certs, so I 
don't think I'm doing it securely enough.


thanks,
-Ryan


--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone:

Re: [MarkLogic Dev General] Is this a security problem? Chrome thinks so

2012-02-29 Thread Wayne Feick

I should probably clarify what I meant by "supported browsers".

I was thinking in terms of what our Admin UI supports (i.e. what we test 
that particular application server with, and the context where we 
encountered this bug).


In the context of any particular customer's web app, any browser or 
program that can speak HTTP is supported with our server.


Beyond that, it's up to a customer to ensure their particular app works 
appropriately with the HTML features and browsers they are using.


Wayne.


On 02/29/2012 01:12 PM, Steve Spigarelli wrote:

We're running ML 4.2-4 where we're having this problem manifest.

On Wed, Feb 29, 2012 at 12:50 PM, Wayne Feick 
<wayne.fe...@marklogic.com> wrote:


Hi Steve,

Technically, Chrome is not on our list of supported browsers
although some of us do use it regularly so it tends to work. We
encountered this issue recently as well in our own testing and it
will be fixed shortly when 5.0-3 is released. Other browsers seem
to be tolerant of multiple "Location" headers if they have the
same value.

The bug should also be fixed in our next 4.2 maintenance release,
although I don't have any information on when that will happen.

Which release are you running?

Wayne.



On 02/29/2012 11:26 AM, Steve Spigarelli wrote:

Executing this code using the Chrome web browser I get:


  "Duplicate headers received from server"


xquery version "1.0-ml";

xdmp:redirect-response("http://example.com"),
xdmp:redirect-response("http://example.com")


Shouldn't this be something that the MarkLogic server can keep
    from happening? Or is this useful for something as well?

Thanks
Steve


-- 
Wayne Feick

Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com





--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com



Re: [MarkLogic Dev General] Is this a security problem? Chrome thinks so

2012-02-29 Thread Wayne Feick

Hi Steve,

Technically, Chrome is not on our list of supported browsers although 
some of us do use it regularly so it tends to work. We encountered this 
issue recently as well in our own testing and it will be fixed shortly 
when 5.0-3 is released. Other browsers seem to be tolerant of multiple 
"Location" headers if they have the same value.


The bug should also be fixed in our next 4.2 maintenance release, 
although I don't have any information on when that will happen.


Which release are you running?

Wayne.


On 02/29/2012 11:26 AM, Steve Spigarelli wrote:

Executing this code using the Chrome web browser I get:


  "Duplicate headers received from server"


xquery version "1.0-ml";

xdmp:redirect-response("http://example.com"),
xdmp:redirect-response("http://example.com")


Shouldn't this be something that the MarkLogic server can keep from 
happening? Or is this useful for something as well?


Thanks
Steve


--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com



Re: [MarkLogic Dev General] Adding your own certifications to ML

2011-12-20 Thread Wayne Feick

Hi David,

It's not generally a supported operation yet, although all the 
certificate state is stored in the security database if you know what 
you're doing. What you want is not an impossible task, but it's best to 
open a support ticket to get some guidance.


Wayne.


On 12/20/2011 10:49 AM, Lee, David wrote:


Thanks, but that's not it.

I wanted to import into ML server a certificate which was generated by 
a CSR which was NOT generated by ML.


Doesn't appear this is possible/supported.

So I just ended up doing the right thing and generated the CSR and 
sent that to IT to sign.


-David



David A. Lee

Senior Principal Software Engineer

Epocrates, Inc.

d...@epocrates.com

812-482-5224

*From:*general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of *Danny 
Sokolsky

*Sent:* Tuesday, December 20, 2011 1:37 PM
*To:* General MarkLogic Developer Discussion
*Subject:* Re: [MarkLogic Dev General] Adding your own certifications 
to ML


Hi David,

I think this is the procedure:

http://docs.marklogic.com/5.0doc/docapp.xqy#display.xqy?fname=http://pubs/5.0doc/xml/admin/SSL.xml%2321259

-Danny

*From:*general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of *Lee, David

*Sent:* Monday, December 19, 2011 2:50 PM
*To:* General Mark Logic Developer Discussion
*Subject:* [MarkLogic Dev General] Adding your own certifications to ML

I'm trying to add a certificate provided by IT but not created by 
using the "Template" mechanism.


That is, I didn't (silly me) generate a CSR within ML but just asked IT 
for the certificate.

Now I can't see how to import it. Any suggestions?

-David



David A. Lee

Senior Principal Software Engineer

Epocrates, Inc.

d...@epocrates.com

812-482-5224



--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com



Re: [MarkLogic Dev General] Possible to send custom HTTP method from MarkLogic?

2011-11-17 Thread Wayne Feick

No, there is not.

On 11/17/2011 09:40 AM, seme...@hotmail.com wrote:
Some cache servers like Squid and Varnish allow you to send a cache 
purge command to the server using a non-standard HTTP method called 
PURGE. I see in the MarkLogic docs that there are http methods in xdmp 
for GET, POST, PUT, HEAD, DELETE and OPTIONS but nothing for PURGE and 
nothing for a custom method that I can see. Is there a way to send 
custom HTTP methods from MarkLogic?


thanks,
Ryan


--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com



Re: [MarkLogic Dev General] how to purge the task server queue?

2011-10-15 Thread Wayne Feick
Oh, nevermind, wasn't following the thread closely enough and responded 
before applying coffee to my brain.


On 10/15/2011 08:27 AM, Wayne Feick wrote:
You could use a scheduled task instead of a task that keeps respawning 
itself, and that'll solve the restart problem.


With the respawn approach, you'll eventually run into a limit on the 
number of respawns (protection against runaway code).



On 10/15/2011 08:22 AM, Geert Josten wrote:


Nice code, thnx!

I couldn’t resist the temptation of writing some code wrapping the 
task server to create alternative queue management. It may well be that 
MarkLogic Server 5.0 will contain improvements itself, but I'm not sure 
it will satisfy all needs. This could be a basis for adding what is yet 
lacking. The extensions of MarkLogic Server pretty much provide 
everything you need to make this work, as my code shows. The biggest 
problem is imitating some kind of cron job. I did so with a module that 
spawns itself after a sleep. It does the trick, but does not guard 
against the case when it stops. I added a manager interface that allows 
you to monitor it, though. It is most suited for background processing 
anyway, I think, since it only uses idle threads.


I put my work on GitHub: https://github.com/grtjn/ml-queue

Kind regards,

Geert

*From:* general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of* Christopher 
Cieslinski

*Sent:* Thursday, October 13, 2011 20:09
*To:* General MarkLogic Developer Discussion
*Subject:* Re: [MarkLogic Dev General] how to purge the task server 
queue?


We have run something like this to "brute force" through clearing the 
requests as they get initiated from the queue.  Far from perfect, but 
it "gets the job done," so to speak.


(: # Task Server Job Clear # :)
xquery version "1.0-ml";

declare namespace ss = "http://marklogic.com/xdmp/status/server";
declare namespace hs = "http://marklogic.com/xdmp/status/host";


let $taskServerId as xs:unsignedLong :=
  xdmp:host-status(xdmp:host())//hs:task-server-id
return
  (: Run for around 9 minutes (if there are any queued tasks left) -
     hopefully all are cleared by then. If not, run the script again. :)
  for $i as xs:integer in (1 to 5400)
  return
    for $requestId as xs:unsignedLong in
        xdmp:server-status(xdmp:host(), $taskServerId)//ss:request-id/text()
    return (
      try {
        xdmp:request-cancel(xdmp:host(), $taskServerId, $requestId)
      } catch ($e) {
        xdmp:log("Failed to cancel requests, retrying...")
      },
      xdmp:sleep(100)
    )

Chris



*From*: Mike Sokolov
*Sent*: Thursday, October 13, 2011 12:06 PM
*To*: General MarkLogic Developer Discussion
*Subject*: Re: [MarkLogic Dev General] how to purge the task server 
queue?


The "replace my script with one that does nothing, and then after things
have settled down again, put it back the way it was before" hack is so
useful that it deserves to be enshrined as a function in the library
with a name of its own. Is there some way that behavior could be
codified meaningfully?

For example: xdmp:interrupt-module($module-name) that prevents that
module from being started until some condition is met (the task server
is cleared? the task server has nothing queued with that name?)

-Mike
  
On 10/11/2011 12:19 PM, Geert Josten wrote:

Hi,
  
I think what Jakob has in mind is checking some queue state as soon as a task from the queue gets initiated. At that point the task that is being initiated from the queue could decide to continue or cancel. Rather similar to replacing the module with this 'no-op' suggestion by Damon..
  
But I thought someone on this list mentioned pondering a separate registration of tasks, and firing them one by one, based on priority etc. Most of it should be feasible with one self-reactivating thread that spawns others, or perhaps using a cron.
  
Kind regards,

Geert
  
-----Original Message-----

From: general-boun...@developer.marklogic.com  
[mailto:general-boun...@developer.marklogic.com] On Behalf Of Danny Sokolsky
Sent: Tuesday, October 11, 2011 17:48
To: General MarkLogic Developer Discussion
Subject: Re: [MarkLogic Dev General] how to purge the task server queue?
  
Hi Jakob,
  
request-cancel would not work to clear the queue because the requests have not yet started and therefore do not have an ID. You need to restart the node that has the task server to clear the queue (or do something clever like Damon suggested to make it clear Real Fast).
  
There are some RFEs around providing more control of the task queue, and I put your comments there.

Re: [MarkLogic Dev General] how to purge the task server queue?

2011-10-15 Thread Wayne Feick
cessing after the restart, so careful of the whack-a-mole issue.
  
-Danny
  
-Original Message-

From:general-boun...@developer.marklogic.com  
<mailto:general-boun...@developer.marklogic.com>  
[mailto:general-boun...@developer.marklogic.com] On Behalf Of Jakob Fix
Sent: Tuesday, October 11, 2011 7:53 AM
To: General MarkLogic Developer Discussion
Subject: Re: [MarkLogic Dev General] how to purge the task server queue?
  
Thanks Damon,
  
we restarted the server. When you say "replace the invoked module with

a no-op module" this will not remove tasks currently in the queue,
correct?
  
On a related note, could one write some code around

xdmp:request-cancel() to purge the queue?
In any case, I think it would be a useful addition to the api to be
able to better control the task server.
  
Many thanks for your answer,

Jakob.
  
  
  
On Tue, Oct 11, 2011 at 15:35, Damon Feldman

  <mailto:damon.feld...@marklogic.com>   wrote:



Jakob,

  


Yes, restarting the server will clear the task queue. Also, you can replace the invoked module 
with a no-op module (xdmp:log("skipping queued task") or similar) if no other code is 
calling it. The queue will then "drain" quickly.

  


I don't believe there is a programmatic way to remove queued tasks.

  


Yours,

Damon

  


-Original Message-

From:general-boun...@developer.marklogic.com  
<mailto:general-boun...@developer.marklogic.com>  
[mailto:general-boun...@developer.marklogic.com] On Behalf Of Jakob Fix

Sent: Tuesday, October 11, 2011 8:55 AM

To: General Mark Logic Developer Discussion

Subject: [MarkLogic Dev General] how to purge the task server queue?

  


Hi, we have a long waiting list on our task server and would like to

remove them programmatically. It doesn't look like there is such an

option in the admin: api, or is there? Also, would restarting the

server remove the waiting tasks from the queue?

  


thanks in advance,

Jakob.




NOTICE: This email message is for the sole use of the intended 
recipient(s) and may contain confidential and privileged information. 
Any unauthorized review, use, disclosure or distribution is 
prohibited. If you are not the intended recipient, please contact the 
sender by reply email and destroy all copies of the original message.




--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com



Re: [MarkLogic Dev General] Replication

2011-09-27 Thread Wayne Feick

That's a good question. :-)

I don't know of any customer who has a single cluster that spans multiple 
physical locations, and we don't QA that scenario either. As a result, I 
can't recommend that you run a production environment in that configuration.


If you were to experiment with this sort of topology, I know of the 
following issues.


1. A gigabit internet connection is less aggregate bandwidth than what 
is available when a number of hosts with gigabit interfaces are 
connected to a network switch. The switch has internal hardware that 
allows simultaneous communication between different machines (e.g. A->B 
could see 1 Gb/s at the same time as C->D sees an additional 1 Gb/s). This 
may be obvious to you, but may not be to everyone reading this list.


2. The latency and bandwidth between data centers would affect your 
ingest performance because each journal frame would require a round trip 
across the internet before the master's transaction could continue.


3. For failover to occur, you must have a quorum of computers which is 
defined as "N/2 + 1". This is what protects you against a network 
partitioning; if you can't communicate with more than half the hosts in 
a cluster, you can't tell if you're on the losing side of a network 
partition. If you were to try to put N hosts in one data center and N 
hosts in another data center, neither one would be able to determine 
that it is the surviving data center in the event of a network problem. 
If you were to try to create a cluster that spans multiple data centers, 
you'd want at least one more machine in a 3rd location that the two data 
centers would use to break the tie.


Wayne.


On 09/27/2011 06:09 AM, Robert Nam wrote:


When you say "the same data center"... do you think that servers in 
separate data centers with a gigabit connection satisfy this 
suggestion?


thank you for your help,


-Robert Nam



If they are in the same data center, it would be simpler to put them 
in the same cluster and use local-disk failover to maintain the 
replica copy. You'd need at least 3 machines, but both failover 
between the master/replica and subsequent recovery would be automatic.


Wayne.

On 09/23/2011 12:36 PM, Michael Blakeley wrote:

I assume you are asking about flexible replication?

If so, I think you'll be fine up until that last sentence. Almost all 
flex-rep state is kept on the master, so the state is lost if the 
master is lost. From the ex-slave new master's point of view, the 
ex-master will be a brand new slave with no history. All existing 
assets will be replicated.
-- Mike

On 23 Sep 2011, at 11:50 , Robert Nam wrote:

Summary: We have a two MarkLogic server configuration. One is 
active/master replicating to the slave until it is made “offline” 
because of a failure or maintenance. In this situation, the slave 
becomes master until it goes offline. Are there any issues with this 
configuration? What is the best way to configure the replication 
between the two servers?

Description: Our configuration consists of two MarkLogic servers. Only 
one server at any given time will be active (Master) and connected to 
the Internet by a load balancer. The Master server will replicate its 
database/s to the second server that is accessible by the Master but 
not connected to the Internet.

If the Master server is inoperable, the second server will be promoted 
to Master and connected to the Internet. Once the original Master 
server is functional again, it will be considered the Slave and the 
current Master server will replicate its database/s to this server. The 
desired result of the replication is for the new Master to only 
update/copy on the slave… new assets that were modified on this server.

-Robert





--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayn...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com



Re: [MarkLogic Dev General] Replication

2011-09-23 Thread Wayne Feick
If they are in the same data center, it would be simpler to put them in 
the same cluster and use local-disk failover to maintain the replica 
copy. You'd need at least 3 machines, but both failover between the 
master/replica and subsequent recovery would be automatic.

Wayne.


On 09/23/2011 12:36 PM, Michael Blakeley wrote:
> I assume you are asking about flexible replication?
>
> If so, I think you'll be fine up until that last sentence. Almost all 
> flex-rep state is kept on the master, so the state is lost if the master is 
> lost. From the ex-slave new master's point of view, the ex-master will be a 
> brand new slave with no history. All existing assets will be replicated.
>
> -- Mike
>
> On 23 Sep 2011, at 11:50 , Robert Nam wrote:
>
>> Summary:
>> We have a two MarkLogic server configuration.  One is active/master 
>> replicating to the slave until it is made “offline” because of a failure or 
>> maintenance.  In this situation, the slave becomes master until it goes 
>> offline. Are there any issues with this configuration?  What is the best way 
>> to configure the replication between the two servers?
>>
>> Description:
>> Our configuration consists of two MarkLogic servers.  Only one server at any 
>> given time will be active (Master) and connected to the Internet by a load 
>> balancer.  The Master server will replicate its database/s to the second 
>> server that is accessible by the Master but not connected to the Internet.
>>
>> If the Master server is inoperable, the second server will be promoted to 
>> Master and connected to the Internet.   Once the original Master server is 
>> functional again, it will be considered the Slave and the current Master 
>> server will replicate its database/s to this server.  The desired result of 
>> the replication is for the new Master to only update/copy on the slave… new 
>> assets that were modified on this server.
>>
>> -Robert
>> ___
>> General mailing list
>> General@developer.marklogic.com
>> http://developer.marklogic.com/mailman/listinfo/general
> ___
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general

-- 
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com



Re: [MarkLogic Dev General] Simple way to decide if a user has a role

2011-09-06 Thread Wayne Feick
You might also look at xdmp:has-privilege() as a way to test for a 
particular execute privilege that you could then assign to the role in 
question.
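
For example, with a hypothetical execute privilege granted only via the role of interest (the privilege URI here is made up; you would create it and assign it to the role):

```xquery
xquery version "1.0-ml";
(: returns true only if the current user holds the privilege,
   i.e. effectively has the role it is assigned to :)
if (xdmp:has-privilege("http://example.com/privileges/can-edit", "execute"))
then "show the edit controls"
else "hide the edit controls"
```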

On 09/06/2011 01:34 PM, Danny Sokolsky wrote:
> Hi Tim,
>
> Why don't you want to create an amp for this?  I suspect any solution you 
> come up with will require privileged operations, and will need an amp.  This 
> is what amps are for: to allow a privileged operation in the context of your 
> application, where you the application developer knows it is safe for people 
> to use this privilege in this context.
>
> -Danny
>
> -Original Message-
> From: general-boun...@developer.marklogic.com 
> [mailto:general-boun...@developer.marklogic.com] On Behalf Of Tim Finney
> Sent: Tuesday, September 06, 2011 1:19 PM
> To: general@developer.marklogic.com
> Subject: [MarkLogic Dev General] Simple way to decide if a user has a role
>
> Hi Everyone,
>
> Is there a simple way to determine whether a user has a particular role
> name? I would like to have a function that I give a role name and which
> returns true if the current user has the role and false otherwise. I
> want this function so that I can make a user interface change available
> options depending on the current user's roles.
>
> I would like the function to work even if the current user doesn't have
> the xdmp-user-roles privilege. I would prefer not to have to create an
> amped function to do this.
>
> My current hack looks like this:
>
> declare function s:user-has-role(
>$role as xs:string
> ) as xs:boolean {
>try {
>  xdmp:role($role) = xdmp:user-roles(xdmp:get-current-user())
>}
>catch ($e) {
>  fn:false()
>}
> };
>
> This works fine if the current user has the xdmp-user-roles privilege.
> However, if the user doesn't have this privilege then the function
> always returns false regardless of whether the user has the specified
> role name.
>
> Best,
>
> Tim Finney
>
>
>
> ___
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general

-- 
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com

This e-mail and any accompanying attachments are confidential. The information 
is intended solely for the use of the individual to whom it is addressed. Any 
review, disclosure, copying, distribution, or use of this e-mail communication 
by others is strictly prohibited. If you are not the intended recipient, please 
notify us immediately by returning this message to the sender and delete all 
copies. Thank you for your cooperation.

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Parallel Task Servers

2011-08-22 Thread Wayne Feick
The limit is 256, but it has more to do with the number of CPU cores you 
have on the host balanced with how CPU intensive your application is 
(both tasks and application server activity).


For example, if you're doing lots of xdmp:http-get() operations from the 
task queue, a larger number of threads might be appropriate for 
parallelism. Tasks that tend to wait on disk I/O might also benefit from 
a larger number of threads.


On the other hand, if your tasks are CPU bound instead of I/O bound you 
won't want more than the number of cores on the host.


You should also keep in mind that a large number of task queue threads 
could negatively impact your application server performance since 
they'll be competing with each other for CPU and I/O resources.


Wayne.


On 08/22/2011 02:22 PM, Khan, Kashif wrote:

Is there a limit to the number of threads? I have set it to 10.

Best Regards,

Kashif Khan

Sr. Solutions Architect

Houghton Mifflin Harcourt, Orlando, FL

Office: (407) 345-3420

Cell: (407) 949-4697

"The water you touch in the river is the last of that which has passed 
and the first of that which is coming"*--Leonardo da Vinci*



From: Danny Sokolsky <danny.sokol...@marklogic.com>
Reply-To: General MarkLogic Developer Discussion <general@developer.marklogic.com>
Date: Mon, 22 Aug 2011 16:54:45 -0400
To: General MarkLogic Developer Discussion <general@developer.marklogic.com>

Subject: Re: [MarkLogic Dev General] Parallel Task Servers

You would need to create a cluster.  Then direct your load at multiple 
hosts in that cluster.  This book:


http://docs.marklogic.com/4.2doc/docapp.xqy#display.xqy?fname=http://pubs/4.2doc/xml/cluster/title.xml

talks about some of that (as well as failover).

For example, if you had a cluster with host1, host2, and host3, then 
you can direct part of your load to host1, part to host2, and part to 
host3.


Now to step back a minute, why do you really need to do this?  The 
task server runs multi-threaded, so if you have extra horsepower on 
your host, you should be able to take advantage of that.


-Danny

*From:* general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of* Khan, 
Kashif

*Sent:* Monday, August 22, 2011 1:26 PM
*To:* General MarkLogic Developer Discussion
*Subject:* Re: [MarkLogic Dev General] Parallel Task Servers

Is there somewhere I can read on creating parallel Hosts and dividing 
the load?


Best Regards,

Kashif Khan

Sr. Solutions Architect

Houghton Mifflin Harcourt, Orlando, FL

Office: (407) 345-3420

Cell: (407) 949-4697

"The water you touch in the river is the last of that which has passed 
and the first of that which is coming"*--Leonardo da Vinci*


*From: *Danny Sokolsky <danny.sokol...@marklogic.com>
*Reply-To: *General MarkLogic Developer Discussion <general@developer.marklogic.com>
*Date: *Mon, 22 Aug 2011 16:19:57 -0400
*To: *General MarkLogic Developer Discussion <general@developer.marklogic.com>
*Subject: *Re: [MarkLogic Dev General] Parallel Task Servers

The way you do this is to parallelize the load across multiple hosts.  
Each host has 1 task server.


-Danny

*From:* general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of* Khan, 
Kashif

*Sent:* Monday, August 22, 2011 12:43 PM
*To:* MarkLogic Developer Discussion
*Subject:* [MarkLogic Dev General] Parallel Task Servers

Hello Everyone, I wanted to ask if it is possible to create multiple 
task servers to handle the load, so we can process tasks on 
multiple task servers at a time.


Best Regards,

Kashif Khan



--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com


___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Xinclude in XSLT

2011-07-09 Thread Wayne Feick
Seems like it'd be pretty easy to write an XInclude expansion rule as 
well if that's what you need.
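A rough sketch of the pre-expansion workaround discussed below (the stylesheet and document URIs are placeholders):

```xquery
xquery version "1.0-ml";
import module namespace xinc = "http://marklogic.com/xinclude"
  at "/MarkLogic/xinclude/xinclude.xqy";

(: Expand XIncludes first, then hand the expanded tree to the
   transform. Both URIs below are hypothetical. :)
let $doc := fn:doc("/docbook/page.xml")
return xdmp:xslt-invoke("/styles/docbook-to-xhtml.xsl",
  xinc:node-expand($doc))
```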


Wayne.


On 07/09/2011 12:03 AM, Geert Josten wrote:


Dear Paul,

No, I think it shouldn’t. The XSLT rec doesn’t specify how XIncludes 
should be handled. In fact, I expect that in Oxygen it is not the 
transform, but the XML reader before it, that does the actual 
XInclude expansion. So, by calling xinc:node-expand() before passing 
the document into xdmp:xslt-eval() you are doing it the same way as Oxygen.


Kind regards,

Geert

*From:* general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of* Williams, Paul

*Sent:* Friday, July 8, 2011 21:41
*To:* general@developer.marklogic.com
*Subject:* [MarkLogic Dev General] Xinclude in XSLT

I have web page content in the form of docbook xml files that are 
being transformed to xhtml via a stylesheet using xdmp:xslt-eval().  
The docbook content uses an xinclude to incorporate a common module 
that provides a sidebar element for the page.  Running this content 
through an xslt transformer in Oxygen produces the expected xhtml 
output without fail.  But in MarkLogic, the xinclude does not import 
the external module during the transform operation.


As a workaround, I’m now invoking xinc:node-expand() on the document 
before passing it to xdmp:xslt-eval().  Shouldn’t I be able to expect 
the transformer in Marklogic to handle xincludes for me?


-- Paul



--
Wayne Feick
Principal Engineer
MarkLogic Corporation
wayne.fe...@marklogic.com
Phone: +1 650 655 2378
www.marklogic.com


___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] More easter eggs: HMAC SHA and AWS

2011-05-10 Thread Wayne Feick
These are the functions which are planned for 5.0. As was pointed out 
elsewhere, since 5.0 is not yet released it's possible something could 
come up that precludes releasing them.


xdmp:hmac-md5()
xdmp:hmac-sha1()
xdmp:hmac-sha256()
xdmp:hmac-sha512()

xdmp:md5()
xdmp:sha1()
xdmp:sha256()
xdmp:sha512()
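Assuming these ship as planned, usage would presumably look like this (a sketch against an unreleased API; the two-argument form returning a hex digest is an assumption):

```xquery
xquery version "1.0-ml";

(: Hedged sketch of the planned HMAC built-ins:
   (key, message) in, digest string out. :)
xdmp:hmac-sha256("my-secret-key", "message to sign")
```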



On 05/09/2011 12:17 PM, Keith L. Breinholt wrote:


Can we get the SHA-2 functions as well?  (SHA-256 and SHA-512 at least.)

Thanks.

- Keith

*From:*general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of *Sam Neth

*Sent:* Monday, May 09, 2011 12:07 PM
*To:* General MarkLogic Developer Discussion
*Subject:* Re: [MarkLogic Dev General] More easter eggs: HMAC SHA and AWS

I stand corrected.

I've just been reminded that some new crypto builtins are coming in 
5.0, including xdmp:hmac-sha1.



Sam Neth
Lead Engineer
MarkLogic Corporation

On May 8, 2011, at 9:42 AM, Sam Neth wrote:



AWS computes signatures based on the encoded data, which I believe to 
be the result of signature checking being in the wrong layer of the 
service architecture.  As a result they are sensitive to differences 
in hex digit capitalization, and to the precise set of reserved 
characters, which is not universally standardized.  Because 
xdmp:url-encode does not conform to their specifications, and may be 
subject to change, a new version was added to ensure compatibility.


I'm sorry to say there's no xdmp:hmac-sha1.  One of the benefits of 
adding undocumented functions to support specific features is a 
reduction in the cost of testing, documentation, and 
completeness. There has been discussion in the past of including a 
full suite of public crypto functions, wrapping more of what's already 
present in the OpenSSL library, but I don't believe there's a plan to 
do that at this time.  Add your voice to the RFE process if you crave 
this functionality.


Sam Neth
Lead Engineer
MarkLogic Corporation

On May 8, 2011, at 8:56 AM, Geert Josten wrote:



Hi,

Browsing through the MarkLogic built-in Modules in search for some 
modules I knew must be there, my eye was caught by the EC2 and AWS 
modules. Scanning through them I noticed the use of the following 
undocumented functions:


- xdmp:aws-url-encode
- xdmp:hmac-sha256

Can anyone explain in which way xdmp:aws-url-encode differs from 
xdmp:url-encode? And since there is an hmac-sha256 function, is there 
an hmac-sha1 function as well?


Kind regards,
Geert
___
General mailing list
General@developer.marklogic.com <mailto:General@developer.marklogic.com>
http://developer.marklogic.com/mailman/listinfo/general

___
General mailing list
General@developer.marklogic.com <mailto:General@developer.marklogic.com>
http://developer.marklogic.com/mailman/listinfo/general



NOTICE: This email message is for the sole use of the intended 
recipient(s) and may contain confidential and privileged information. 
Any unauthorized review, use, disclosure or distribution is 
prohibited. If you are not the intended recipient, please contact the 
sender by reply email and destroy all copies of the original message.





--
Wayne Feick
Principal Engineer
MarkLogic Corporation
Phone +1 650 655 2378
Cell +1 408 981 4576
www.marklogic.com

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Node replace without changing its content

2011-04-09 Thread Wayne Feick
We added XSL support in release 4.2, which allows for an alternative you 
might consider.


Geert's approach is often a good one (you'll need to use a recursive 
function if you have deeper nesting of elements) but as the complexity 
of your transform grows XSL can be simpler. For example, if you changed 
match="name" below to match="something/name" then it would only match 
"name" elements within "something" elements. You could also use 
match="name[@type = 'first']" to only match name elements that have an 
attribute indicating a first name.


   xquery version "1.0-ml";
   declare namespace html = "http://www.w3.org/1999/xhtml";

   let $stylesheet :=
 <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
 version="2.0">
   <xsl:template match="name">
 <firstname>
   <xsl:apply-templates select="node()"/>
 </firstname>
   </xsl:template>
   <xsl:template match="@*|node()">
 <xsl:copy>
   <xsl:apply-templates select="@*|node()"/>
 </xsl:copy>
   </xsl:template>
 </xsl:stylesheet>

   return xdmp:xslt-eval($stylesheet,
 document {
   <doc><name>clark</name></doc>
 })

One gotcha to remember is that xdmp:xslt-eval() returns a document node, 
so you might want to extract just the top level element from the result: 
xdmp:xslt-eval(...)/element()


Wayne.


On 04/09/2011 01:00 AM, Geert Josten wrote:


Hi Ambika,

There is no element rename function, so you will have to replace the 
entire element. You can build the new one based on the old one though. 
Something like this:


for $old-element in doc()//name

let $new-element := element {'firstname'} { $old-element/node() }

return

   xdmp:node-replace($old-element, $new-element)

Add $old-element/@* next to $old-element/node() if you would like to 
preserve the attributes of the element as well.


Kind regards,

Geert

*From:* general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of* ambika arumugam

*Sent:* Saturday, April 9, 2011 8:53
*To:* General MarkLogic Developer Discussion
*Subject:* [MarkLogic Dev General] Node replace without changing its 
content


Hi all,
I would like to replace the name of a node but not its content. Can 
anyone explain how to do it in MarkLogic? xdmp:node-replace replaces 
the entire node.

I have data like
<name>clark</name>
which I would like to change to <firstname>clark</firstname>.

Thanks in advance,
Ambika



--
Wayne Feick
Principal Engineer
MarkLogic Corporation
Phone +1 650 655 2378
Cell +1 408 981 4576
www.marklogic.com

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Obtaining unique IDs for assigning to records

2011-03-11 Thread Wayne Feick
The problem with assigning consecutive IDs is that it limits scalability 
by serializing all inserts on some document that contains the next/last ID.


There is a common design pattern for this sort of operation that begins 
with a unique ID for each inserted document, generated in one of the 
following ways.


1. Use random IDs generated by xdmp:random().
2. Use a hash of something within the document: xdmp:hash64($s as xs:string)

The ID is then used as part of the document's URI.

You can test for the existence of a document before creating it, and in 
doing so acquire a read lock on the URI. That lock will be upgraded to a 
write lock when you insert the document, and in doing so serialize the 
transaction with any other inserts that try to create a document with 
the same ID.


If the document already exists when you check for its existence, you 
need to choose a different ID. A recursive ID selection function is 
useful for this.


   declare function choose-uri() as xs:string
   {
  let $uri := fn:concat("/document-", xdmp:random(), ".xml")
  return if (fn:exists(fn:doc($uri))) then choose-uri() else $uri
   };

It's possible for two transactions to simultaneously attempt to insert 
documents with the same ID. They'll both take the read lock, and then 
both attempt to upgrade to a write lock. One will succeed, the other 
will be restarted and (in the case of random) choose a different ID.


If the ID cannot determine the document URI but is part of the inserted 
document, you can use xdmp:lock-for-update() to lock a URI derived from 
the ID even if you don't actually insert a document at that URI. That 
operation paired with a search for a document containing the chosen ID 
ensures no two documents will be inserted with the same ID.
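A minimal sketch of that pattern, assuming the ID lives in an id element and the lock URI is a synthetic name derived from it (all names below are illustrative):

```xquery
xquery version "1.0-ml";

(: Hedged sketch: lock on a URI derived from the ID, then check that no
   existing document already carries that ID before inserting. :)
declare variable $id as xs:string := xs:string(xdmp:random());

let $lock-uri := fn:concat("/locks/", $id)
return (
  xdmp:lock-for-update($lock-uri),
  if (fn:exists(cts:search(fn:collection(),
        cts:element-value-query(xs:QName("id"), $id))))
  then fn:error(xs:QName("local:ID-EXISTS"), "ID already in use")
  else xdmp:document-insert(fn:concat("/docs/", $id, ".xml"),
    <record><id>{$id}</id></record>)
)
```

The lock serializes concurrent transactions that derive the same lock URI, even though no document is ever inserted at that URI.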


Wayne.


On 03/11/2011 09:00 PM, Tim Meagher wrote:


Hi Folks,

I would like to auto-generate unique identifiers for XML documents, 
but I need to prevent them from being assigned to multiple documents.  
For example, when inputting information into a form, the unique ID 
must be filled in prior to saving the form, but such that other forms 
being created in sequence will obtain the next available unique ID.


I’ll probably create a list of unique IDs to work with (as they will 
map to other unique record identifiers) from which I can set the next 
available unique ID as record IDs are added, but again I would like to 
ensure no conflicts.  For example:


Record ID    Unique ID
=========    =========
RecA1        UID01
RecB2        UID02
RecA2        UID03
RecC1        UID04

and so on such that adding the next Record ID will automatically have 
a UID of UID05 assigned to it.  I recognize that it is simple enough 
to do this unless an attempt to add 2 new records occurs simultaneously.


Thank you!

Tim Meagher



--
Wayne Feick
Principal Engineer
MarkLogic Corporation
Phone +1 650 655 2378
Cell +1 408 981 4576
www.marklogic.com

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Marklogic forest state

2011-03-11 Thread Wayne Feick
It depends on whether you request that replica forests be included in 
the backup.


From the Admin UI, there is a check box you can select to include 
replica forests.


From the XQuery API, xdmp:database-backup-validate() has a parameter 
that specifies whether to include replica forests.


The backup should not be failing due to forests being in sync 
replicating state. Is your configured master open as master, or are you 
operating in a failed over state where one of the configured replica is 
open and acting as master? In this case, you may get an error if you try 
to do a backup without including replicas.


Wayne.


On 03/11/2011 05:42 PM, knk n wrote:

Hello,

Does MarkLogic back up forests that are in the sync replicating 
state? We have replica forests that are always in sync 
replicating mode, and because of this the backup fails.




--
Wayne Feick
Principal Engineer
MarkLogic Corporation
Phone +1 650 655 2378
Cell +1 408 981 4576
www.marklogic.com

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Royal Road to CPF using XSLT?

2011-03-10 Thread Wayne Feick
You might also consider a precommit trigger.

David Sewell  wrote:


To paraphrase Euclid, I'm guessing there's no royal road to auto-applying XSLT
to a document at load time into the database?

Our use case is simply that for a given directory in one of our databases, we
want to run all XML files through a particular XSLT stylesheet. We have managed
to coexist with MarkLogic for a number of years without needing to use the
Content Processing Framework Guide. Section 6.4.6 of that document is precisely
"Using XSLT Stylesheets Instead of Action Modules". So... we bite the bullet and
read up, or is there a cheat-sheet method?

David s.

--
David Sewell, Editorial and Technical Manager
ROTUNDA, The University of Virginia Press
PO Box 400314, Charlottesville, VA 22904-4314 USA
Email: dsew...@virginia.edu   Tel: +1 434 924 9973
Web: http://rotunda.upress.virginia.edu/
___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Regarding md5

2010-12-29 Thread Wayne Feick
It looks like the difference has to do with white space. I loaded your 
two documents into my database and ran the following.


   xquery version "1.0-ml";

   let $a := fn:doc("/file1.xml")
   let $b := fn:doc("/file2.xml")
   return (
  xdmp:describe(fn:data($a)),
  xdmp:describe(fn:data($b))
   )

The result was

   xs:untypedAtomic("
  a
  b
")
   xs:untypedAtomic("
  a

   b

")

Wayne.


On 12/29/2010 02:23 AM, Manoj wrote:

Hi Wayne,
I have attached the sample files I created and the XQuery code 
for which the md5 comparison returned false.


Thanks & Regards
Manoj

On Wed, Dec 29, 2010 at 2:57 AM, Jason Hunter <jhun...@marklogic.com> wrote:


It returns true for me.  You're 100% sure you're getting false?

-jh-

On Dec 28, 2010, at 11:47 PM, Manoj wrote:


Hi Wayne,
  Thanks for the suggestion. We will try using the xdmp:quote
function before calculating md5.

Another thing we noticed but forgot to mention earlier is given
below

In file1.xml we stored the following content
ab
and in file2.xml we stored ab

when we executed the following code
let $a := fn:doc('/file1.xml')
let $b := fn:doc('/file2.xml')
let $x := xdmp:md5($a)
let $y := xdmp:md5($b)

return (deep-equal($x,$y))

This returned false. This is more or less similar to the first part
of the earlier code, but the current one returns a document node.
Ideally, in this case the md5 should also be the same for both
$x and $y, right? Can you throw some light on this?


Thanks & Regards
Manoj


On Wed, Dec 29, 2010 at 2:16 AM, Wayne Feick
<wayne.fe...@marklogic.com> wrote:

Hi Manoj,

The md5 function takes a string, so you're only acting on the character 
data. Try using xdmp:quote() to turn the XML tree into text and then calling 
md5() on the quoted text.

Wayne

Manoj <manojjayara...@gmail.com> wrote:


Hi all,
   I was trying my hand on the xdmp:md5 api as we intended to
use it for our project. When we trying to understand how this
md5 api works and were trying some scenarios. One such
scenario which we tested is given below
let $xml1 :=ab
let $xml2 :=ab
let $xml11 :='ab'
let $xml21 :='ab'
let $a := xdmp:md5($xml1)
let $b := xdmp:md5($xml2)
let $a1 := xdmp:md5($xml11)
let $b1 := xdmp:md5($xml21)
return (deep-equal($a,$b),deep-equal($a1,$b1))
In the above code the md5 values for $a and $b are identical
in spite of $xml2 having an additional node, whereas $a1 and $b1
returned different md5 values. It would be great if someone
could throw some light on how the md5 value is generated in
MarkLogic and what it considers while generating
the md5 value.
Regards,
Manoj

___
General mailing list
General@developer.marklogic.com
<mailto:General@developer.marklogic.com>
http://developer.marklogic.com/mailman/listinfo/general


___
General mailing list
General@developer.marklogic.com
<mailto:General@developer.marklogic.com>
http://developer.marklogic.com/mailman/listinfo/general



___
General mailing list
    General@developer.marklogic.com
<mailto:General@developer.marklogic.com>
http://developer.marklogic.com/mailman/listinfo/general




--
Wayne Feick
Principal Engineer
MarkLogic Corporation
Phone +1 650 655 2378
Cell +1 408 981 4576
www.marklogic.com

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Regarding md5

2010-12-28 Thread Wayne Feick
Hi Manoj,

The md5 function takes a string, so you're only acting on the character data. 
Try using xdmp:quote() to turn the XML tree into text and then calling md5() on 
the quoted text.
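A minimal sketch of that suggestion:

```xquery
xquery version "1.0-ml";

(: Serialize the node first so the hash covers the markup,
   not just the character data. :)
let $node := <doc><a>1</a><b>2</b></doc>
return xdmp:md5(xdmp:quote($node))
```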

Wayne

Manoj  wrote:



Hi all,
   I was trying my hand at the xdmp:md5 API, as we intend to use it for our 
project. While trying to understand how this md5 API works, we tried 
some scenarios. One such scenario which we tested is given below

let $xml1 :=ab
let $xml2 :=ab
let $xml11 :='ab'
let $xml21 :='ab'
let $a := xdmp:md5($xml1)
let $b := xdmp:md5($xml2)
let $a1 := xdmp:md5($xml11)
let $b1 := xdmp:md5($xml21)
return (deep-equal($a,$b),deep-equal($a1,$b1))
In the above code the md5 values for $a and $b are identical in spite of $xml2 
having an additional node, whereas $a1 and $b1 returned different md5 values. 
It would be great if someone could throw some light on how the md5 value is 
generated in MarkLogic and what it considers while generating the md5 value.

Regards,
Manoj
___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Native support for hmacsha1

2010-11-08 Thread Wayne Feick

Hi Darin,

These functions are not visible to XQuery.

Exposing OpenSSL functionality such as this to XQuery is something we 
have thought of doing, but we have not yet taken the time to do all the 
necessary design, testing, and documentation work that would be needed.


I've added a note to the internal RFE request mentioning that you are 
interested. If you have a strong need for this, please raise the issue 
with product management.


Wayne.


On 11/03/2010 10:11 AM, McBeath, Darin W (ELS-STL) wrote:


I was curious as to whether there are plans to provide a function for 
calculating an HMAC-SHA1. Perhaps one is already there and is 
simply not documented (or maybe I overlooked it in the documentation).


Anyway, something like this would be useful if one wanted to sign a 
request to publish a message to something like Amazon SNS, or use one 
of the other Amazon services,  etc.  I also noticed a post from Norm 
(when he posted an XQuery-OAuth solution) that he had to use a web 
service to compute the HMAC-SHA1.  But I tend to think native support 
would be the best solution.   I would be interested in your thoughts.


Thanks.

Darin.



--
Wayne Feick
Lead Engineer
MarkLogic Corporation
Phone +1 650 655 2378
Cell +1 408 981 4576
www.marklogic.com

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] question about getting document from another database

2010-08-19 Thread Wayne Feick

 Hi Helen,

Take a look at the docs for xdmp:eval() and xdmp:invoke(), specifically 
the database option. Here's a quick example.


   xdmp:eval(
  'xquery version "1.0-ml";
   declare variable $URI as xs:string external;
   fn:doc($URI)
  ', (xs:QName("URI"), "/foo.xml"),
  <options xmlns="xdmp:eval">
    <database>{xdmp:database("OtherDatabase")}</database>
  </options>
   )

Note that a cross database eval/invoke will always run as a separate 
transaction (either a second update, or at a different commit timestamp 
if it's a query).


I've found the xdmp:function capability very useful in this context to 
call a function in the context of a different database.


   declare function local:foo($arg1, $arg2) { ... };

   xdmp:eval(
  'declare variable $FN as xdmp:function external;
   declare variable $ARG1 external;
   declare variable $ARG2 external;
   xdmp:apply($FN, $ARG1, $ARG2)
  ', (xs:QName("FN"), xdmp:function(xs:QName("local:foo")),
  xs:QName("ARG1"), $x,
  xs:QName("ARG2"), $y),
  <options xmlns="xdmp:eval">
    <database>{xdmp:database("OtherDatabase")}</database>
  </options>
   )

Both of those code snippets are from memory, so they may not be quite right.

Wayne.


On 08/19/2010 10:41 AM, helen chen wrote:

I'm inside marklogic and in database A, if I want to get some document from another 
database B and do something,  like if I want to get the list of document with uri  
pattern "/a/b/*xml" from database B,   how can I do it?

Thanks, Helen
_______
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


--
Wayne Feick
Lead Engineer
MarkLogic Corporation
Phone +1 650 655 2378
Cell +1 408 981 4576
www.marklogic.com

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Range index for multiple date formats

2010-07-06 Thread Wayne Feick
It's normal to process documents (e.g. in your load function if any, or 
using a CPF pipeline) to add an attribute that contains a normalized 
dateTime value that you can index.
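A sketch of such a normalization step (element and attribute names are illustrative; unparseable values such as "Unknown" simply get no normalized attribute):

```xquery
xquery version "1.0-ml";

(: Hedged sketch: map assorted date formats onto one xs:dateTime
   attribute that a single range index can cover. :)
declare function local:normalize($d as xs:string) as xs:dateTime?
{
  if ($d castable as xs:dateTime) then xs:dateTime($d)
  else if ($d castable as xs:date)
  then xs:dateTime(fn:concat($d, "T00:00:00"))
  else if ($d castable as xs:gYearMonth)
  then xs:dateTime(fn:concat($d, "-01T00:00:00"))
  else if ($d castable as xs:gYear)
  then xs:dateTime(fn:concat($d, "-01-01T00:00:00"))
  else ()
};

let $raw := <event date="2010-07"/>
return element event {
  $raw/@*,
  let $norm := local:normalize($raw/@date)
  return if (fn:exists($norm)) then attribute normalized { $norm } else ()
}
```

An attribute range index of type dateTime on the normalized attribute then supports range searches without cast errors.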


Wayne.


On 07/06/2010 10:25 AM, Bob Runstein wrote:
Our application must accept date information in multiple formats.  We 
may receive xs:dateTime, xs:date, xs:gYearMonth and xs:gYear and even 
values such as "Unknown".


We want to be able to create range indexes on the date fields.  
Naturally a dateTime index throws a cast exception when there is no 
time component and we do not want to artificially modify the incoming 
data.  A string index would handle the formats correctly.  Is there a 
performance penalty for using a string index to do range searches on 
dates formatted as above?  What is the order of magnitude of the 
penalty if there is one.  Thanks.


Bob


--
Wayne Feick
Lead Engineer
MarkLogic Corporation
Phone +1 650 655 2378
Cell +1 408 981 4576
www.marklogic.com

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] About transaction log and recovery

2010-06-22 Thread Wayne Feick
I'm not sure if there are any docs available outside the company.

Generally speaking, it's what you'd expect in a transactional system; 
recovery records are written to the journal as a transaction progresses, 
and important records (prepare, commit) cause journal data to be synced 
to disk and wait for the sync to complete before proceeding.

Wayne.


On 06/22/2010 11:15 PM, Ling Ling wrote:
> Hello Wayne,
>
> Thank you for your reply. Yeah, I found it. They are together with data,
> on our striped volume. I am interested in how they are managed, like
> when the log is written and when the I/O happens. Could you point out some
> materials that I can read? Thanks!
>
> Thanks,
> Ling
>
> Wayne Feick wrote:
>
>> Hi Ling,
>>
>> Yes, we maintain a transaction journal for each forest that allows us to
>> recover committed transactions in the event of a failure. In each
>> forest's Journals directory you'll files named Journal# where # is an
>> integer. They are not human readable.
>>
>> Wayne.
>>
>>
>> On 06/22/2010 10:24 PM, Ling Ling wrote:
>>  
>>> Hello,
>>>
>>> When I look at the logs, I only found OS logs and ML server file logs.
>>> These logs tell what the ML server did. In a traditional database, there are
>>> redo/undo logs for database writes. Does the ML server write such logs?
>>> Where are they, and when does the server write them and flush them to
>>> disk? Is it related to the version control in the ML server? We are doing
>>> some updates and inserts with the ML server and want to be sure what happens
>>> here. Thank you very much.
>>>
>>> Thanks,
>>> Ling
>>> ___
>>> General mailing list
>>> General@developer.marklogic.com
>>> http://developer.marklogic.com/mailman/listinfo/general
>>>
>>>
>>  
> ___
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general
>

-- 
Wayne Feick
Lead Engineer
MarkLogic Corporation
Phone +1 650 655 2378
Cell +1 408 981 4576
www.marklogic.com

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] About transaction log and recovery

2010-06-22 Thread Wayne Feick
Hi Ling,

Yes, we maintain a transaction journal for each forest that allows us to 
recover committed transactions in the event of a failure. In each 
forest's Journals directory you'll find files named Journal#, where # is an 
integer. They are not human readable.

Wayne.


On 06/22/2010 10:24 PM, Ling Ling wrote:
> Hello,
>
> When I look at the logs, I only found OS logs and ML server file logs.
> These logs tell what the ML server did. In a traditional database, there are
> redo/undo logs for database writes. Does the ML server write such logs?
> Where are they, and when does the server write them and flush them to
> disk? Is it related to the version control in the ML server? We are doing
> some updates and inserts with the ML server and want to be sure what happens
> here. Thank you very much.
>
> Thanks,
> Ling
> ___
> General mailing list
> General@developer.marklogic.com
> http://developer.marklogic.com/mailman/listinfo/general
>

-- 
Wayne Feick
Lead Engineer
MarkLogic Corporation
Phone +1 650 655 2378
Cell +1 408 981 4576
www.marklogic.com

___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


Re: [MarkLogic Dev General] MarkLogic "rsync" command - RE: Mac Webdav Client setting xqyfilesasbinary

2010-06-15 Thread Wayne Feick
No, Flexible Replication is about syncing documents from one database to 
another, so it's not what you want.


What you're describing sounds very much like an incremental backup, 
which I've typically seen solved by either timestamp comparison to the 
time of the last backup or by setting a platform/filesystem specific 
"archived" flag on a file that is subsequently cleared if the file is 
modified. Windows has an "archived" flag, but Linux/Unix don't.


I don't know if that helps you or not.

Wayne.


On 06/15/2010 04:09 PM, Lee, David wrote:


What I'm looking for is to take a directory of files on a local 
filesystem and "optimally" push them to a MarkLogic server matching 
the directory structure.


By "optimally" I mean not pushing thousands of files if they haven't 
changed since the last time.


Useful for me in 2 cases

1) Making a change to 1 or 2 module files out of a hundred and scripting a 
"push" process that takes a second instead of a minute.


2) Updating, say, 100 files out of 1 million from the filesystem to ML when 
I don't know which 100 without comparing to what's on the server.


Does "Flexible Replication" do any of this? From the title I'm 
guessing it's ML-to-ML, not filesystem-to-ML.


But that's just guessing :)

Re: [MarkLogic Dev General] MarkLogic "rsync" command - RE: Mac Webdav Client setting xqyfilesasbinary

2010-06-15 Thread Wayne Feick
I gave a talk at the User Conference that covered Flexible Replication 
in our upcoming 4.2 release. This may do a lot of what you want...


Wayne.


On 06/15/2010 09:05 AM, Lee, David wrote:


I've been thinking of going ahead and prototyping this. That is, a 
MarkLogic "rsync"-type command.


From my experimentation, the way I think this would work best is as 
described below (in the included email thread).


That is, to set a property on all files which includes the MD5 and 
length (file length in bytes prior to uploading to ML).
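A sketch of what recording such a property could look like in XQuery, assuming the MD5 and byte length were computed on the client before upload (the property element and namespace here are made up for illustration):

```xquery
xquery version "1.0-ml";

declare namespace sync = "http://example.com/sync"; (: hypothetical namespace :)

declare variable $uri as xs:string external;    (: document URI in ML :)
declare variable $md5 as xs:string external;    (: MD5 of the local file :)
declare variable $length as xs:unsignedLong external; (: bytes on local disk :)

(: record the client-side checksum and length as a document property,
   so a later sync pass can compare without re-reading the content :)
xdmp:document-set-property($uri,
  <sync:checksum md5="{$md5}" length="{$length}"/>)
```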


Then, using client-side logic, compare the new list of files to what's on 
ML and generate a set of update/insert/delete commands.


I've already done this for a special case and it worked well, so I'm 
thinking of cleaning up the code and making it general purpose.


Although my purpose is updating ML ... there's no reason the 
reverse couldn't also be done: updating a local filesystem with 
minimal operations.


The questions I have are:

1) Would anyone be interested in this?

2) How 'offensive' is storing a property on documents?  Would this be 
a 'deal killer'?  Should it be in a private namespace?


3) How efficient is storing properties? Does having to 
read, store, and update properties negate any time savings from avoiding 
the load?


 That is, I suspect for documents of some sizes it is actually faster just to 
push them unconditionally rather than look at properties and 
calculate MD5 sums to decide whether to push ...


4) I could avoid properties entirely by calculating the MD5 and length 
on the fly in ML ... however I believe both require serializing the 
document in memory in ML.   xdmp:md5() takes a string, not a 
document.  And there is no actual size method; that, too, would require 
serializing the document.


The only way I can think of is to use xdmp:quote(doc(...)) then 
calculate the length and md5 on the server.   My gut feeling is that 
doing this is a very heavyweight operation on large files and would 
be less efficient than just unconditionally pushing the document 
(except maybe on very very slow networks).


Also I'm not sure (and I am highly suspicious it's NOT true) that an 
MD5 calculated on a file on local disk won't match xdmp:md5( 
xdmp:quote(doc(...))) for the same file due to serialization 
differences.   Same with length, thus making this strategy pointless.
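For reference, the server-side variant being questioned here would look roughly like the sketch below; as the message notes, the hash is over MarkLogic's serialization of the document, which may well differ byte-for-byte from the file on local disk:

```xquery
(: hash a document as the server serializes it; this re-serializes the
   whole document in memory, so it is heavyweight for large files :)
declare function local:doc-md5($uri as xs:string) as xs:string
{
  xdmp:md5(xdmp:quote(fn:doc($uri)))
};
```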


-David

*From:* general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of *Lee, David

*Sent:* Friday, June 11, 2010 10:00 AM
*To:* General Mark Logic Developer Discussion
*Subject:* Re: [MarkLogic Dev General] Mac Webdav Client setting 
xqyfilesasbinary


I would LOVE help with this project.   (And yes I just checked in an 
update a half hour ago ... hate to point people at old code :)


I've been thinking of exactly what you're saying.  The only thing 
stopping me besides time ... is I haven't figured out how to


make sure the clocks are in sync and what the failure cases are if 
they are not.


What I've done in another project is to use an MD5 checksum.   There 
is an undocumented (it's experimental) flag to put which adds a property 
with an MD5 checksum.   xmlsh has an MD5 sum command 
(http://www.xmlsh.org/CommandXmd5sum).


I generate a list of all documents with the MD5 sum, compare it against 
the local disk, then update only changed files, propagating deletes, 
inserts, and updates.   It worked great for one project ... but I have 
not generalized this code yet ...


I'm reluctant to blindly add properties to 'other people's files' so I 
haven't made this into a general utility yet.


Discussion  greatly welcome ! (and help too ... )

-David



David A. Lee

Senior Principal Software Engineer

Epocrates, Inc.

d...@epocrates.com 

812-482-5224

*From:* general-boun...@developer.marklogic.com 
[mailto:general-boun...@developer.marklogic.com] *On Behalf Of *Mike 
Brevoort

*Sent:* Friday, June 11, 2010 9:43 AM
*To:* General Mark Logic Developer Discussion
*Subject:* Re: [MarkLogic Dev General] Mac Webdav Client setting xqy 
filesasbinary


Thanks David, That looks really cool.

I was just looking at the code (which, I've seen, you are actively 
working on: checkins in the last several minutes :) ) and it seems like 
it wouldn't be too hard to create a sync option for rsync-like 
behavior (simpler, obviously). If given a source (filesystem) and 
destination (marklogic DB directory) and depth (how far to recurse), 
we should be able to grab a list of all of the files on the server, 
their content-length and last updated dateTime. Then we could compare 
on the source filesystem for new/deleted and by size and date updated 
to decide which files to get and put.


What do you think of that approach? I or someone on my team might be 
willing to take a crack at this.


Also, what's required for others to run xmlsh on windows?

Thanks!

Mike

On Fri, Jun 11, 2010 at 6:19 AM, Lee, David 

Re: [MarkLogic Dev General] Regarding update in dls

2010-05-04 Thread Wayne Feick
Hi Judie,

When we do XInclude expansion, we add base URI attributes that should help you 
here.

Assuming your application hasn't managed to strip the information out (e.g. as a 
side effect of a round trip through a browser), you should be able to create a 
recursive descent parser or an XSLT transform that undoes the original XInclude 
expansion.

Take a peek at the docs for fn:base-uri().
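A rough sketch of such a transform in XQuery, using fn:base-uri() to detect elements that came from a different sub-document and collapsing them back to xi:include elements (the comparison logic is simplified and untested against real expansion output):

```xquery
declare namespace xi = "http://www.w3.org/2001/XInclude";

declare function local:unexpand($node as node()) as node()
{
  typeswitch ($node)
    case element() return
      (: a child whose base URI differs from its parent's was XIncluded :)
      if (fn:exists($node/..) and fn:base-uri($node) ne fn:base-uri($node/..))
      then <xi:include href="{fn:base-uri($node)}"/>
      else element { fn:node-name($node) } {
        $node/@*,
        for $child in $node/node() return local:unexpand($child)
      }
    default return $node
};
```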

Wayne


Sent from my Android phone using TouchDown (www.nitrodesk.com)

-Original Message-
From: judie pearline [jj_ju...@yahoo.co.in]
Received: 5/4/10 7:16 AM
To: Mark Logic [gene...@developer.marklogic.com]
Subject: [MarkLogic Dev General] Regarding update in dls

Hi all,

I am developing a sample application with Document Library Services. I 
referred to the dev guide and I need some clarification from you.
Consider a document which contains an xi:include, i.e. a link to a sub-document. 
When the document is shown to the user for update, the full document will be 
shown using node-expand.  When the user updates the document (main document and 
the child document), how will the update operation be stored?

Note: Document update will replace the entire content (It replaces the 
xi:include).

Any suggestion will be helpful.

Regards,
Judie


___
General mailing list
General@developer.marklogic.com
http://developer.marklogic.com/mailman/listinfo/general


RE: [MarkLogic Dev General] How to obtain the version of theContent Processing Framework

2010-04-19 Thread Wayne Feick
I'll chime in with an upgrade subtlety...

When CPF is installed on a database, pipeline configurations (i.e. XML
documents) are copied from the file system to the triggers database. The
pipelines specify which .xqy files are to be invoked, and for all the
standard Mark Logic pipelines those .xqy files are in the file system
and part of the released software (i.e. upgraded with the release).

An upgrade does not, however, recopy the pipeline configurations from
the file system to your triggers database(s). Doing so is not typically
necessary, but when it is you must manually reinstall CPF to get the
latest versions. I believe the release notes typically indicate if a CPF
reinstall is needed (e.g. to take advantage of new features).

Wayne.


On Mon, 2010-04-19 at 12:03 -0700, Tim Meagher wrote: 

> Thank you!
> 
> Tim
> 
> -Original Message-
> From: general-boun...@developer.marklogic.com
> [mailto:general-boun...@developer.marklogic.com] On Behalf Of Geert Josten
> Sent: Monday, April 19, 2010 2:43 PM
> To: General Mark Logic Developer Discussion
> Subject: RE: [MarkLogic Dev General] How to obtain the version of theContent
> Processing Framework
> 
> Hi Tim,
> 
> CPF is not distributed separately, so I don't think it has an individual
> version number. You could however compare the MarkLogic Server version
> numbers. You can use xdmp:version() within xquery, or just look at the
> header in the Admin interface..
> 
> Kind regards,
> Geert
> 
> >
> 
> 
> drs. G.P.H. (Geert) Josten
> Consultant
> 
> 
> Daidalos BV
> Hoekeindsehof 1-4
> 2665 JZ Bleiswijk
> 
> T +31 (0)10 850 1200
> F +31 (0)10 850 1199
> 
> mailto:geert.jos...@daidalos.nl
> http://www.daidalos.nl/
> 
> KvK 27164984
> 
> Please consider the environment before printing this mail.
> The information sent in or with this e-mail message originates from
> Daidalos BV and is intended solely for the addressee. If you have
> received this message unintentionally, we request that you delete it.
> No rights can be derived from this message.
> 
> > From: general-boun...@developer.marklogic.com
> > [mailto:general-boun...@developer.marklogic.com] On Behalf Of
> > Tim Meagher
> > Sent: maandag 19 april 2010 19:39
> > To: 'General Mark Logic Developer Discussion'
> > Subject: [MarkLogic Dev General] How to obtain the version of
> > the Content Processing Framework
> >
> > Can someone tell me how to determine what version of the
> > Content Processing Framework I have installed on our
> > MarkLogic servers?
> >
> >
> >
> > Thank you!
> >
> >
> >
> > Tim Meagher
> >
> >
> >
> >
> ___
> General mailing list
> General@developer.marklogic.com
> http://xqzone.com/mailman/listinfo/general
> 
> 
> ___
> General mailing list
> General@developer.marklogic.com
> http://xqzone.com/mailman/listinfo/general


___
General mailing list
General@developer.marklogic.com
http://xqzone.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Passing authentication information in a URL

2010-04-16 Thread Wayne Feick
You could give your default user read permission to the images (and any
other content that you don't want to protect) and then you don't need to
log in at all.

Normal application level login could then be used for sensitive content.
See the xdmp:login() function.
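A minimal sketch of such an application-level login endpoint using xdmp:login(), assuming the credentials arrive as request fields (the field names and responses are illustrative):

```xquery
xquery version "1.0-ml";

let $user := xdmp:get-request-field("user")
let $pass := xdmp:get-request-field("password")
return
  (: xdmp:login returns true on success and switches the session's user :)
  if (xdmp:login($user, $pass))
  then <ok>logged in as {$user}</ok>
  else (
    xdmp:set-response-code(401, "Unauthorized"),
    <error>bad credentials</error>
  )
```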

Wayne.


On Fri, 2010-04-16 at 11:58 -0700, Lee, David wrote: 

> Now that I've solved the "how to make app users" problem ... (Thanks
> to Danny and Mike !)
> 
> I now have a new and more exciting problem !
> 
>  
> 
> I have a protected app say running on
> 
> http://host:8012/html/myapp
> 
>  
> 
> I've setup a read-only executable user and the app works great.
> 
> Part of the app generates an HTML document from XML.  There are image
> references in that document.  These images reside in the ML DB.
> 
> Using some tricks I found on the forums I use a link like this:
> 
>  
> 
> http://host:8012/html/common/getdbfile.xquery?uri=/SPL/20100114_fa3ed180-298a-4f9d-9d05-15182d7218bf/5c309ddf-b803-4ee1-98f6-81f4b21d9341-04.jpg
> 
>  
> 
> The xquery script "getdbfile.xquery"  sets the content type based on
> the URI suffix and ends with a simple fn:doc($uri)
> 
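Such a getdbfile.xquery script might look roughly like this sketch (the suffix-to-MIME mapping is illustrative, and a production version should whitelist what it is willing to serve):

```xquery
xquery version "1.0-ml";

let $uri := xdmp:get-request-field("uri")
let $type :=
  (: derive the content type from the URI suffix :)
  if (fn:ends-with($uri, ".jpg")) then "image/jpeg"
  else if (fn:ends-with($uri, ".png")) then "image/png"
  else "application/octet-stream"
return (
  xdmp:set-response-content-type($type),
  fn:doc($uri)
)
```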
>  
> 
> All worked great until I protected my app ... Now (of course) these
> calls are failing.
> 
>  
> 
>  
> 
> My first idea is I would need to make an unprotected App Server that
> only has this script and does some special checking to make sure only
> images are returned (which are not a security risk ... today).
> 
>  
> 
> But ... I'd rather it go through the normal authentication 
> 
>  
> 
> Q1) Can I pass the user/password in the URI? Is that supported
> by ML?  (like in FTP or XCC it would be
> http://user:passw...@host:port )
> 
> Even if so though, that's a security hole because then the user/pass is
> sent back as plain text in the HTML. ... yuck.
> 
>  
> 
> Q2) Is there a way to pass in my 'logged in user' authentication
> somehow as a session ID or other tag which doesn't expose the user/pass
> in the clear?
> 
> I suppose I could write a custom app to do this, and use a "secret
> encoding" , expose the app as an unprotected app and do magic tricks
> to validate the request ...
> 
> But maybe there is something in ML that does this more directly ... ?
> 
>  
> 
> Suggestions welcome ...
> 
>  
> 
>  
> 
>  
> 
>  
> 
>  
> 
>  
> 
> 
> 
> David A. Lee
> 
> Senior Principal Software Engineer
> 
> Epocrates, Inc.
> 
> d...@epocrates.com
> 
> 812-482-5224
> 
>  
> 
> 


___
General mailing list
General@developer.marklogic.com
http://xqzone.com/mailman/listinfo/general


RE: [MarkLogic Dev General] tail-recursion with xdmp:spawn

2010-04-07 Thread Wayne Feick
That can work too, but you should be sure to consider whether documents
[N, N+1000) at the time of the spawn are guaranteed to still be
[N, N+1000) when the task gets around to executing. If you're adding,
removing, or modifying documents, you'll get different document ordering
over time.

Wayne.



On Wed, 2010-04-07 at 07:53 -0700, Geert Josten wrote: 

> > Another approach is to have an initial query create a list of
> > documents to process and cut it into chunks (say 1000
> > documents each) that are each handed off to a spawned task.
> > With this, the configured number of threads in the task queue
> > will run in parallel giving you higher overall throughput.
> 
> 
> That is what I more or less did. However, I decided to pass in a start
> and end count, to have the function find out for itself which
> documents that should be.
> 
> And one other thing I did was divide the total range into a number of
> segments equal to the number of parallel threads I would want to run,
> which like this can be smaller than the total number of threads
> allowed on the task server, giving more room for normal traffic. So
> instead of spawning all tasks at once (cluttering the queue as well),
> each task creates the next one. A simple try catch makes sure the
> recursive spawning isn't terminated before the end..
> 
> Kind regards,
> Geert
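A sketch of that chained pattern: each task processes its slice of a positional range and spawns its successor, so only as many chains run in parallel as were initially started (the module name, processing stub, and range logic are assumptions, and Wayne's caveat about document ordering changing over time still applies):

```xquery
xquery version "1.0-ml";

declare variable $start as xs:integer external; (: first position in this chain :)
declare variable $end as xs:integer external;   (: last position in this chain :)
declare variable $size as xs:integer external;  (: documents per task :)

let $last := fn:min(($start + $size - 1, $end))
return (
  for $doc in (fn:doc())[$start to $last]
  return (: ... process $doc here ... :) (),

  (: chain to the next batch; catch errors so the chain isn't broken :)
  if ($start + $size le $end) then
    try {
      xdmp:spawn("/process-batch.xqy",
        (xs:QName("start"), $start + $size,
         xs:QName("end"), $end,
         xs:QName("size"), $size))
    } catch ($e) { xdmp:log($e) }
  else ()
)
```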


___
General mailing list
General@developer.marklogic.com
http://xqzone.com/mailman/listinfo/general


Re: [MarkLogic Dev General] tail-recursion with xdmp:spawn

2010-04-07 Thread Wayne Feick
That would work well and has the advantage that it can operate on an
arbitrarily large set of documents. The downside is that you'll only
have one processing thread working at a time.

Another approach is to have an initial query create a list of documents
to process and cut it into chunks (say 1000 documents each) that are
each handed off to a spawned task. With this, the configured number of
threads in the task queue will run in parallel giving you higher overall
throughput.

Wayne.
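That chunking approach might be sketched like this, assuming a URI lexicon is enabled and a hypothetical task module /process-chunk.xqy that accepts a newline-delimited list of URIs (the collection name and variable name are illustrative):

```xquery
xquery version "1.0-ml";

(: requires the URI lexicon; the collection name is made up :)
let $uris := cts:uris((), (), cts:collection-query("to-process"))
let $size := 1000
for $i in (1 to xs:integer(fn:ceiling(fn:count($uris) div $size)))
let $chunk := $uris[(($i - 1) * $size + 1) to ($i * $size)]
return
  (: each chunk becomes one task; the task queue runs them in parallel :)
  xdmp:spawn("/process-chunk.xqy",
    (xs:QName("uri-list"),
     fn:string-join(for $u in $chunk return fn:string($u), "&#10;")))
```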

On Wed, 2010-04-07 at 06:26 -0700, Mike Sokolov wrote: 

> Perhaps this won't be news to others on the list, but I was so excited 
> to finally stumble on a solution to a problem I have been struggling 
> with for years, that I just had to share.
> 
> The problem: how to process a large number of documents using xquery only?
> 
> This can't be done easily because if all the work is done in a single 
> transaction, it eventually runs out of time and space.  But xquery 
> modules don't provide an obvious mechanism for flow control across 
> multiple transactions.
> 
> In the past I've done this by writing an "outer loop" in Java, and more 
> recently I tried using CPF.  The problem with Java is that it's 
> cumbersome to set up and requires some configuration to link it to a 
> database.  I had some success  with CPF, but I found it to be somewhat 
> inflexible since it requires a database insert or update to trigger 
> processing.  It also requires a bit of configuration to get going.  
> Often I find I just want to run through a set of existing documents and 
> patch them up in some way or another, (usually to clean up some earlier 
> mistake!)
> 
> Finally I hit on the solution: I wrote a simple script that fetches a 
> batch of documents to be updated, processes the updates, and then, using 
> a new statement after ";" to separate multiple transactions, re-spawns 
> the same script if there is more work to be done after logging some 
> indication of progress.  Presto - an iterative processor.  This 
> technique is a little sensitive to running away into an infinite loop if 
> you're not careful about the termination condition, but it has many 
> advantages over the other methods.
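A sketch of that iterative pattern as a task module: the first statement patches a bounded batch, and the second statement, running in a fresh transaction after the ";", re-spawns the same module if work remains (the query and the patch are stand-ins for the real fixup):

```xquery
xquery version "1.0-ml";

(: statement 1: patch up to 100 documents that still need fixing :)
let $batch := (cts:search(fn:doc(),
  cts:element-word-query(xs:QName("status"), "old")))[1 to 100]
for $status in $batch//status[. eq "old"]
return xdmp:node-replace($status, <status>new</status>)
;

(: statement 2: a new transaction sees the updates above;
   respawn only while work remains, to avoid an infinite loop :)
if (xdmp:estimate(cts:search(fn:doc(),
  cts:element-word-query(xs:QName("status"), "old"))) gt 0)
then xdmp:spawn("/patch-batch.xqy")
else xdmp:log("patching complete")
```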
> 
> What do you think?
> 
> 
> Michael Sokolov
> Engineering Director
> www.ifactory.com
> @iFactoryBoston
> 
> PubFactory: the revolutionary e-publishing platform from iFactory
> 
> ___
> General mailing list
> General@developer.marklogic.com
> http://xqzone.com/mailman/listinfo/general


___
General mailing list
General@developer.marklogic.com
http://xqzone.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Image cropping and binary manipulation

2010-03-30 Thread Wayne Feick
There is no direct support for image manipulation within the server. The
usual approach is to set up a local web app that you can call from
XQuery to do the manipulation for you.

Wayne.


On Tue, 2010-03-30 at 15:10 -0700, Keith L. Breinholt wrote: 

> Is there a way to manipulate the content of a binary file from within
> the database?
> 
>  
> 
> We have a need to crop an image file.
> 
>  
> 
> Any help is appreciated.
> 
>  
> 
> Keith L. Breinholt
> 
> breinhol...@ldschurch.org
> 
> "Do what you can, with what you have, where you are." Theodore
> Roosevelt
> 
>  
> 
> 
> 
> 
> NOTICE: This email message is for the sole use of the intended
> recipient(s) and may contain confidential and privileged information.
> Any unauthorized review, use, disclosure or distribution is
> prohibited. If you are not the intended recipient, please contact the
> sender by reply email and destroy all copies of the original message.
> 
> 
> 


___
General mailing list
General@developer.marklogic.com
http://xqzone.com/mailman/listinfo/general


RE: [MarkLogic Dev General] Unique key construction

2010-03-29 Thread Wayne Feick
Hi Keith,

Yes, various hash algorithms, including SHA-1 and -2, are part of the
OpenSSL library we're using under the covers. We have had internal
discussions about exposing them at some point, but not seriously enough
to get the work scheduled into any particular release.

I'll note this as some interest on your part.

Wayne.


On Mon, 2010-03-29 at 12:04 -0700, Keith L. Breinholt wrote: 

> True, request IDs are not unique unto themselves.  That was the reason for an 
> MD5 hash of the identifier/URI, host name, timestamp and request ID ... to 
> reduce the odds of a collision.
> 
> But MD5 is not without collisions either. :(
> 
> On that subject: Mike, Walter or Wayne; is there any chance of getting a 
> SHA-1 or SHA-2 function in the near future?  The server must have one 
> available since the implementation of HTTPS/SSL, right?
> 
> Keith L. Breinholt
> breinhol...@ldschurch.org
> 
> 
> -Original Message-
> From: general-boun...@developer.marklogic.com 
> [mailto:general-boun...@developer.marklogic.com] On Behalf Of Geert Josten
> Sent: Monday, March 29, 2010 12:26 PM
> To: General Mark Logic Developer Discussion
> Subject: RE: [MarkLogic Dev General] Unique key construction
> 
> Hi Keith,
> 
> Note that the request ID is not guaranteed to be unique either (just like 
> xdmp:random). It has been discussed previously on this mailinglist here 
> http://markmail.org/thread/v326vhjvbuhitckp and this lengthy thread here 
> http://marklogic.markmail.org/thread/bxklh6dwoctcsjcy (further down from 
> where Michael Blakeley joins the thread). The second thread does contain some 
> code that keeps track of a counter, if I recall correctly..
> 
> Kind regards,
> Geert
> 
> > -Original Message-
> > From: general-boun...@developer.marklogic.com
> > [mailto:general-boun...@developer.marklogic.com] On Behalf Of
> > Keith L. Breinholt
> > Sent: maandag 29 maart 2010 19:24
> > To: General Mark Logic Developer Discussion
> > Subject: RE: [MarkLogic Dev General] Unique key construction
> >
> > Good point.
> >
> >
> >
> > So, to guarantee a UUID that is unique to the URI and the
> > transaction we then change the function to use the request
> > key in the hash like this:
> >
> >
> >
> > let $hash := xdmp:md5( concat( $uri,
> > xs:string(current-dateTime()), $namespace,
> > xs:string(xdmp:request()) ) )
> >
> >
> >
> > If we need a unique UUID for every call of the function we
> > then need to keep a count that increments with each call of
> > the function and tracks the clock ticks between calls.
> >
> >
> >
> > Keith L. Breinholt
> >
> > breinhol...@ldschurch.org 
> >
> >
> >
> > From: general-boun...@developer.marklogic.com
> > [mailto:general-boun...@developer.marklogic.com] On Behalf Of
> > Walter Underwood
> > Sent: Monday, March 29, 2010 10:55 AM
> > To: General Mark Logic Developer Discussion
> > Subject: Re: [MarkLogic Dev General] Unique key construction
> >
> >
> >
> > Using timestamps plus a random number is a dangerous way to
> > build UUIDs. The birthday paradox makes it pretty easy to
> > generate dupes inbetween clock ticks. I know, because my UUID
> > generator did that about 15 years ago.
> >
> >
> >
> > The usual solution requires that you know the resolution of
> > your clock. Fill the low-order bits of the timestamp with a
> > sequence number that increments for each UUID generated. When
> > that sequence field hits the limit, you busy-wait on the
> > clock until it ticks.
> >
> >
> >
> > That's what the DCE code did, and my non-unique UUIDs went
> > away after I did it too.
> >
> >
> >
> > wunder
> >
> > ==
> >
> > Walter Underwood
> >
> > walter.underw...@marklogic.com
> >
> >
> >
> > On Mar 29, 2010, at 9:35 AM, Darin McBeath wrote:
> >
> >
> >
> >
> >
> > Yes, that would likely be more efficient ... forgot that his
> > function had been added :-)
> >
> >
> >
> >
> >
> > 
> >
> > From: Michael Blakeley 
> > To: General Mark Logic Developer Discussion
> > 
> > Cc: "McBeath, Darin W (ELS-STL)" 
> > Sent: Mon, March 29, 2010 12:18:26 PM
> > Subject: Re: [MarkLogic Dev General] Unique key construction
> >
> > Wouldn't it be more efficient to add xdmp:elapsed-time()?
> >
> > -- Mike
> >
> > On 2010-03-29 08:19, McBeath, Darin W (ELS-STL) wrote:
> > > I believe this could be fixed (if necessary) by wrapping
> > the current-dateTime() call with an xdmp:eval.
> > >
> > > xdmp:eval("current-dateTime()")
> > >
> > > Darin.
> > >
> > > -Original Message-
> > > From: general-boun...@developer.marklogic.com on behalf of
> > Geert Josten
> > > Sent: Mon 3/29/2010 10:00 AM
> > > To: General Mark Logic Developer Discussion
> > > Subject: RE: [MarkLogic Dev General] Unique key construction
> > >
> > > Note: this code will return only one unique value per
> > transaction. current-datetime() returns the same value
> > throughout the transaction as MarkLogic Server is based on
> > point-in-time querying..
> > >
> > > Kind regar

Re: [MarkLogic Dev General] Unique key construction

2010-03-29 Thread Wayne Feick
We tend to use one of two approaches, depending on what we're doing.

For things like HTTP servers, we use xdmp:random() (it's in group.xsd as
the default value for http-server-id). 

For users and groups we use a hash based on the user or group name so
that we can detect an attempt to create the user or group multiple times
in the same transaction (via a conflicting update of the same URI).

Wayne.


On Sun, 2010-03-28 at 08:43 -0700, deepak mohan wrote: 

> Hi All,
> 
> Please tell me how ML constructs the unique ID when creating any ML
> entities (App Servers, databases, forests, etc.). I need to generate a
> unique ID. I tried digging into the ML Admin modules API, but I could not
> find the algorithm. Are they using random() and checking for existence?
> 
> Thanks,
> Deepak Mohanakrishnan.
> 
> 
> 


___
General mailing list
General@developer.marklogic.com
http://xqzone.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Invalid coercion and cts:query

2010-03-19 Thread Wayne Feick
Oh, and one other thing was pointed out to me offline. The
search:resolve() function is expecting an element, not a cts:query, so
you do actually need to convert the cts:or-query to an element. If you
change your third line to the following I think you'll be in good shape.

$query := <x>{cts:or-query((cts:query($englishQuery),
cts:query($germanQuery)))}</x>/element()

Wayne.

On Fri, 2010-03-19 at 11:24 -0700, Wayne Feick wrote:

> Hi Adam,
> 
> You don't need that cts:query() wrapper around everything, just use
> the cts:or-query() directly.
> 
> The cts:query() function creates a query from an XML representation of
> it (e.g. if you had placed a query into a document).
> 
> Wayne.
> 
> 
> On Fri, 2010-03-19 at 11:06 -0700, Adam Patterson wrote: 
> 
> > Hi, newbie to XQuery and Marklogic here. I’m trying to do something
> > like the following:
> > 
> >  
> > 
> > ...
> > 
> > $englishQuery := search:parse($queryString,
> > $englishOptions),
> > 
> > $germanQuery := search:parse($queryString,
> > $germanOptions),
> > 
> > $query :=
> > cts:query( cts:or-query((cts:query($englishQuery),
> > cts:query($germanQuery)))))),
> > 
> > $results := search:resolve($query,
> > bhccSearch:getSearchOptions($target, $page))
> > 
> > return ...
> > 
> >  
> > 
> > I have content which is English, and other content which is German,
> > and I am trying to build queries which will search both the English
> > and German content, and then build a cts:or-query from those, and
> > finally build a cts:query from the cts:or-query which I can use with
> > search:resolve to return results. I keep getting errors like this:
> > 
> >  
> > 
> > XDMP-ARGTYPE: (err:XPTY0004)
> > cts:query(cts:or-query((cts:word-query("pa", ("lang=en"), 1),
> > cts:word-query("pa", ("lang=de"), 1)))) -- arg1 is not of type
> > element()
> > 
> >  
> > 
> > So I guess the cts:or-query constructor is not returning me an
> > element as needed by the cts:query constructor. Can someone explain
> > why this is and how I should handle this?
> > 
> >  
> > 
> > Cheers,
> > 
> >  
> > 
> > Adam
> > 
> > 
> 
> 


___
General mailing list
General@developer.marklogic.com
http://xqzone.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Invalid coercion and cts:query

2010-03-19 Thread Wayne Feick
Hi Adam,

You don't need that cts:query() wrapper around everything, just use the
cts:or-query() directly.

The cts:query() function creates a query from an XML representation of
it (e.g. if you had placed a query into a document).

Wayne.


On Fri, 2010-03-19 at 11:06 -0700, Adam Patterson wrote: 

> Hi, newbie to XQuery and Marklogic here. I’m trying to do something
> like the following:
> 
>  
> 
> ...
> 
> $englishQuery := search:parse($queryString,
> $englishOptions),
> 
> $germanQuery := search:parse($queryString,
> $germanOptions),
> 
> $query :=
> cts:query( cts:or-query((cts:query($englishQuery),
> cts:query($germanQuery)))))),
> 
> $results := search:resolve($query,
> bhccSearch:getSearchOptions($target, $page))
> 
> return ...
> 
>  
> 
> I have content which is English, and other content which is German,
> and I am trying to build queries which will search both the English
> and German content, and then build a cts:or-query from those, and
> finally build a cts:query from the cts:or-query which I can use with
> search:resolve to return results. I keep getting errors like this:
> 
>  
> 
> XDMP-ARGTYPE: (err:XPTY0004)
> cts:query(cts:or-query((cts:word-query("pa", ("lang=en"), 1),
> cts:word-query("pa", ("lang=de"), 1)))) -- arg1 is not of type
> element()
> 
>  
> 
> So I guess the cts:or-query constructor is not returning me an element
> as needed by the cts:query constructor. Can someone explain why this
> is and how I should handle this?
> 
>  
> 
> Cheers,
> 
>  
> 
> Adam
> 
> 


___
General mailing list
General@developer.marklogic.com
http://xqzone.com/mailman/listinfo/general


Re: [MarkLogic Dev General] Need dls:document-update-version-file($uri, $node, $version) function

2010-03-17 Thread Wayne Feick
Your create trigger could skip calling dls:document-manage() if the
document is empty. That possibly combined with a periodic background
task that goes ahead and manages those empty documents if they aren't
updated within a minute or two would do the trick.

Are you planning on updating managed documents later on? You'll run into
permissions problems if you do that...



On Wed, 2010-03-17 at 14:16 -0700, Keith L. Breinholt wrote: 

> We have a need for users to add managed content via WebDAV.
> 
>  
> 
> This seemed pretty straight forward by adding a create trigger that
> calls dls:document-manage() on all documents on creation.
> 
>  
> 
> However, Win7 and Vista WebDAV clients both write a zero byte file
> first and then overwrite it with the real contents of the file. 
> 
>  
> 
> Because of this, version 1 of any document created by these Windows
> WebDAV clients is empty.  I’ve looked for any way to update version 1
> of a file but I don’t see any way of updating them except to delete
> the managed document and start over.
> 
>  
> 
> Is there another way?  If not we could really use a function
> dls:document-update-version-file( $uri, $node, $version ).
> 
>  
> 
> Keith L. Breinholt
> 
> Missionary & Public Affairs Portfolio
> 
> breinhol...@ldschurch.org
> 
> "Do what you can, with what you have, where you are." Theodore
> Roosevelt
> 
>  
> 
> 
> 
> 
> NOTICE: This email message is for the sole use of the intended
> recipient(s) and may contain confidential and privileged information.
> Any unauthorized review, use, disclosure or distribution is
> prohibited. If you are not the intended recipient, please contact the
> sender by reply email and destroy all copies of the original message.
> 
> 
> 




Re: [MarkLogic Dev General] "Hot Swapping" large data sets.

2010-03-17 Thread Wayne Feick
I'd suggest looking into collections or directories to constrain queries
to one set or the other such that one is the live set you're serving up
and the other is the set you're updating.
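For instance (the collection names here are illustrative, not from the thread), queries served to users can be pinned to whichever collection currently holds the live set:

```xquery
(: search only the "live" collection; documents being rebuilt in a
   "staging" collection stay invisible until the switch :)
cts:search(
  fn:collection(),
  cts:and-query((
    cts:collection-query("live"),
    cts:word-query("example"))))
```

Swapping then amounts to moving the updated documents into the live collection, or repointing which collection name the application treats as live.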

You might also consider using the Library Services API where the updates
operate on the most recent version of each document while you serve up
whatever version was most recent at some fixed point in time before the
updates began.

The third approach would be to use point in time queries (you'll need to
set the merge timestamp on the database's merge policy page) such that
you're serving up content from a fixed commit timestamp before your
changes while your update process is actively changing the database. We
don't generally recommend people use point in time queries since there
is almost always a better way to do what they want, but this particular
case is the one situation where it makes sense to consider it.

Wayne.


On Wed, 2010-03-17 at 05:23 -0700, Lee, David wrote: 

> I need to be updating some largish (1G+) sets of documents fairly
> atomically.
> 
> That is, I'd like to update all the documents and perform some
> operations like adding properties etc, 
> 
> then all at once make the updates visible.   The update process could
> take several hours.
> 
> Currently this document set shares the same forest as other document
> sets.
> 
> It's not possible to split these up because the app needs to cross-query
> across all the document sets. 
> 
>  
> 
> Any suggestions on how to accomplish this ?
> 
> 
> 
>  
> 
>  
> 
>  
> 
>  
> 
>  
> 
>  
> 
> 
> 
> David A. Lee
> 
> Senior Principal Software Engineer
> 
> Epocrates, Inc.
> 
> d...@epocrates.com
> 
> 812-482-5224
> 
>  
> 
> 




RE: [MarkLogic Dev General] smtp relay on amazon ec2

2010-03-15 Thread Wayne Feick
The best way to send email from MarkLogic Server on an EC2 instance is to
configure MarkLogic server to use localhost, and then configure your
local mailer to do an authenticated (and possibly encrypted) relay
through some other SMTP server that you have access to.

Wayne.

On Mon, 2010-03-15 at 08:40 -0700, Lee, David wrote: 

> I just tried this on my ML server on EC2.
> Startup sendmail
> 
> 
>   service sendmail start
> 
> 
> Mail stuff
>   mail u...@domain.com
>   ...
> 
> 
> Worked to 1 domain but not another.
> 
> 
> There's nothing technically stopping you from sending email directly
> from EC2, you can use a local sendmail server as above (set your mail
> host to localhost) .. BUT
> The IP addresses of EC2 instances are known as transient/dynamic IP
> addresses and are blocked by many blacklist services so the chances of
> the email getting to an arbitrary user are low.
> If you are the recipient you may be able to configure your system to
> allow the mail.
> 
> An alternative is to run a sendmail server from a dedicated IP somewhere
> and route mail through that.
> 
> 
> 
> 
> David A. Lee
> Senior Principal Software Engineer
> Epocrates, Inc.
> d...@epocrates.com
> 812-482-5224
> 
> 
> 
> 
> -Original Message-
> From: general-boun...@developer.marklogic.com
> [mailto:general-boun...@developer.marklogic.com] On Behalf Of Andrew
> Welch
> Sent: Monday, March 15, 2010 6:57 AM
> To: General Mark Logic Developer Discussion
> Subject: [MarkLogic Dev General] smtp relay on amazon ec2
> 
> Hi,
> 
> Does anyone know if its possible to send email alerts from an instance
> running on Amazon EC2?  What should the SMTP Relay be set to?
> 
> thanks
> 




Re: [MarkLogic Dev General] ML: Task Server Persistence Issue

2010-03-08 Thread Wayne Feick
That is correct, and is the intended behavior. If you need guaranteed
execution, you may want to learn more by reading the Content Processing
Framework (CPF) documentation. At a high level, CPF is a state machine
implementation that allows you to add xquery main modules that act upon
documents. The CPF framework ensures that your processing happens even
if there is a server restart.

If CPF does not address your needs, I recommend you write state to a
database to track what processing needs to happen, and look at the
database online event as a way to reinitiate processing after a server
restart.

Wayne.
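A sketch of the database online event approach (the trigger name, user name, and module path are hypothetical; this would be run against the triggers database):

```xquery
import module namespace trgr = "http://marklogic.com/xdmp/triggers"
  at "/MarkLogic/triggers.xqy";

(: fires when the content database comes back online after a restart;
   the module re-spawns whatever work the state documents say is pending :)
trgr:create-trigger(
  "respawn-pending",
  "re-initiate processing after a server restart",
  trgr:trigger-database-online-event("admin"),
  trgr:trigger-module(xdmp:modules-database(), "/", "respawn-pending.xqy"),
  fn:true(),
  xdmp:default-permissions())
```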


On Mon, 2010-03-08 at 05:40 -0800, SinghDang, Balvinder (ELS-OXF)
wrote: 

> Hi All,
>  
> We are currently developing an application for storing XML Content in
> ML database, and noticed couple of issues 
>  
> Task Persistence Issue: Tasks queued in the Task Server do not seem to
> survive a server restart. This behaviour is in contrast to JMS servers
> that usually provide multiple persistence options (file journal,
> database etc).
>  
> Graceful Shutdown Issue: During a server shutdown, tasks being
> executed by Task Server at that point (in the 4 threads) “may” not
> complete execution. We checked this using a Task that takes 10sec (we
> used the sleep API) to execute. ML server shuts down in about 4 secs.
> We noticed that the entry into the task was logged, but the exit from
> the task was not logged - suggesting that the task started execution
> but did not complete it.
>  
> Just wanted to check, if there is a recommended workaround from
> MarkLogic to achieve message persistence? Any workaround that any of
> you would have used in any of the projects. 
>  
> Any help is greatly appreciated.
>  
> Thanks,
> Balvinder Dang
>  
>  
> 
> Elsevier Limited. Registered Office: The Boulevard, Langford Lane, 
> Kidlington, Oxford, OX5 1GB, United Kingdom, Registration No. 1982084 
> (England and Wales).




Re: [MarkLogic Dev General] re: Sending parameters with xdmp:http-post

2010-02-03 Thread Wayne Feick
Hi Bob,

Can you give me a little more information on why the server certificates
don't match their host names?

Re: ignoring the server certificate, xdmp:http-get() and related
functions allow you to specify a false option
to suppress any validation of the certificate. It looks like the docs
didn't get updated to include this option though. I'll file a bug on
that.

Note, however, that if you turn off certificate verification you leave
yourself open to using expired certificates, revoked certificates, and
man in the middle attacks. Disabling verification is intended more as a
short term fix while you get your certificates sorted out, and only to
be done with an understanding and acceptance of the security risks it
introduces.
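A sketch of the option Wayne describes (the URL is a placeholder):

```xquery
(: suppress certificate validation -- a short-term workaround only,
   with the security caveats noted above :)
xdmp:http-get(
  "https://dev-server.example.com/service.xqy",
  <options xmlns="xdmp:http">
    <verify-cert>false</verify-cert>
  </options>)
```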

Re: handling of your client certificate and private key, storing it in
the database is a reasonable approach. Just be sure to protect it
appropriately (e.g. visible only to some role that is given to an amped
function when needed).

Wayne.


On Wed, 2010-02-03 at 09:42 -0800, Runstein, Robert E. (Contr) (IS)
wrote:

> Thanks, Geert.
> 
> Next issue is connecting machine to machine with 2 way SSL. I have two
> problems:
> 
> First, in the development environment my destination server
> certificates do not match the server host name.  How can I tell
> MarkLogic to ignore hostname verification ? I’d rather not create
> self-signed certificates for each developer’s workstation acting as a
> server.
> 
> Second, I see where I pass the client cert in PEM format into the
> options node as well as the cert password.   I’m thinking that I could
> just save the cert as a text document in the database and read that in
> as needed.  Does that seem reasonable or is there a better way to do
> that (note that it will always be the same certificate for my
> application).
> 
> Bob
> 
> >Hi Robert,
> 
> >You need to encode the params into key-value pairs into the data
> element of the options, and specify url-encoded content type. That
> should do the trick:
> 
> >xdmp:http-post("http://localhost:/test/show-request.xqy",
> >  <options xmlns="xdmp:http">
> >    <headers>
> >      <content-type>application/x-www-form-urlencoded</content-type>
> >    </headers>
> >    <data>foo=bar</data>
> >  </options>)
> 
> >Kind regards,
> 
> >Geert
> 
> _
> From: Runstein, Robert E. (Contr) (IS)
> Sent: Tuesday, February 02, 2010 2:01 PM
> To: 'General Mark Logic Developer Discussion'
> Subject: Sending parameters with xdmp:http-post
> 
> I have an external service that requires two named parameters via HTTP
> POST.  I have sample code from the service provider for sending the
> parameters from Java, but as my processing is all in XQuery, I’d
> prefer to use xdmp:http-post.
> 
>  I see the examples for xdmp:http-post where an XML or SOAP document
> can be posted, but how do I specify the named parameters required by
> the service?  Do I need to create an HTML page containing a form with
> named input tags corresponding to the parameter names and pass the
> page within the options data element or is there another way?
> 
> e.g.,
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Bob
> 




Re: [MarkLogic Dev General] Is a merge required after using xdmp:node-replace (or insert)?

2010-02-03 Thread Wayne Feick
The system will do merges automatically, assuming you haven't disabled
them. You shouldn't need to manually initiate a merge.

You can look at the database status in the Admin UI to see the number of
deleted fragments that have not yet been merged away.

Wayne.



On Wed, 2010-02-03 at 09:03 -0800, Tim Meagher wrote: 

> Hi Folks,
> 
>  
> 
> I'm wondering what actually happens to content that is revised using
> xdmp:node-replace and if a merge is required afterwards.  (I have
> revised over 800,000 documents.)
> 
>  
> 
> Thank you!
> 
>  
> 
> Tim Meagher
> 
>  
> 
> 




Re: [MarkLogic Dev General] Attribute node replacement

2010-02-02 Thread Wayne Feick
xquery version "1.0-ml";

xdmp:document-insert("/test.xml", <one><two this="foo"/></one>);
fn:doc("/test.xml");
xdmp:node-replace(fn:doc("/test.xml")/one/two/@this, attribute that
{"bar"});
fn:doc("/test.xml")

On Tue, 2010-02-02 at 17:37 -0800, Tim Meagher wrote: 

> Would someone show me what the syntax is for replacing the value of an
> attribute node using xdmp:node-replace()?
> 
>  
> 
> Thank you!
> 
>  
> 
> Tim Meagher
> 
> 




RE: [MarkLogic Dev General] Regarding roles and permission in Marklogic

2010-01-22 Thread Wayne Feick
Not quite, Geert. A protected collection is only about who can add
documents to the collection. It has no impact on what permissions you
can put on documents in the collection.

Wayne.


On Fri, 2010-01-22 at 09:56 -0800, Geert Josten wrote:

> Judie,
> 
> Wayne makes a good point. You still need to add the necessary
> permissions to the document as well, the protected collection just
> makes sure you cannot add permissions to the document that are not
> allowed by the collection. You can make things easier by assigning
> default permissions to the user that is adding the documents.
> 
> When using protected collections, I do recommend to create 5 roles: 4
> to represent the access rights read, insert, update and execute, and a
> last one which you assign default permissions for the other 4 roles.
> You will see that this will give optimal flexibility.
> 
> For more thorough understanding of all security features, I recommend
> reading the Security Guide available here:
> http://developer.marklogic.com/pubs/
> 
> Kind regards,
> Geert 




RE: [MarkLogic Dev General] Regarding roles and permission in Marklogic

2010-01-22 Thread Wayne Feick
Just to clarify a little, it depends on what you mean by "access". Geert
is correct that you can use a protected collection to restrict which
roles can add documents to the collection, but if your goal is to
restrict who can read documents from the collection you'll need to
manage it with the individual document permissions.

Wayne.
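A sketch of the document-permission approach (role and collection names are from the thread; this assumes the URI lexicon is enabled so cts:uris works):

```xquery
(: grant read on every document already in Collection1 to Role1 only;
   note xdmp:document-set-permissions replaces existing permissions :)
for $uri in cts:uris((), (), cts:collection-query("Collection1"))
return xdmp:document-set-permissions(
  $uri,
  xdmp:permission("Role1", "read"))
```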


On Fri, 2010-01-22 at 06:07 -0800, Geert Josten wrote: 

> Hi Judie,
> 
> You can use the MarkLogic Server Admin interface to create a so-called 
> protected collection. Add it to the list of protected collections by going to 
> Security and then Collections. Hit the Create tab, enter the uri of the 
> collection you want to protect and add Role1 with the appropriate privileges 
> (read, insert, update, execute).
> 
> Kind regards,
> Geert
> 
> >
> 
> 
> Drs. G.P.H. Josten
> Consultant
> 
> 
> http://www.daidalos.nl/
> Daidalos BV
> Source of Innovation
> Hoekeindsehof 1-4
> 2665 JZ Bleiswijk
> Tel.: +31 (0) 10 850 1200
> Fax: +31 (0) 10 850 1199
> http://www.daidalos.nl/
> KvK 27164984
> The information sent in or with this email message originates from
> Daidalos BV and is intended exclusively for the addressee. If you have
> received this message unintentionally, we request that you delete it.
> No rights can be derived from this message.
> 
> 
> > From: general-boun...@developer.marklogic.com
> > [mailto:general-boun...@developer.marklogic.com] On Behalf Of
> > judie pearline
> > Sent: vrijdag 22 januari 2010 15:02
> > To: Mark Logic
> > Subject: [MarkLogic Dev General] Regarding roles and
> > permission in Marklogic
> >
> > Hi,
> >
> > Regarding the user security,
> >
> > I am having 2 roles Role1 and Role2. i just wanted to allow
> > only Role1 to access the collection Collection1. All the
> > other roles say Role2 and the roles that will be created in
> > future should not access Collection1.
> >
> > how we can achieve this in marklogic?
> >
> >
> > With Regards,
> > Judie
> >
> > 
> >
> >
> 




Re: [MarkLogic Dev General] ALERTING - Accessing Rule information in the main module mapped to an Action

2009-12-11 Thread Wayne Feick
Hi Sunil,

When the alerting API invokes your module, it passes in external
variables with the information you're looking for.


http://developer.marklogic.com/pubs/4.1/apidocs/alerting.html#alert:make-action

When a rule associated with the action matches a document, the
action's module will be invoked with the following external
variables set: 


declare variable $alert:config-uri as xs:string external;
declare variable $alert:doc as node() external;
declare variable $alert:rule as element(alert:rule) external;
declare variable $alert:action as element(alert:action) external;


All actions must accept these external variables.

Wayne.
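For Sunil's case below, $alert:rule identifies which rule fired, so a single common action module can branch on it. A minimal sketch (the log message is a stand-in for the feed-update logic):

```xquery
xquery version "1.0-ml";

import module namespace alert = "http://marklogic.com/xdmp/alert"
  at "/MarkLogic/alert.xqy";

declare variable $alert:config-uri as xs:string external;
declare variable $alert:doc as node() external;
declare variable $alert:rule as element(alert:rule) external;
declare variable $alert:action as element(alert:action) external;

(: the rule's name tells the common module which saved search matched :)
xdmp:log(fn:concat(
  "rule ", alert:rule-get-name($alert:rule),
  " matched ", xdmp:node-uri($alert:doc)))
```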



On Wed, 2009-12-09 at 23:55 -0800, Sunil Reddy wrote:
> Hi All,
> 
>  
> 
> I have been working on Alerting module for implementing RSS Feeds for
> saved search criteria.
> 
> My requirement is that, if some content enters the database that
> satisfies one/more saved search criteria ,  feed XMLs related to those
> one/more saved search criteria should be updated.
> 
> For this, I have created 
> 
>  
> 
> 1) One configuration 
> 
> 2) Multiple rules for multiple saved search criteria
> 
> 3) One action which invokes a main module that updates specified
> feed file.
> 
>  
> 
> Problem is with step 3, where I have to find out the rule which
> invoked this common action, there by executing a common update module.
> 
> Because this common update module has to update a feed xml file
> pertaining to the rule that got satisfied.
> 
>  
> 
> Kindly let me know if someone knows a way to access rule information
> in the main module which will be called by a common action module. 
> 
>  
> 
> Regards
> 
> Sunil Reddy
> 
>  
> 
>  
> 
> 
> 
> 
> 
> Disclaimer :-
> 
> "This communication contains information which is confidential and may also 
> be 
> legally privileged.It is for the exclusive use of the intended recipient/s.
> 
> If you are not the intended recipient of this mail, please delete it 
> immediately and notify the
> sender on +91 22 6660 6600 or by return e-mail confirming such deletion. Any 
> use, distribution, 
> disclosure or copying this electronic mail except by its intended recipient 
> may be unlawful.
> 
> Rave has scanned this email for viruses but does not accept any 
> responsibility once this email 
> has been transmitted."
> 
> 
> 


Re: [MarkLogic Dev General] Unexpected results when querying for included elements

2009-11-18 Thread Wayne Feick
Do you have the XInclude pipeline configured? You might be matching both
the expanded and the unexpanded documents.


On Wed, 2009-11-18 at 16:39 -0800, Stewart Shelline wrote:
> I’m having trouble understanding the following behavior. Without
> putting the document under DLS, I have inserted include statements in
> a file that refer to external files as follows:
> 
>  
> 
> Book element snippet:
> 
> 
> 
> …
> 
> 
> 
> 
> …
> 
> 
> 
> 
> 
> Contents of 03990_000_1-ne_001.xml:
> 
> 
> 
> 
> 
> 
> …
> 
> 
> 
> …
> 
> 
> 
> 
> 
> When I perform the following query, I am getting duplicate results in
> which both the reference to the chapter and the document containing
> the actual chapter element are returned:
> 
>  
> 
> 
> 
> {
> 
> for $chapter in doc()//chapter[referenceHeader/scriptureID/@book =
> "1-ne"]
> 
> return { xdmp:node-uri( $chapter ) }
> 
> }
> 
> 
> 
>  
> 
>  
> 
> 
> 1-ne/03990_000_1-ne_001.xml
> 
> http://lds.org/shared/gl/scriptures/eng/bofm/1-ne/03990_000_1-ne_001.xml
> 1-ne/03990_000_1-ne_002.xml
> 
> http://lds.org/shared/gl/scriptures/eng/bofm/1-ne/03990_000_1-ne_002.xml
> 
> …
> 
> 
> 
>  
> 
> I would have expected the query to return only actual chapter
> elements, not the references. Am I mis-using or misunderstanding
> include statements?
> 
>  
> 
>  
> 
> 
> 
> 
> NOTICE: This email message is for the sole use of the intended
> recipient(s) and may contain confidential and privileged information.
> Any unauthorized review, use, disclosure or distribution is
> prohibited. If you are not the intended recipient, please contact the
> sender by reply email and destroy all copies of the original message.
> 
> 
> 


Re: [MarkLogic Dev General] xdmp:post

2009-11-16 Thread Wayne Feick
Are you familiar with a packet capture program like wireshark on linux?
I typically use that to diagnose issues like this.

Does your java server set a content length for its response? If not, our
code will keep reading until the socket is closed. Is your Java code
closing the stream when it's done writing?

Wayne.


On Mon, 2009-11-16 at 13:58 -0800, Lee, David wrote:
> I’m experimenting with the http commands (my goal is to be able to run
> some sub-service which ML doesnt provide and return the results).
> 
>  
> 
> I got http-get to work great
> 
> Now I’m working on http:post
> 
>  
> 
> I have the following code
> 
>  
> 
> xdmp:http-post( 'http://localhost:1/test', 
> 
> <options xmlns="xdmp:http"><data>some data</data></options>)
> 
>  
> 
>  
> 
> I have a very simple Java app using the class
> com.sun.net.httpserver.HttpExchange
> 
> This works great with other apps that I use, but for MarkLogic I’m
> getting a hang.
> 
> The data is making it to the app,  I’m reading the input stream (body
> of post), and just echoing it back out.
> 
> Then closing all the streams and closing the app.
> 
> But the ML side just hangs.   If I “stop” the http server then ML
> picks up and displays the results just fine.
> 
> Its acting like it was either hanging on reading the results, or
> hanging on posting the results, not sure which and no idea where to
> look.
> 
> This may be a Java problem, not an ML problem, but I have other code
> doing POST to this same exact server and it doesn't have this problem,
> 
> so maybe there's something on the ML side I can tweak.
> 
>  
> 
> Any ideas on things to try ?
> 
>  
> 
> Thanks for any suggestions.
> 
>  
> 
>  
> 
>  
> 
>  
> 
>  
> 
> 
> 
> David A. Lee
> 
> Senior Principal Software Engineer
> 
> Epocrates, Inc.
> 
> d...@epocrates.com
> 
> 812-482-5224
> 
>  
> 
>  
> 
> 


Re: [MarkLogic Dev General] Question about TaskServer and Threads

2009-11-10 Thread Wayne Feick
1) You could build a cron-like service by describing each
when/condition/task combination in a document and then having a
scheduled task that looks for these documents, runs their conditions,
and spawns the task if the condition is met. There are a few ways you
could design something like this, but the key bit is to use a scheduled
task to see which things should be run. Serialized queries and
reverse-query might come in handy as well.

2) yes
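A rough sketch of point 1 (the job-document layout, the collection name, and the use of xdmp:eval for the condition are all assumptions, not from the thread):

```xquery
(: scheduled task: scan the job documents, evaluate each condition,
   and spawn the task module when the condition holds :)
for $job in fn:collection("cron-jobs")/job
where xdmp:eval(fn:string($job/condition))
return xdmp:spawn(fn:string($job/module))
```

Since the conditions come from stored documents, you would want to control who can write into the job collection.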


On Tue, 2009-11-10 at 12:43 -0800, Darin McBeath wrote:
> Assume that I have spawned a task to the TaskServer.
> 
> The spawned task checks for a 'condition'.  If this 'condition' is
> true(), then the task will complete.
> 
> If the 'condition' is false(), the task will sleep for a specified
> interval (such as 15 seconds) and then will spawn a task to do
> basically the same check.  This process will continue until the
> 'condition' evaluates to true.
> 
> Couple of questions.
> 
> 1. Is there a better way of accomplishing the goal outlined above?
> The above appears to work but was curious if others had a better idea.
> 2. When you use xdmp:sleep within a task on the Task Server, are you
> still consuming one of the threads which are configured for the Task
> Server during this period of sleep time?  I wouldn't want to run into
> a situation where all of my configured threads are doing this
> 'condition' check instead of the real work.
> 
> Thanks.
> 
> Darin.
> 


Re: [MarkLogic Dev General] Can't post JPG files...

2009-11-02 Thread Wayne Feick
Here's the important bits of the HTML form from one of our apps, which
looks to be the same as yours except for the xmlns="" bit.

<form method="post" action="..." enctype="multipart/form-data">
  <input type="file" name="docFile"/>
  <input type="submit"/>
</form>

Here is how it gets inserted into the database, which again looks very
much like what you're doing.

xdmp:document-insert(
  $furi,
  xdmp:get-request-field("docFile"),
  $perms)

In the html you provided, I don't see a space before method="post" but
I'm assuming that's just a copy/paste error.

What behavior are you seeing?

Does it work if you simplify the "$body := ..." line down to just
calling xdmp:get-request-field()?

Wayne.


On Mon, 2009-11-02 at 14:40 -0800, Keith L. Breinholt wrote:
> What is the correct way of receiving a ‘POST’ed file?
> 
>  
> 
> With the following HTML form and post-any.xqy code pass/post file of
> most any type except JPG.
> 
>  
> 
> <html xmlns="http://www.w3.org/1999/xhtml">
> 
> <form action="post-any.xqy"method="post" enctype="multipart/form-data">
> File to upload: <input type="file" name="upload"/>
> <input type="submit"/>
> </form>
> </html>
>  
> 
> 
> 
> Contents of post-any.xqy
> 
>  
> 
> let $filename := xdmp:get-request-field-filename( "upload" )
> let $type := xdmp:get-request-field-content-type( "upload" )
> let $body :=
>   if ( xdmp:get-request-method() eq "POST" or $type eq
> "application/x-www-form-urlencoded" )
>   then xdmp:get-request-field("upload")
>   else xdmp:get-request-body( xdmp:uri-format( $filename ) )
> let $uri := concat( "/test/", $filename )
> return xdmp:document-insert( $uri, $body )
> 
>  
> 
> What is the correct way of sending and receiving binary files?
> 
>  
> 
> Thanks,
> 
>  
> 
> Keith L. Breinholt
> 
> ICS Content & Media
> 
> breinhol...@ldschurch.org
> 
> "Do what you can, with what you have, where you are." Theodore
> Roosevelt
> 
>  
> 
> 
> 
> 
> NOTICE: This email message is for the sole use of the intended
> recipient(s) and may contain confidential and privileged information.
> Any unauthorized review, use, disclosure or distribution is
> prohibited. If you are not the intended recipient, please contact the
> sender by reply email and destroy all copies of the original message.
> 
> 
> 


Re: [MarkLogic Dev General] Error while creating alert - triggers

2009-10-26 Thread Wayne Feick
Judie,

The error indicates that the trigger is not associated with any alerting
configuration. Here is some example code showing creation of create and
modify triggers. The last three lines are key to associate the triggers
with your alerting configuration.

xquery version "1.0-ml";

import module namespace alert = "http://marklogic.com/xdmp/alert"
  at "/MarkLogic/alert.xqy";

let $uri := "http://acme.com/alert/message-board"
let $trigger-ids :=
  alert:create-triggers(
    $uri,
    trgr:trigger-data-event(
      trgr:directory-scope("/", "infinity"),
      trgr:document-content(("create", "modify")),
      trgr:pre-commit()))
let $config := alert:config-get($uri)
let $new-config := alert:config-set-trigger-ids($config, $trigger-ids)
return alert:config-insert($new-config)

Also note that by using a post-commit trigger, the document is already
gone from the database and you won't be able to match any alerting rules
to it. You may want to try a precommit trigger.

Wayne.


On Mon, 2009-10-26 at 05:12 -0700, judie pearline wrote:

> Hi all,
>  
> I have created an alert trigger for document deletes. When I tried to
> delete a document, it threw an error in the log.
>  
> Since the trigger was set to fire at post-commit, the document is
> deleted, but the action did not occur.
>  
> Error Log:
> 
> ALERT-TRIGGER (err:FOER): unknown trigger
>  in /MarkLogic/alert/trigger.xqy, on line 22,
>  in alert:do-trigger("/hamlet.doc", <trgr:id xmlns:trgr="http://marklogic.com/xdmp/triggers">17475202855453881487</trgr:id>)
>  [1.0-ml]
> $uri = "/hamlet.doc"
> $trigger = <trgr:id xmlns:trgr="http://marklogic.com/xdmp/triggers">17475202855453881487</trgr:id>
>  $doc = ()
>  $alert-uris = ()
>  $when = text{"post-commit"}
>  in /MarkLogic/alert/trigger.xqy, on line 39 [1.0-ml]
>  
> Please suggest.
>  
> Regards,
> Judie
> 
> 


Re: [MarkLogic Dev General] Alert- trigger problem

2009-10-26 Thread Wayne Feick
Dhivya,

There is a sample application included with the distribution, mentioned
at the end of the section on alerting in the search developers guide:

http://developer.marklogic.com/pubs/4.1/books/search-dev-guide.pdf

Did you verify that your triggers are really being created in your
triggers database?
Did anything show up in your server's error log?

Wayne.


On Sun, 2009-10-25 at 09:07 -0700, dhivya vijayakumar wrote:

> Hi team,
> 
> 
> 
> We are exploring the alerting feature of ML and created a sample
> alert using the following code. Please check the code below.
> We created the action, rule, alert config, and one trigger to invoke the
> alert, but when we load a document into the database the trigger does
> not fire. Can you check what's wrong in the code below? If possible,
> can somebody share sample code?
> 
> 
> with regards,
> Dhivya.v
> 
> 
> CODE WE CREATED.
> Alert Config creation:
> 
> (: run this a user with the alert-admin role :)
> 
> xquery version "1.0-ml";
> 
> import module namespace alert = "http://marklogic.com/xdmp/alert";
> 
> at "/MarkLogic/alert.xqy";
> 
> let $config := alert:make-config(
> 
> "my-alert-config-uri",
> 
> "My Alerting App",
> 
> "Alerting config for my app",
> 
> <alert:options/> )
> 
> return
> 
> alert:config-insert($config)
> 
>  
> 
> Alerting in the log file:
> 
> xdmp:log(fn:concat(xdmp:get-current-user(), " was alerted"))
> 
>  
> 
> Alert Action creationL
> 
> xquery version "1.0-ml";
> 
> import module namespace alert = "http://marklogic.com/xdmp/alert";
> 
> at "/MarkLogic/alert.xqy";
> 
> let $action := alert:make-action(
> 
> "xdmp:log",
> 
> "log to ErrorLog.txt",
> 
> xdmp:modules-database(),
> 
> xdmp:modules-root(),
> 
> "/alert-action.xqy",
> 
> <alert:options>put anything here</alert:options> )
> 
> return
> 
> alert:action-insert("my-alert-config-uri", $action)
> 
>  
> 
>  
> 
> Alert Rule creation:
> 
> xquery version "1.0-ml";
> 
> import module namespace alert = "http://marklogic.com/xdmp/alert";
> 
> at "/MarkLogic/alert.xqy";
> 
> let $rule := alert:make-rule(
> 
> "simple",
> 
> "hello world rule",
> 
> 0, (: equivalent to xdmp:user(xdmp:get-current-user()) :)
> 
> cts:word-query("hello world"),
> 
> "xdmp:log",
> 
> <alert:options/> )
> 
> return
> 
> alert:rule-insert("my-alert-config-uri", $rule)
> 
>  
> 
>  
> 
> Alert Trigger creation:
> 
> xquery version "1.0-ml";
> 
>   import module namespace alert = "http://marklogic.com/xdmp/alert"; 
> 
>  at "/MarkLogic/alert.xqy";
> 
>  
> 
>   let $uri := " my-alert-config-uri "
>   let $trigger-ids :=
>     alert:create-triggers(
>       $uri,
>       trgr:trigger-data-event(
>         trgr:directory-scope("/", "infinity"),
>         trgr:document-content(("create", "modify")),
>         trgr:pre-commit()))
>   let $config := alert:config-get($uri)
>   let $new-config := alert:config-set-trigger-ids($config, $trigger-ids)
>   return alert:config-insert($new-config)
> 
>  
> 
> 


Re: [MarkLogic Dev General] Re: Saving a document over XCC?

2009-10-07 Thread Wayne Feick
Jeroen,

If you push the xdmp:security-assert() into a separate function called
from your amp'd function, you'll get your expected behavior (assuming
your amp is configured appropriately).

The xdmp:security-assert() call does not take into consideration any
amp'd roles on the immediate function, only those that were in effect
prior to calling the function. 

Wayne.
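A sketch of that structure (function and privilege names are hypothetical; the amp would be configured on local:outer):

```xquery
declare function local:inner()
{
  (: the amp'd roles granted to local:outer ARE in effect here :)
  xdmp:security-assert("http://example.com/priv/my-privilege", "execute")
};

declare function local:outer()
{
  (: an assert placed directly in this amp'd function would NOT
     see the amp's roles, per the explanation above :)
  local:inner()
};

local:outer()
```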


On Wed, 2009-10-07 at 05:01 -0700, Jeroen Pulles wrote:

> Hi,
> 
> I want to use an amp to get to the role names for the role id's on the
> document permissions. So I add my user's role to the get-role-names
> amp.
> 
> How come I still get a privilege exception for this user?
> 
> My understanding of amps is that once a role has the amp token for a
> function, that role has root powers that include any privilege inside
> the function body.
> 
> SEC-PRIV:
> xdmp:security-assert("http://marklogic.com/xdmp/privileges/get-role-names",
> "execute") -- Need privilege:
> http://marklogic.com/xdmp/privileges/get-role-names
> 
> in /MarkLogic/security.xqy, on line 707
> expr:
> xdmp:security-assert("http://marklogic.com/xdmp/privileges/get-role-names",
> "execute"),
> 
> in sec:get-role-names(xs:unsignedLong("5500450759246938400"))
> in /content/save_check_role-names.xqy, on line 9
> 
> regards,
> Jeroen
> 


Re: [MarkLogic Dev General] Reg: version update

2009-09-29 Thread Wayne Feick
Hi Ashwini,

The library services API requires that you insert a complete document,
and does not allow you to use the node manipulation functions such as
xdmp:node-insert-child().

Wayne.
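
A sketch of the usual workaround, assuming a hypothetical document with a `<doc>` root element: rebuild the complete document with the new child appended and hand the whole node to dls:document-update(), which creates the new version:

```
xquery version "1.0-ml";
import module namespace dls = "http://marklogic.com/xdmp/dls"
  at "/MarkLogic/dls.xqy";

(: hypothetical URI and child element :)
let $uri := "/docs/example.xml"
let $old := fn:doc($uri)/doc
(: rebuild the whole document, appending the extra child :)
let $new :=
  <doc>{
    $old/@*,
    $old/node(),
    <note>added in this version</note>
  }</doc>
return
  dls:document-update($uri, $new, "added a note", fn:true())
```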


On Tue, 2009-09-29 at 06:47 -0700, Ashwini wrote:

> Hi All,
> 
> I am maintaining versions of documents in my project.
> Initially I place each document under version management.
> I am facing a problem with node updates:
> I want to insert a child node in the document using
> "xdmp:node-insert-child()".
> How can I achieve versioning of updated nodes in a document?
> How do I use xdmp:node-insert-child() as part of dls:document-update()?
> 
> Thank you all in advance for sparing the time to answer.
> 
> 
> regards,
> Ashwini


Re: [MarkLogic Dev General] general serialization of cts

2009-09-24 Thread Wayne Feick
All of the cts queries serialize/deserialize as a feature of the server.
Placing a query into an element constructor turns it into its XML
representation, and the cts:query() constructor turns the XML
representation back into a query.

let $cts-e as cts:query := cts:word-query("tilt", ("lang=en"), 1)
let $cts-e-q := <q>{$cts-e}</q>  (: any wrapper element name works :)
let $cts-e2 := cts:query($cts-e-q/element())

return ($cts-e2, xdmp:describe($cts-e2))
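
For the string round trip Paul was attempting, a sketch (the wrapper element name is arbitrary): serialize with xdmp:quote(), parse back with xdmp:unquote(), and reconstruct with cts:query() rather than `cast as`:

```
let $q as cts:query := cts:word-query("tilt", ("lang=en"), 1)
(: serialize the query's XML representation to a string :)
let $s := xdmp:quote(<wrap>{$q}</wrap>)
(: parse the string back and pull out the query element :)
let $node := xdmp:unquote($s)/element()/element()
(: cts:query() rebuilds the query; "cast as cts:query" does not work here :)
return cts:query($node)
```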




On Thu, 2009-09-24 at 13:28 -0700, Paul M wrote:

> let $dc as element() := <dc>123</dc>
> let $dc-q := <q>{xdmp:quote($dc)}</q>
> let $dc-qt := $dc-q/text()
> let $dc-uq as element() := xdmp:unquote($dc-qt,"","format-xml")/node()
> 
> let $cts-e as cts:query := cts:word-query("tilt", ("lang=en"), 1)
> let $cts-e-q := <q>{xdmp:quote($cts-e)}</q>
> let $cts-e-qt := $cts-e-q/text()
> let $dc-uq as cts:query :=
> xdmp:unquote($cts-e-qt,"","format-xml")/node() cast as cts:query
> (:this does not work:)
> 
> 
> return $cts-e-q
> 
> The $dc-uq stmt does not work. Cast exception. I am attempting to
> create general method to serialize any cts:query without using lib
> parser. Even possible? xdmp:quote works on any item,correct?
> 
> 
> 


Re: [MarkLogic Dev General] dls:document-manage doesn't work for non-xml files?

2009-09-10 Thread Wayne Feick
It does work on non-XML files. The error indicates that the document
"/myDirectory/sample_png_versions/1-sample.png" does not exist. Did you
invoke this module before completing the transaction that inserted the
document? 

Wayne.
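
A sketch of one way to avoid the ordering problem, using dls:document-insert-and-manage() so the insert and the move under management happen in one call (the filesystem source path is hypothetical; the URI is taken from the error report):

```
xquery version "1.0-ml";
import module namespace dls = "http://marklogic.com/xdmp/dls"
  at "/MarkLogic/dls.xqy";

(: inserts the document and places it under management in the same
   transaction, so it is guaranteed to exist when the version
   properties are written :)
dls:document-insert-and-manage(
  "/myDirectory/sample.png",
  fn:false(),                             (: retain-history :)
  xdmp:document-get("/tmp/sample.png"),   (: hypothetical source path :)
  "Initial version")
```

Alternatively, run the plain document insert and the dls:document-manage() call as two separate statements (separated by `;`) so the insert commits before management begins.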



On Thu, 2009-09-10 at 14:45 -0700, Sundar Iyer wrote:
> I'm evaluating ML 4.0-1 and found that dls:document-manage doesn't
> work for non-xml objects:
>  
> The XQuery Code that I tried is as below;
> xquery version "1.0-ml";
> 
> import module namespace dls = "http://marklogic.com/xdmp/dls"
>   at "/MarkLogic/dls.xqy";
> 
> declare variable $uri as xs:string external;
> 
> dls:document-manage($uri,
>   fn:false(),
>   "Initial version"
> )
> 
> 
>  
>  
> The error message is as below:
> com.marklogic.xcc.exceptions.XQueryException: XDMP-DOCNOTFOUND:
> xdmp:document-set-property("/myDirectory/sample_png_versions/1-sample.png",
> ...) -- Document not found
> 
> in /MarkLogic/dls.xqy, on line 1396
> expr: xdmp:document-set-property("/myDirectory/sample_png_versions/1-sample.png", ...),
> 
> in dls-document-change-properties("/myDirectory/sample_png_versions/1-sample.png", ..., 2)
> in /MarkLogic/dls.xqy, on line 1146
> expr: xdmp:document-set-property("/myDirectory/sample_png_versions/1-sample.png", ...),
> 
> in document-insert-version("/myDirectory/sample.png",
>   fn:doc("/myDirectory/sample.png"), "Initial version", ...,
>   (), 0, xs:unsignedLong("2015473912904977339"), fn:true())
> in /MarkLogic/dls.xqy, on line 1280
> expr: xdmp:document-set-property("/myDirectory/sample_png_versions/1-sample.png", ...),
> 
> in _document-manage("/myDirectory/sample.png", fn:false(),
>   "Initial version", fn:false(), ())
> in /MarkLogic/dls.xqy, on line 144
> expr: xdmp:document-set-property("/myDirectory/sample_png_versions/1-sample.png", ...),
> 
> in dls:document-manage("/myDirectory/sample.png", false(),
>   "Initial version")
> in /eval, on line 5
> expr: xdmp:document-set-property("/myDirectory/sample_png_versions/1-sample.png", ...)
> 
> at com.marklogic.xcc.impl.handlers.ServerExceptionHandler.handleResponse(ServerExceptionHandler.java:31)
> at com.marklogic.xcc.impl.handlers.EvalRequestController.serverDialog(EvalRequestController.java:68)
> at com.marklogic.xcc.impl.handlers.AbstractRequestController.runRequest(AbstractRequestController.java:72)
> at com.marklogic.xcc.impl.SessionImpl.submitRequest(SessionImpl.java:280)
> at com.cengage.marklogic.upload.InsertAndManage.xml(InsertAndManage.java:68)
> at com.cengage.cms.server.upload.uploadfile(upload.java:51)
> at com.cengage.cms.server.upload.processRequest(upload.java:35)
> at com.cengage.cms.server.upload.doPost(upload.java:116)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
> at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
> at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:36
