Got it. Thank you very much!
Ling
Wayne Feick wrote:
> I'm not sure if there are any docs available outside the company.
>
> Generally speaking, it's what you'd expect in a transactional system;
> recovery records are written to the journal as a transaction progresses,
> and important records
I'm not sure if there are any docs available outside the company.
Generally speaking, it's what you'd expect in a transactional system;
recovery records are written to the journal as a transaction progresses,
and important records (prepare, commit) cause journal data to be synced
to disk and wa
Hello Wayne,
Thank you for your reply. Yeah, I found them. They are together with the
data, on our striped volume. I am interested in how they are managed,
such as when the log is written and when the I/O happens. Could you
point me to some materials I can read? Thanks!
Thanks,
Ling
Wayne Feick wrote:
Hi Ling,
Yes, we maintain a transaction journal for each forest that allows us to
recover committed transactions in the event of a failure. In each
forest's Journals directory you'll find files named Journal#, where # is
an integer. They are not human-readable.
Wayne.
On 06/22/2010 10:24 PM, Ling wrote:
Hello,
When I look at the logs, I only found OS logs and ML Server log files.
These logs tell what the ML Server did. In a traditional database, there
are redo/undo logs written when the database writes. Does ML Server
write such logs? Where are they, and when does the server write them and
flush them to disk? I
That's great!
(Nothing like avoiding exponential behavior.)
To avoid deep-equal you could add an attribute with an md5 of the
serialized form of the node.
This then becomes a single-value check instead of deep-equal.
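A minimal sketch of that idea in XQuery (the helper name is hypothetical; xdmp:md5 and xdmp:quote are MarkLogic built-ins):

```xquery
(: Hypothetical helper: copy an element, stamping it with an md5
   of its serialized form. Two stamped nodes can then be compared
   by their @md5 attributes instead of a full deep-equal.
   Assumes the input does not already carry an @md5 attribute. :)
declare function local:with-hash($n as element()) as element()
{
  element { node-name($n) } {
    attribute md5 { xdmp:md5(xdmp:quote($n)) },
    $n/@*,
    $n/node()
  }
};
```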
-Original Message-
From: general-boun...@developer.marklogic.com
Thanks a lot for your inputs.
My performance improved from 48 minutes to 6.84 seconds.
I still need to think about an alternative to deep-equal.
Regards,
Utsav Joshi
-Original Message-
From: general-boun...@developer.marklogic.com
[mailto:general-boun...@developer.marklogic.com] On Behalf Of Gee
Actually, the situation is a little more subtle than that. 1.0-ml
recognizes certain HTML entities in the *XQuery* parser. The XML parser
operates under the same rules in both places.
So if foo.xml contains
&copy;
then
xdmp:document-get("foo.xml")
will throw the invalid entity error in either dialect.
Hi Keith,
I think in the 1.0-ml dialect MarkLogic automatically recognizes the
HTML entities. In the 1.0 (strict) dialect, you would need to both add
an entity reference and escape the & in your XQuery. Something like
(using 4.1-6):
xquery version "1.0-ml";
declare namespace xdmp="http:/
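For illustration, one workaround in the strict dialect is to avoid the HTML entity name altogether and use the numeric character reference, which plain XML accepts without any entity declaration (the surrounding element here is made up):

```xquery
xquery version "1.0";
(: &#169; is the numeric character reference for the copyright
   sign; unlike the &copy; name, it needs no declaration :)
<p>Copyright &#169; 2010</p>
```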
The range lexicons (cts:element-value-ranges and
cts:element-attribute-value-ranges) were added in 4.0, and the Search API
(which uses these) was added in 4.1.
Sounds like a great reason to upgrade.
-Danny
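As a sketch of the 4.x range-lexicon call (the element name and bucket bounds below are made up, and a matching range index on the element is required):

```xquery
xquery version "1.0-ml";
(: Hypothetical example: bucket counts for a <price> element that
   has an int range index; the bounds split values into buckets
   (<10, 10-20, 20-30, >=30), returned most frequent first :)
cts:element-value-ranges(xs:QName("price"),
                         (10, 20, 30),
                         ("frequency-order", "descending"))
```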
Is this bucket search available or supported in ML ver 3.2?
You might want to start with the Buckets Example in Section 2.7.1 of
the Search Developer's Guide
(http://developer.marklogic.com/pubs/4.1/books/search-dev-guide.pdf).
SS
On the MarkLogic web site, there is a statement:
MarkLogic Server can analyze all of your content to produce
a “tag cloud” that shows the most used tags
I would like a list of all the tags in my database (ideally with their
frequencies). How do I do this?
-Dave
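One brute-force sketch of this (fine for small databases; for large ones a range index plus the lexicon functions would scale much better — the element names are simply whatever is in your content):

```xquery
xquery version "1.0-ml";
(: Count how often each element name (tag) occurs across all
   documents in the database, most frequent first :)
let $names :=
  for $e in collection()//*
  return local-name($e)
for $n in distinct-values($names)
let $c := count($names[. eq $n])
order by $c descending
return concat($n, ": ", $c)
```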
I think it's because they're not valid XML entities. Look at xdmp:tidy().
From: general-boun...@developer.marklogic.com
[mailto:general-boun...@developer.marklogic.com] On Behalf Of Keith L.
Breinholt
Sent: 22 June 2010 17:30
To: General Mark Logic Developer Discussion
Subject: [MarkLogic Dev
Keith L. Breinholt wrote:
Hi,
> I’ve looked through the document and it is referring to the
> &copy; HTML entity. However, this is a valid HTML entity, so
> why is it throwing this exception?
But XML is not HTML. And those entities are not "pre-declared"
in any way. So I expect that to fail. B
I'm calling xdmp:unquote() on some content sent to a page for storage.
However, on calling xdmp:unquote() with "repair-none" option it is throwing an
exception:
XDMP-DOCENTITYREF: xdmp:unquote("
"Do what you can, with what you have, where you are." Theodore Ro
I would like to hear the results of this.
As Geert says, your original code is exponential, and it's fairly easy
to turn it into linear.
But even linear, if you're talking about 100,000 elements I don't
believe you'll be able to do this in sub-second time.
That would mean each for() iteration could onl
Hi Joshi,
Some observations from a first glance. Don't loop over both old and new, but only
new, and only grab the appropriate mid element from old using a match on
element a. That will eliminate the exponential order. You can use cts functions
to guarantee you are using indexes to get the approp
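A hedged sketch of that linear approach using MarkLogic's map:map (available as of 4.1; the $old/$new parameters and the mid/a element names follow the thread's description but are otherwise assumptions):

```xquery
xquery version "1.0-ml";
(: Index the old mid elements by their <a> key once (one pass),
   then make a single pass over the new mids, looking each key up
   in the map -- linear overall instead of a nested loop :)
declare function local:changed-mids(
  $old as element(mid)*,
  $new as element(mid)*
) as element(mid)*
{
  let $index := map:map()
  let $build :=
    for $o in $old
    return map:put($index, string($o/a), $o)
  for $n in $new
  let $match := map:get($index, string($n/a))
  where empty($match) or not(deep-equal($n, $match))
  return $n
};
```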