Hi,
So I'm experimenting with the conversion option in MarkLogic (v5.0).
CPF is installed and enabled, conversion is set to true.
Import of docx and pptx is via WebDAV.
However, conversion visibly doesn't take place.
I set logging to finest, so I see lots of skipped lines but no outright
Conversion is currently for Office 2003 documents and earlier.
With 2007/2010 we work with the XML directly. The Office Open XML Extract
pipeline will unzip the .docx and .pptx, and create the *_parts directory
containing their XML components.
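Once extraction has run, you can inspect the parts directly. A minimal sketch, assuming a hypothetical document at /docs/report.docx (adjust the URI to your own data; the *_parts naming follows the convention described above):

```xquery
(: List the URIs of the XML components that the Office Open XML
   Extract pipeline produced for a hypothetical /docs/report.docx. :)
for $part in xdmp:directory("/docs/report.docx_parts/", "infinity")
return xdmp:node-uri($part)
```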
Hope this helps,
Pete
-Original Message-
Pete,
thanks that's good to know, and it resolved my problem.
I attached the Office Open XML Extract pipeline to the Default
Documents domain, where it now sits alongside these pipelines:
Conversion Processing
Conversion Processing (Basic)
DocBook Conversion
HTML Conversion
MS Office
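In case it helps anyone searching the archive, the same attachment can be done programmatically. This is only a sketch assuming the standard CPF library modules; double-check the function names against your version's CPF API documentation, and run it against the triggers database for your target database:

```xquery
xquery version "1.0-ml";
import module namespace dom = "http://marklogic.com/cpf/domains"
  at "/MarkLogic/cpf/domains.xqy";
import module namespace p = "http://marklogic.com/cpf/pipelines"
  at "/MarkLogic/cpf/pipelines.xqy";

(: Look up the pipeline by name and attach it to the domain. :)
let $pipeline := p:get("Office Open XML Extract")
return dom:add-pipeline("Default Documents",
                        fn:data($pipeline/p:pipeline-id))
```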
Would an entertaining hack of the formula attribute be an option?
=thisCHAR(10)that
CHAR(13)?
- Original Message -
From: general-requ...@developer.marklogic.com
general-requ...@developer.marklogic.com
To: general@developer.marklogic.com
Cc:
Sent: Wednesday, February 22, 2012 9:48 AM
Hello again,
I've successfully installed the Word add-in and am able to search
using the sample provided in the download.
However, double-clicking a found paragraph does not insert it into
the currently open Word document. Probably things have changed from
2007 to 2010. Looking at the
Hi Jakob,
1) This add-in is supported for Word 2010.
Though the add-in will install with Office 2010, the XQuery API in the Toolkit
that is currently available on the Community site is only compatible with the
2007 flavor of WordprocessingML.
The TK has been updated for 2010 support and is
Thanks Pete,
That's extra quick! :)
I got the zip and am updating the MSI as we speak.
cheers,
Jakob.
On Wed, Feb 22, 2012 at 17:45, Pete Aven pete.a...@marklogic.com wrote:
We are testing some of our document ingestion routines, using JUnit and
some other libraries, and we need to understand what the best practice is
for dealing with the finite amount of time (often tens of seconds) required
to ingest successfully, and thus how to somehow delay our assertions (that
I don't understand. If you send an insert via XCC, the XCC call does not return
until the insert is complete. If you POST or PUT to an HTTP service, the HTTP
call does not return until the insert is complete. Why not simply run your test
after the insert call or service call returns?
Or is CPF
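When CPF is involved, the insert returning only means the document is committed; the pipeline still runs asynchronously afterward. One way to gate test assertions is to have the test harness poll CPF's processing state, which CPF records in the document's properties. A sketch (the URI is supplied by the test; note that a single query sees one timestamp, so the polling loop and sleep belong in the test harness, with each check issued as a separate request):

```xquery
(: Returns true once CPF has finished processing the document.
   Call repeatedly from the test harness, sleeping between attempts. :)
declare namespace cpf = "http://marklogic.com/cpf";
declare variable $uri as xs:string external;  (: e.g. "/docs/report.docx" :)

xdmp:document-properties($uri)//cpf:state
  = "http://marklogic.com/states/done"
```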
xdmp:elapsed-time() is reporting some confusing numbers from my Search API
snippet extension module. I have a function (actually several) that I would
like to time so I can target the slower parts for optimization. I am logging
the times, and they look like this:
2012-02-22 15:45:11.629 Info:
Hi Will,
xdmp:elapsed-time measures the overall elapsed time for the request, at the
instant it is called.
Inside Search API, at the start of processing the specific search we record
$t-minus-zero then subtract that from the elapsed-time at the end to produce
the total-time value. This is to
Michah,
Ah, I think so. Then would subtracting a finished elapsed-time from a start
elapsed time be a reasonable way to calculate an individual function call's
time? For example:
declare function s:do-snippet(
$result as node(),
$ctsquery as schema-element(cts:query),
$options as
Yes, that will work great. The profiler in Query Console is also very helpful.
Thanks, -m
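For completeness, here's a minimal sketch of that subtraction pattern; local:do-expensive-work is a stand-in for whatever snippet code you're actually timing:

```xquery
declare function local:do-expensive-work() {
  (: stand-in for the real function being timed :)
  fn:sum(1 to 100000)
};

let $t0 := xdmp:elapsed-time()   (: elapsed time so far in this request :)
let $result := local:do-expensive-work()
let $t1 := xdmp:elapsed-time()
return (
  xdmp:log(fn:concat("do-expensive-work took ", fn:string($t1 - $t0))),
  $result
)
```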
On Feb 22, 2012, at 4:58 PM, Will Thompson wrote: