I had this dilemma about 3 years ago with my standalone energy calculation
software for architects. I tried a few combos, such as using cloud MySQL with
4D standalone, etc., which worked OK. In the end I went with browser and
lightning and 4D. It depends on how dependent you are on 4D UI features.
I see only 2 alternatives there: either append text to a blob directly, or
append to a text variable and then move it to a blob.
I ran a quick test using both alternatives, and a series of TEXT TO BLOB calls
is more than 10x faster than appending to a text variable and then moving it
to a blob.
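For the archives, the faster alternative can be sketched like this (a minimal sketch; the method name and parameters are my own, not Julio's actual test code):

```4d
  // BLB_appendText - sketch of appending via TEXT TO BLOB with an offset
  // $1 : POINTER  -> the blob to append to
  // $2 : TEXT     -> the text to append
C_POINTER($1)
C_TEXT($2)
C_LONGINT($offset)

$offset:=BLOB size($1->)  // start writing at the current end of the blob
TEXT TO BLOB($2;$1->;UTF8 text without length;$offset)  // appends in place
```

Passing the offset variable makes TEXT TO BLOB write at that position instead of replacing the blob's contents, which avoids the repeated copying of an ever-growing text variable.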
Many thanks Chip
This is a really simple solution and works like a dream!
Regards
Michael Jarosz
--
View this message in context:
http://4d.1045681.n5.nabble.com/Restricting-a-query-tp5753437p5753463.html
Sent from the 4D Tech mailing list archive at Nabble.com.
Hi Keith,
I suppose it could be argued that ST Get Plain Text gets plain (i.e.,
unstyled) text ... text in its raw form ... but I agree with your point of
view.
Pat
On 31 July 2017 at 15:09, Keith Culotta via 4D_Tech <4d_tech@lists.4d.com>
wrote:
> This is something to be aware of if you plan to
Hi David,
Try this method :
//
//@4ddoc-start : en
//@name : BLB_appendText
//@scope : public
//@deprecated : no
//@description : This function will append some text to a blob
//@parameter[1-INOUT-blobP
I use this model too, with a C_OBJECT and ARRAY BLOB. Element 0 is the JSON
manifest; any types not easily represented in JSON are other elements in the
blob array. This makes it easy to pack all 4D data types into a single unit. I
use it to make remote procedure calls from one 4D version to another.
Well, it looks like this has been declared standard behavior. I can't say it's
unexpected to see the words "style" and "cosmetic" in the same vicinity. I
question their use with a text "transformation". This is more about user
expectations. "a" and "A" have always been understood, by technic
Care to expand on this?
I do not understand this part:
> arbitrary data are referenced by a number in the JSON,
> which is the element number for the BLOB array.
Thanks
On Tue, 1 Aug 2017 07:54:15 -0500, John DeSoi via 4D_Tech wrote:
> I use this model too with a C_OBJECT and ARRAY BLOB. Element
Jarosz
Actually you don’t need to build your own query editor. All you actually need
to do is make sure you wrap any calls to the query editor (and to any query in
general) so you can apply a client filter to any query.
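A rough sketch of such a wrapper (the table, field, and variable names are assumptions for illustration, not actual code from this thread):

```4d
  // QRY_WithClientFilter - hypothetical wrapper around the query editor
QUERY([Invoices])  // called with no criteria, 4D displays the query editor
If (OK=1)  // the user validated a query
	  // narrow whatever was found to the current client's records
	QUERY SELECTION([Invoices];[Invoices]ClientID=<>currentClientID)
End if
```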
> On 31 Jul 2017, at 19:38, Kirk Brooks via 4D_Tech <4d_tech@lists.4d.com>
Thanks for the help everyone, you're very kind. Bruno and Julio, thanks for
posting detailed code - I'll steal shamelessly. Julio, thanks for going to
the trouble of testing this out...the speed difference you found is pretty
substantial.
Hi Milan,
Thanks for your mail. Could you please elaborate more about the significance
of the number of VMs on the same network interface?
I’m totally new to Windows Server 2016, so am quite clueless as to its
requirements…
Also, say given a datafile size of about 6 GB, with about 30 user
Hi Paul,
Do you use a web account of your own to serve it, or do you use some sort of
"cloud" setup through somebody like AWS, IBM, etc? Do you have an idea of the
transaction volume, and how often do you need to check db integrity?
Thank you,
Don
>I had this dilemma about 3 years ago with my
For example, if you want to represent any type of 4D variable in an object,
each property ($oValue below) is an object with (1) the type, and (2) the
value OR the index of the value in the blob array. So if I pass a pointer to
store a text property, it might be simply
OB SET($oValue;"type";
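A completed sketch of that scheme (the property names "type", "value", and "index", and the use of the type constants, are my assumptions, not John's actual manifest layout):

```4d
C_OBJECT($oValue)
C_BLOB($vBlob)
ARRAY BLOB($aBlobs;0)

  // a text value fits in the JSON manifest directly
OB SET($oValue;"type";Is text)
OB SET($oValue;"value";"some text")

  // a non-JSON-friendly value goes into the blob array instead
APPEND TO ARRAY($aBlobs;$vBlob)  // $vBlob holds the packed value
OB SET($oValue;"type";Is picture)
OB SET($oValue;"index";Size of array($aBlobs))  // element number in the BLOB array
```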
Lots of good answers, thanks for another one John.
I'm starting with standard rows with UUIDs, strings, longs, reals, and
perhaps text. I need to go for maximum speed...most of the work on tuning
is on the Postgres side. Their high-speed entry command is called COPY IN
and Rob's plug-in supports it.
Keith,
If you used ST Get Text instead of Get Plain Text, then you maybe could
strip out the style tags and end up with the correct plain text? (I haven't
tried this).
Pat
On 1 August 2017 at 15:08, Keith Culotta via 4D_Tech <4d_tech@lists.4d.com>
wrote:
> Well, It looks like this has been decla
If you want the best performance and don't have a requirement to use the
plugin, I'd bet the fastest way to bulk load with Postgres is going to be using
files. If you are on the same machine, you can COPY directly from a file if you
have superuser access. If not, you can still launch psql and use \copy.
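A sketch of the file-based route (paths, table, and column names are assumptions): write out the COPY text format, tab-separated with one row per line, then hand it to psql's client-side \copy, which does not require superuser access.

```4d
C_TEXT($rows;$cmd;$in;$out;$err)

  // COPY text format: tab between columns, linefeed between rows
$rows:="1"+Char(Tab)+"Alice"+Char(10)+"2"+Char(Tab)+"Bob"+Char(10)
TEXT TO DOCUMENT("/tmp/people.txt";$rows)

$cmd:="/usr/local/bin/psql -d mydb -c "+Char(34)+"\copy people(id,name) from '/tmp/people.txt'"+Char(34)
LAUNCH EXTERNAL PROCESS($cmd;$in;$out;$err)
```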
I'm confused. He's getting the right plain text. Isn't he saying that if there
were an emphatic style that displayed "." as "!", then the plain text ought to
change to "!" too?
> On Aug 1, 2017, at 1:05 PM, Pat Bensky via 4D_Tech <4d_tech@lists.4d.com>
> wrote:
>
> Keith,
> If you used ST Get Text
Pat,
ST Get Text shows the tag in a regular text area:
style="text-transform:uppercase", so maybe there is a way to strip the tags and
then put the text back into a styled object. I can see someone writing a
utility one day for a customer who needs thousands of documents fixed.
Thanks,
Keith
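Such a utility might start as simply as a loop that strips anything tag-shaped from the styled text (a sketch only; the form object name is an assumption, and this keeps just the raw characters):

```4d
C_TEXT($styled;$plain)
C_LONGINT($open;$close)

$styled:=ST Get text(*;"myTextObject")  // styled text, tags included
$plain:=$styled
Repeat
	$open:=Position("<";$plain)
	$close:=Position(">";$plain)
	If (($open>0) & ($close>$open))
		$plain:=Substring($plain;1;$open-1)+Substring($plain;$close+1)
	End if
Until (($open=0) | ($close<=$open))
```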
On Aug 1, 2017, at 3:42 PM, David Adams via 4D_Tech <4d_tech@lists.4d.com>
wrote:
>
> I'm starting with standard rows with UUIDs, strings, longs, reals, and
> perhaps text. I need to go for maximum speed...most of the work on tuning
> is on the Postgres side. Their high-speed entry command is cal
John,
Great idea - I was planning on trying out files after going with the
plug-in. I hadn't thought of your idea of parallelizing the task via files,
that's genius. Unfortunately, I think that these will be cooperative
threads, as neither plug-ins nor LEP are accessible from a pre-emptive
thread.
Spence,
As per Write Pro standard behavior, I am getting the right text. But, this
standard behavior seems more like "What you see is what you don't get". You
see "UPPERCASE TEXT". Hand the data off to another program and you can get
"Uppercase Text", "UPPErcase Text", or "upperCASE TExt"...
Jim,
Thanks for the suggestions, they're excellent.
Partly, I'm using this current task as an opportunity to dig into the
subject of bulk imports in Postgres. So far, I've figured out:
* Prepared statements don't deliver any real speed gain.
* Yes, bundling operations into transactions with exp
>
> COPY is supposed to be many times faster than even multi-row inserts, this
> StackOverflow answer has a short explanation:
>
> https://stackoverflow.com/a/32045034/980575
For the sake of the archives, I thought I'd post a couple more links on
this subject to augment Jim's link:
Depesz see
On Aug 1, 2017, at 4:55 PM, David Adams via 4D_Tech <4d_tech@lists.4d.com>
wrote:
>
> Great idea - I was planning on trying out files after going with the
> plug-in. I hadn't thought of your idea of parallelizing the task via files,
> that's genius. Unfortunately, I think that these will be coope
Jim,
Awesome, I'll check out that SO thread. Back to your idea: you can't use
LEP from within a pre-emptive process; it just isn't supported. But given a
large enough task (one worth parallelizing), it's not hard to imagine the
cost of dividing work and marshaling results could pay for itself:
Co
While converting an application from 2004, it appears that Quick Reports in
2004 are not compatible with 15.4.
I've searched through a few sets of docs but can't find where this is
documented. I'd appreciate it if some good soul could confirm that / cite a
reference for it.
Assuming that they
Awesome, thanks!
On Tue, Aug 1, 2017 at 17:04 John DeSoi wrote:
> David,
>
> You just need LEP at the end after the file is built, so you can just
> launch that in another process that returns immediately. On the Mac I use
> something like the line below to create an executable file and launch i
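John's actual line is truncated above; the general shape of the idea, with assumed paths and names (not his code), is to hand the load to psql from a worker process:

```4d
  // run from a process started with New process, so the caller
  // returns immediately; LAUNCH EXTERNAL PROCESS blocks only this worker
C_TEXT($in;$out;$err)
LAUNCH EXTERNAL PROCESS("/usr/local/bin/psql -d mydb -f /tmp/bulk_load.sql";$in;$out;$err)
```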