Re: Another Revolution Success Story

2004-10-17 Thread Alex Tweedly
Reviving an old thread from a few months ago ...
At 15:02 01/07/2004 -0700, Richard Gaskin wrote:
If CSV were consistently implemented, CSV2TabNew would work excellently 
right out of the box, but since some CSVs escape quotes by doubling them I 
needed to add one line (see below) to also substitute doubled quote chars 
with the quote placeholder.

Bonus: since the added line reduces the number of quote characters, the 
function is now even faster.

Here's CSV2Tab3:
function CSV2Tab3 pData
  local tNuData -- contains tabbed copy of data
  local tReturnPlaceholder -- replaces cr in field data to avoid line
  --   breaks which would be misread as records;
  --   replaced later during display
  local tEscapedQuotePlaceholder -- used for keeping track of quotes
  --   in data
  local tInQuotedText -- flag set while reading data between quotes
  --
  put numtochar(11) into tReturnPlaceholder -- vertical tab as
  --   placeholder
  put numtochar(2)  into tEscapedQuotePlaceholder -- used to simplify
  --   distinction between quotes in data and those
  --   used in delimiters
  --
  -- Normalize line endings:
  replace crlf with cr in pData  -- Win to UNIX
  replace numtochar(13) with cr in pData -- Mac to UNIX
  --
  -- Put placeholder in escaped quote (non-delimiter) chars:
  replace ("\" & quote) with tEscapedQuotePlaceholder in pData
  replace (quote & quote) with tEscapedQuotePlaceholder in pData --

Unfortunately, there's a problem with this code; the heart of it is

  split pData by quote
  repeat for each element k in pData
    -- build up new string
  end repeat

and this is not guaranteed to work. The form "repeat for each element" will 
process the elements in the order of the keys of the array. Normally this 
is the correct order (because split by only a primary separator produces an 
array whose keys are consecutive integers), but there is no guarantee that 
they will always be processed in that order.

And I've found at least one case where they're not - I have a spreadsheet 
which works just fine up to 3904 lines - but add one more line and it fails 
completely.
(verified by "put the keys of pData after msg")

Changing
  repeat for each element k in pData
to
  repeat with tCounter = 1 to the number of lines in the keys of pData
    put pData[tCounter] into k
solves it. Obviously it will be slower - but "slow and correct" beats "fast 
and wrong" :-)
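
For reference, here is a minimal sketch (not from the original posts) of the 
corrected loop in context, using the variable names from the functions in this 
thread; the explicit numeric index keeps the alternating quoted/unquoted 
interpretation in step with the data no matter what order the keys come back in:

  -- iterate the split array by explicit numeric index rather than
  -- relying on whatever order "repeat for each element" happens to use
  put space before pData   -- to avoid ambiguity of starting context
  split pData by quote
  put false into tInsideQuoted
  repeat with tCounter = 1 to the number of lines in the keys of pData
    put pData[tCounter] into k
    if tInsideQuoted then
      replace cr with tReturnPlaceholder in k
      put false into tInsideQuoted
    else
      replace comma with tab in k
      put true into tInsideQuoted
    end if
    put k after tNuData
  end repeat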

-- Alex.

___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-07-01 Thread Richard Gaskin
Alex Tweedly wrote:
At 00:25 01/07/2004 -0700, Richard Gaskin wrote:
My post from 14 June 2002 with my own CSV2Tab function is at
.
Hats off to anyone who can improve its speed, and a bottle of
12-year-old single malt to anyone who can come up with an algorithm I
can use which is at least twice as fast.
Now there's a challenge I can relate to :-)
BUT - the speed of the conversion depends on the data ...
Enclosed below is a version of the script which is between 10% and 90%
faster - and probably has potential to go even faster than that.
It uses the same set up as you did - so there are no quotes left
except those around fields.
Then instead of walking through the data char by char, it uses
"split()" to divide into an array; the array elements must then
alternate between in-quotes and not-in-quotes.
Each array element has only the relevant processing applied.
Great stuff.  Using split is an ingenious way to reduce the load.
If CSV were consistently implemented, CSV2TabNew would work excellently 
right out of the box, but since some CSVs escape quotes by doubling them 
I needed to add one line (see below) to also substitute doubled quote 
chars with the quote placeholder.

Bonus: since the added line reduces the number of quote characters, the 
function is now even faster.

I ran 1000 iterations of all three algorithms on a small test file which 
uses the Excel escaping format of doubled quotes (I believe it's MS 
Access that uses slash-quote to escape, if memory serves).  Average 
times on my machine (G4 PowerBook) are roughly:

CSV2Tab:    438 ms
CSV2TabNew: 369 ms
CSV2Tab3:   195 ms

On a slightly more complex example the times were:

CSV2Tab:    892 ms
CSV2TabNew: 333 ms
CSV2Tab3:   153 ms
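
For anyone who wants to reproduce the comparison, a possible timing harness 
(not Richard's actual one; the file name is a placeholder) would look 
something like this:

  on mouseUp
    local tCSV, tStart
    put url "file:test.csv" into tCSV   -- placeholder sample file
    put the milliseconds into tStart
    repeat 1000 times
      get CSV2Tab3(tCSV)
    end repeat
    answer (the milliseconds - tStart) / 1000 && "ms per call"
  end mouseUp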
Here's CSV2Tab3:
function CSV2Tab3 pData
  local tNuData -- contains tabbed copy of data
  local tReturnPlaceholder -- replaces cr in field data to avoid line
  --   breaks which would be misread as records;
  --   replaced later during display
  local tEscapedQuotePlaceholder -- used for keeping track of quotes
  --   in data
  local tInQuotedText -- flag set while reading data between quotes
  --
  put numtochar(11) into tReturnPlaceholder -- vertical tab as
  --   placeholder
  put numtochar(2)  into tEscapedQuotePlaceholder -- used to simplify
  --   distinction between quotes in data and those
  --   used in delimiters
  --
  -- Normalize line endings:
  replace crlf with cr in pData  -- Win to UNIX
  replace numtochar(13) with cr in pData -- Mac to UNIX
  --
  -- Put placeholder in escaped quote (non-delimiter) chars:
  replace ("\""e) with tEscapedQuotePlaceholder in pData
  replace quote"e with tEscapedQuotePlaceholder in pData --
Please send me a private email and we'll make arrangements for the 
scotch delivery.  Thanks for the assist.

--
 Richard Gaskin
 Fourth World Media Corporation
 ___
 Rev tools and more:  http://www.fourthworld.com/rev

___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-07-01 Thread Mark Wieder
Richard-

Thursday, July 1, 2004, 12:25:54 AM, you wrote:

RG> [Semi-OT link - US Gov. warns against MS Explorer:
RG> ]

Thanks for the link. I hadn't seen that one. Not that I use IE unless
I absolutely *have* to, anyway...

RG> My post from 14 June 2002 with my own CSV2Tab function is at
RG> .

Looks good, but I think you mean "put false into tInQuotedText"
instead of "put empty..."

RG> IMNSHO, CSV2Tab should be a built-in function.  If there's some
RG> agreement on this and a willingness to vote for it I'll post the request
RG> to Bugzilla.

At the very least it should be enshrined at your web site. I wouldn't
have thought of combing through the archives for this. I've got it
pasted into Scripter's Scrapbook now.

-- 
-Mark Wieder
 [EMAIL PROTECTED]

___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-07-01 Thread Alex Tweedly
At 00:25 01/07/2004 -0700, Richard Gaskin wrote:
I asked around on this some time ago, including quite a few programmers 
far smarter than me.  The best algorithm we could come up with was one 
which walks through the data char by char, keeping track of when it's in 
field data and when it leaves the field, noting that commas are escaped
inconsistently in MS products and not all fields have their data
enclosed in quotes (FM Pro-exported CSV does, but it's a smarter tool in
general than most of the oddities that come out of Redmond ).

My post from 14 June 2002 with my own CSV2Tab function is at
.
Hats off to anyone who can improve its speed, and a bottle of
12-year-old single malt to anyone who can come up with an algorithm I
can use which is at least twice as fast.
Now there's a challenge I can relate to :-)
BUT - the speed of the conversion depends on the data ...
Enclosed below is a version of the script which is between 10% and 90% 
faster - and probably has potential to go even faster than that.

It uses the same set up as you did - so there are no quotes left except 
those around fields.

Then instead of walking through the data char by char, it uses "split()" to 
divide into an array; the array elements must then alternate between 
in-quotes and not-in-quotes.

Each array element has only the relevant processing applied.
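
As a quick illustration (not from the post) of the alternation the split 
relies on:

  on mouseUp
    local tData
    put "a," & quote & "b,c" & quote & ",d" into tData  -- the record: a,"b,c",d
    split tData by quote
    -- tData[1] is "a,"   (outside the quotes)
    -- tData[2] is "b,c"  (inside the quotes - its comma must not become a tab)
    -- tData[3] is ",d"   (outside the quotes)
    answer tData[2]
  end mouseUp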
Note - the speed of the original is (roughly) proportional to the number of 
characters, while the speed of the new version is (very roughly) proportional 
to the number of quoted fields - so for a file of mainly short fields, all of 
which are quoted, it is only 10% or so faster (and there could be cases 
where it would even be slower). For a file with many unquoted fields, or 
where each field is quite large, it will be significantly faster.

function CSV2TabNew pData
  local tNuData -- contains tabbed copy of data
  local tReturnPlaceholder -- replaces cr in field data to avoid line
  --  breaks which would be misread as records;
  --  replaced later during display
  local tEscapedQuotePlaceholder -- used for keeping track of quotes in data
  local tInsideQuoted -- flag set while reading data between quotes
  --
  put numtochar(11) into tReturnPlaceholder -- vertical tab as placeholder
  put numtochar(2)  into tEscapedQuotePlaceholder -- used to simplify
  --   distinction between quotes in data and those
  --   used in delimiters
  --
  -- Normalize line endings:
  replace crlf with cr in pData  -- Win to UNIX
  replace numtochar(13) with cr in pData -- Mac to UNIX
  --
  -- Put placeholder in escaped quote (non-delimiter) chars:
  replace ("\" & quote) with tEscapedQuotePlaceholder in pData
  --
  put space before pData   -- to avoid ambiguity of starting context
  split pData by quote
  put false into tInsideQuoted
  repeat for each element k in pData
    if tInsideQuoted then
      replace cr with tReturnPlaceholder in k
      put k after tNuData
      put false into tInsideQuoted
    else
      replace comma with tab in k
      put k after tNuData
      put true into tInsideQuoted
    end if
  end repeat
  --
  delete char 1 of tNuData -- remove the leading space
  replace tEscapedQuotePlaceholder with quote in tNuData
  return tNuData
end CSV2TabNew
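
A possible call site (the file name is a placeholder; embedded returns come 
back as numtochar(11) and still need replacing at display time):

  on mouseUp
    local tTabbed
    put CSV2TabNew(url "file:export.csv") into tTabbed
    set the itemDelimiter to tab
    answer item 2 of line 1 of tTabbed -- second column of the first record
  end mouseUp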
Note also - this has about the same number of "fragilities" as the original 
(they both fail if the file is malformed in about the same number of 
ways). They also both fail if the original data contained any escape 
characters (i.e. "\" chars) - they would be doubled in the original data and 
should be checked for before the set-up.
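
A possible shape for that pre-check (hypothetical; it is not part of either 
function as posted):

  -- the converters assume "\" appears only as a quote-escape introducer,
  -- so genuine backslashes in the source data must be caught (or given
  -- their own placeholder) before the set-up steps run
  function checkCSVPreconditions pData
    if pData contains "\" then
      return "error: source data contains backslash characters"
    end if
    return empty -- empty means it is safe to proceed
  end checkCSVPreconditions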


IMNSHO, CSV2Tab should be a built-in function.  If there's some agreement 
on this and a willingness to vote for it I'll post the request to Bugzilla.
I'd suggest requesting that it be parameterized to handle the common 
variants of quoting and non-quoting. There's a good discussion of the 
problem (including ways it can go wrong beyond what we've talked about 
here), and a public domain implementation at
http://www.python.org/peps/pep-0305.html#id7

The interface is perhaps wrong for Transcript, but the range of solutions 
it covers would be a good place to start.
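
Purely as a sketch (the name, parameters and defaults here are hypothetical, 
not a proposed built-in), a parameterized converter along those lines might 
look like this, combining CSV2TabNew with the key-ordering fix discussed at 
the top of this thread:

  function CSV2TabVariant pData, pEscapeStyle
    local tReturnPlaceholder, tQuotePlaceholder, tInsideQuoted, tNuData, tChunk
    put numtochar(11) into tReturnPlaceholder -- stands in for cr inside fields
    put numtochar(2) into tQuotePlaceholder   -- stands in for quotes inside fields
    -- normalize line endings
    replace crlf with cr in pData
    replace numtochar(13) with cr in pData
    -- park escaped quotes, according to the caller's chosen convention
    if pEscapeStyle is "backslash" then
      replace ("\" & quote) with tQuotePlaceholder in pData
    else -- default: Excel-style doubled quotes
      replace (quote & quote) with tQuotePlaceholder in pData
    end if
    -- split by quote; elements alternate between unquoted and quoted text
    put space before pData
    split pData by quote
    put false into tInsideQuoted
    repeat with tCounter = 1 to the number of lines in the keys of pData
      put pData[tCounter] into tChunk
      if tInsideQuoted then
        replace cr with tReturnPlaceholder in tChunk
        put false into tInsideQuoted
      else
        replace comma with tab in tChunk
        put true into tInsideQuoted
      end if
      put tChunk after tNuData
    end repeat
    delete char 1 of tNuData -- remove the leading space added above
    replace tQuotePlaceholder with quote in tNuData
    return tNuData
  end CSV2TabVariant

A further parameter for the field delimiter (semicolons are common in some 
locales' exports) would cover more of the dialect variations the PEP discusses.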

-- Alex Tweedly.

___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-07-01 Thread Richard Gaskin
Mark Wieder wrote:
I really wish the csv format had never been invented. Separating
fields with tabs works much better, and separating them with
non-printing characters is better yet.
Amen to that, brother.  I guess the clue train doesn't stop in Redmond. ;)
[Semi-OT link - US Gov. warns against MS Explorer:
]
I once spent an evening with some friends trying to find a less 
efficient tabular format than CSV, and even with our best effort we 
couldn't think of a way to waste more clock cycles parsing as few 
characters as required by the absurd CSV format.

Extra bonus points that CSV is implemented differently in different MS
products (varying escape sequences).  It seems Redmond takes their own
formats as seriously as they take security concerns.
MisterX wrote:
You mean my importer didn't work?
Send me a small sample of a non working csv to my email (not the list)
and I'll see if it can be fixed. Let me also know which record is wrong.
Yours was a very smart effort, and for a moment I was hoping you'd
found the holy grail of scripting, an efficient means of parsing CSV.
Alas, if I read your algorithm correctly it parses line by line,
making the assumption that there are no returns in field data.  I had
tried that once myself, but my customers have since made it clear to me 
that CSV allows returns in data.  It seems the trick is to differentiate 
between return chars within data and returns used to delimit data, 
noting that they are not normally escaped in most products (FM Pro 
wisely substitutes them with a non-printing character, ASCII 11, but
Redmond shows no such wisdom).
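
For example (illustrative only), a single CSV record whose second field 
contains a return looks like two records to any line-oriented parser, which 
is why the converters earlier in this digest park such returns in 
numtochar(11) until display time:

  on mouseUp
    local tRecord
    -- one logical record: Smith,"123 Main St<return>Suite 4",NY
    put "Smith," & quote & "123 Main St" & cr & "Suite 4" & quote & ",NY" into tRecord
    answer the number of lines in tRecord -- reports 2, though there is only one record
  end mouseUp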

I asked around on this some time ago, including quite a few programmers 
far smarter than me.  The best algorithm we could come up with was one 
which walks through the data char by char, keeping track of when it's in 
field data and when it leaves the field, noting that commas are escaped
inconsistently in MS products and not all fields have their data
enclosed in quotes (FM Pro-exported CSV does, but it's a smarter tool in
general than most of the oddities that come out of Redmond ).

My post from 14 June 2002 with my own CSV2Tab function is at
.
Hats off to anyone who can improve its speed, and a bottle of
12-year-old single malt to anyone who can come up with an algorithm I
can use which is at least twice as fast.
IMNSHO, CSV2Tab should be a built-in function.  If there's some 
agreement on this and a willingness to vote for it I'll post the request 
to Bugzilla.

--
 Richard Gaskin
 Fourth World Media Corporation
 ___
 Rev tools and more:  http://www.fourthworld.com/rev

___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-06-30 Thread Mark Wieder
MisterX-

Wednesday, June 30, 2004, 12:01:39 AM, you wrote:

M> You mean my importer didn't work?

No, I was just ranting in general. No worries.

I'll go back to talking among myself now.

-- 
-Mark Wieder
 [EMAIL PROTECTED]

___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


RE: Another Revolution Success Story

2004-06-29 Thread MisterX
on a 7000-record basis that would be even slower...
send me a script change, I'll consider the improvement...

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Troy
> Rollins
> Sent: Wednesday, June 30, 2004 04:41
> To: How to use Revolution
> Subject: Re: Another Revolution Success Story
> 
> 
> 
> On Jun 29, 2004, at 10:31 PM, Mark Wieder wrote:
> 
> > I really wish the csv format had never been invented. Separating
> > fields with tabs works much better, and separating them with
> > non-printing characters is better yet.
> 
> This is why regex exists. Parsing of csv is probably best done with a 
> matchText routine.
> --
> Troy
> RPSystems, Ltd.
> http://www.rpsystems.net
> 
> ___
> use-revolution mailing list
> [EMAIL PROTECTED]
> http://lists.runrev.com/mailman/listinfo/use-revolution
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-06-29 Thread Troy Rollins
On Jun 29, 2004, at 10:31 PM, Mark Wieder wrote:
I really wish the csv format had never been invented. Separating
fields with tabs works much better, and separating them with
non-printing characters is better yet.
This is why regex exists. Parsing of csv is probably best done with a 
matchText routine.
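
A minimal sketch of that idea (not from the thread, and deliberately 
simplified - doubled quotes, embedded returns, and trailing empty fields are 
not handled):

  -- convert one CSV line to tab-delimited by peeling fields off the front
  -- with matchText; a leading quoted field and a plain field get different
  -- regular expressions
  function csvLineToTab pLine
    local tQuotedPat, tPlainPat, tField, tRest, tFields
    put "^" & quote & "([^" & quote & "]*)" & quote & ",?(.*)$" into tQuotedPat
    put "^([^,]*),?(.*)$" into tPlainPat
    repeat until pLine is empty
      if char 1 of pLine is quote then
        get matchText(pLine, tQuotedPat, tField, tRest)
      else
        get matchText(pLine, tPlainPat, tField, tRest)
      end if
      if it is false then exit repeat -- malformed field; give up
      put tField & tab after tFields
      put tRest into pLine
    end repeat
    if tFields is not empty then delete last char of tFields -- drop trailing tab
    return tFields
  end csvLineToTab

So csvLineToTab(quote & "Brown, John Jr." & quote & ",42,NY") gives three 
tab-separated items, with the comma inside the quoted name left alone.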
--
Troy
RPSystems, Ltd.
http://www.rpsystems.net

___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-06-29 Thread Mark Wieder
MisterX-

Tuesday, June 29, 2004, 1:52:18 PM, you wrote:

M> It works for really overly simple csv files...

I really wish the csv format had never been invented. Separating
fields with tabs works much better, and separating them with
non-printing characters is better yet.

I'm getting tired of running into data fields containing things like
"John Brown, Jr.", "123 Fourth St, Room 5" and "The Corporation, Inc".

-- 
-Mark Wieder
 [EMAIL PROTECTED]

___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


RE: Another Revolution Success Story - yet another release!

2004-06-29 Thread Alex Tweedly
At 22:42 29/06/2004 +0200, MisterX wrote:
Hi everyone,
Here you are, a pretty compliant CSV importer for RunRev...
Sorry, it's not quite that simple :-)
I see two problems:
1. You have a small bug: you only replace commas within a quoted field for 
the first occurrence on each line. The section "-- convert csv escaped chars" 
needs to be a loop, not an if condition.

2. Excel's default is to use doubled quotes (duplicated quotes) to represent 
a quote within a quoted field, not an escaped quote - for example, the field 
He said "hi" is written as "He said ""hi""" rather than "He said \"hi\"". (I 
left in the fixing-up of escaped quotes, which is probably incorrect.)

Here's a diff to change that - though since I'm not experienced with 
Transcript, you can probably re-write it better than this.

-- Alex.

62,63c62,65
< if quote is in thisline then
<   put offset(quote, thisline) into a
---
> put 1 into start
> repeat forever
>   put offset(quote, thisline, start) into a
>   if a == 0 then exit repeat
65c67,70
<   if b > a then
---
>   if b == a+1 then
>   put "\quotex" into char a to a+1 of thisline
>   else
> if b == 0 then exit repeat
67a73
>   put b+6 into start  -- to account for substitution just made
-- Alex.

___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-06-29 Thread Pierre Sahores
Oh là là, as we say back home... What a useful piece of work, Xavier. 
I can spend time on testing and debugging, if that can help ;)

On 29 June 04, at 06:11, MisterX wrote:
I wrote one to import an Excel-made csv.
There was no published format that I could find though...
It is a "bit" unfinished (about 5% missing) in the
translation matrix...
I'll polish it today and try to fix the errors, try to
find the source of the data to put in the proper credits.
The database contains a few thousand NT event errors and
codes for reference.
cheers
Xavier

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Alex
Tweedly
Sent: Tuesday, June 29, 2004 03:51
To: [EMAIL PROTECTED]
Subject: Re: Another Revolution Success Story
At 18:02 28/06/2004 -0400, Troy Rollins wrote:

On Jun 28, 2004, at 5:40 PM, Kurt Kaufman wrote:
A physician in our area had a need for an application which
could easily
import patient data from a text file (multiple rows of 
comma-delineated
data).  It was a simple task, using Revolution, to open the
file and read
the data to a field.  Subsequently a "model" card was cloned and the
appropriate data sent to various fields on each card. Over a 
thousand
records were created this way in a few minutes. The only stumbling 
block
involved my forgetting that a standalone cannot modify itself.
No matter
though; I merely created an invisible "starter app" that immediately
opened a data stack in the data folder.
The application works "as advertised".
The doctor was impressed that it was possible to do this in a few 
hours!
String handling apps DO seem to be Rev's forte.
yes, string handling is pretty good - but I'm surprised that Rev has 
no
built-in support for CSV files. They are a pretty common interchange
format, but handling the variations commonly found makes it
non-trivial to
do this properly - quoted fields, delimiter in quoted fields, escaped 
or
doubled quotes within a field, etc.

Has anyone written and contributed a library to avoid others having to
"roll-your-own" ?
(*) - I should probably say "appears to have no built-in support" - 
there
may be something that I just can't find in the docs.

-- Alex.
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution

--
Kind regards, Pierre Sahores
100, rue de Paris
F - 77140 Nemours
[EMAIL PROTECTED]
GSM:   +33 6 03 95 77 70
Pro:  +33 1 41 60 52 68
Dom:+33 1 64 45 05 33
Fax:  +33 1 64 45 05 33
Inspection académique de Seine-Saint-Denis
Applications and ACID SQL databases (Web and ERP)
Think and produce a "productivity delta"
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-06-29 Thread Dave Cragg
At 3:21 pm -0500 29/6/04, Jim wrote:
On Jun 28, 2004, at 10:01 PM, Brian Yennie wrote:
 Perhaps there's a good CSV-to-tab-delimited converter out there =).
replace comma with tab in tData :-)
If only it were that easy. But commas may occur within a field, in 
which case the field is usually quoted.

I remember Richard Gaskin brought up the same problem some 
months/years ago, but I don't recall a solution being published.

Dave
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


RE: Another Revolution Success Story

2004-06-29 Thread MisterX
It works for really overly simple csv files...

But it's too simple alas... Check out my code in the posted stack...

Enjoy the script it takes to understand MS CSVs...
Surely I didn't exhaust all cases...

Xa

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Jim
> Sent: Tuesday, June 29, 2004 22:22
> To: How to use Revolution
> Subject: Re: Another Revolution Success Story
> 
> 
> On Jun 28, 2004, at 10:01 PM, Brian Yennie wrote:
> 
> > Perhaps there's a good CSV-to-tab-delimited converter out there =).
> 
> replace comma with tab in tData :-)
> 
> (Can you replace comma with tab in url "file://path/to/file.csv" ??)
> 
> Jim.
> 
> ___
> use-revolution mailing list
> [EMAIL PROTECTED]
> http://lists.runrev.com/mailman/listinfo/use-revolution
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


RE: Another Revolution Success Story - yet another release!

2004-06-29 Thread MisterX
Hi everyone,

Here you are, a pretty compliant CSV importer for RunRev...

Imports Excel CSV (with returns and quotes in the MS Excel fields
and all kinds of crap), including filtering out linefeeds and the
like that usually slip in the process! Just took a while to clean
that up! I didn't bother with the export function but I can be
bribed...

http://monsieurx.com/modules.php?name=Downloads&d_op=getit&lid=55

Hope you like it! Just 12 KBs...

A couple of updates are expected over this week, just in case you are patient.
But don't hold your breath, it was just a finishing touch to an old stack.

Don't bother with bugs or features, I'm not interested. If you want to
fix them and share them, I'll post them with credits.

This is open source, and freeware... But I'll take any bribes, beer
or wine bottles (cases, boxes, and barrels too ;)

cheers
Xavier
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-06-29 Thread Jim
On Jun 28, 2004, at 10:01 PM, Brian Yennie wrote:
Perhaps there's a good CSV-to-tab-delimited converter out there =).
replace comma with tab in tData :-)
(Can you replace comma with tab in url "file://path/to/file.csv" ??)
Jim.
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


RE: Another Revolution Success Story

2004-06-28 Thread MisterX
I wrote one to import an Excel-made csv.
There was no published format that I could find though...

It is a "bit" unfinished (about 5% missing) in the
translation matrix...

I'll polish it today and try to fix the errors, try to
find the source of the data to put in the proper credits.
The database contains a few thousand NT event errors and
codes for reference.

cheers
Xavier


> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of Alex
> Tweedly
> Sent: Tuesday, June 29, 2004 03:51
> To: [EMAIL PROTECTED]
> Subject: Re: Another Revolution Success Story
>
>
> At 18:02 28/06/2004 -0400, Troy Rollins wrote:
>
>
> >On Jun 28, 2004, at 5:40 PM, Kurt Kaufman wrote:
> >
> >>A physician in our area had a need for an application which
> could easily
> >>import patient data from a text file (multiple rows of comma-delineated
> >>data).  It was a simple task, using Revolution, to open the
> file and read
> >>the data to a field.  Subsequently a "model" card was cloned and the
> >>appropriate data sent to various fields on each card. Over a thousand
> >>records were created this way in a few minutes. The only stumbling block
> >>involved my forgetting that a standalone cannot modify itself.
> No matter
> >>though; I merely created an invisible "starter app" that immediately
> >>opened a data stack in the data folder.
> >>The application works "as advertised".
> >>The doctor was impressed that it was possible to do this in a few hours!
> >
> >String handling apps DO seem to be Rev's forte.
>
> yes, string handling is pretty good - but I'm surprised that Rev has no
> built-in support for CSV files. They are a pretty common interchange
> format, but handling the variations commonly found makes it
> non-trivial to
> do this properly - quoted fields, delimiter in quoted fields, escaped or
> doubled quotes within a field, etc.
>
> Has anyone written and contributed a library to avoid others having to
> "roll-your-own" ?
>
> (*) - I should probably say "appears to have no built-in support" - there
> may be something that I just can't find in the docs.
>
>
> -- Alex.
>

___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-06-28 Thread Brian Yennie
yes, string handling is pretty good - but I'm surprised that Rev has 
no built-in support for CSV files. They are a pretty common 
interchange format, but handling the variations commonly found makes 
it non-trivial to do this properly - quoted fields, delimiter in 
quoted fields, escaped or doubled quotes within a field, etc.

Has anyone written and contributed a library to avoid others having to 
"roll-your-own" ?
FWIW, almost all tools that produce CSV files can also do tab 
delimited, which is much easier to deal with.
Heck, why anyone does anything other than picking a row and column 
delimiter and replacing them with escape codes if necessary in the 
actual data is beyond me.

Perhaps there's a good CSV-to-tab-delimited converter out there =).
Given tab-delimited data it's real easy to do lots of things:
set the itemDelimiter to tab
repeat for each line l in someData
   put item 1 of l into field1
   put item 2 of l into field2
   replace "\r" with return into field2
   replace "\t" with tab in field2
   ...
end repeat
...
put line 42 of someData into record42
...
sort lines of someData by item 3 of each
...
split someData using return and tab
put someData[115] into recordID115
etc...
- Brian
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-06-28 Thread Alex Tweedly
At 18:02 28/06/2004 -0400, Troy Rollins wrote:

On Jun 28, 2004, at 5:40 PM, Kurt Kaufman wrote:
A physician in our area had a need for an application which could easily 
import patient data from a text file (multiple rows of comma-delineated 
data).  It was a simple task, using Revolution, to open the file and read 
the data to a field.  Subsequently a "model" card was cloned and the 
appropriate data sent to various fields on each card. Over a thousand 
records were created this way in a few minutes. The only stumbling block 
involved my forgetting that a standalone cannot modify itself.  No matter 
though; I merely created an invisible "starter app" that immediately 
opened a data stack in the data folder.
The application works "as advertised".
The doctor was impressed that it was possible to do this in a few hours!
String handling apps DO seem to be Rev's forte.
yes, string handling is pretty good - but I'm surprised that Rev has no 
built-in support for CSV files. They are a pretty common interchange 
format, but handling the variations commonly found makes it non-trivial to 
do this properly - quoted fields, delimiter in quoted fields, escaped or 
doubled quotes within a field, etc.

Has anyone written and contributed a library to avoid others having to 
"roll-your-own" ?

(*) - I should probably say "appears to have no built-in support" - there 
may be something that I just can't find in the docs.

-- Alex.

___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-06-28 Thread Brian Yennie
A physician in our area had a need for an application which could 
easily import patient data from a text file (multiple rows of 
comma-delineated data).  It was a simple task, using Revolution, to 
open the file and read the data to a field.  Subsequently a "model" 
card was cloned and the appropriate data sent to various fields on 
each card. Over a thousand records were created this way in a few 
minutes. The only stumbling block involved my forgetting that a 
standalone cannot modify itself.  No matter though; I merely created 
an invisible "starter app" that immediately opened a data stack in 
the data folder.
The application works "as advertised".
The doctor was impressed that it was possible to do this in a few 
hours!
String handling apps DO seem to be Rev's forte.
Speaking which, thought I'd share this one as another positive piece: 
for my current project I needed to do a large data conversion over the 
weekend: SQL database to a new, upgraded, totally incompatible 
database. About 50,000 records over 20 tables, some 50+ foreign key 
constraints translated to about 50 new tables. Using off-the-top-of-my-head 
scripts in a Rev stack riddled with fields full of conversion 
tables, it was real dirty, but it worked - and the entire system was 
converted and proofed in a weekend by 2 people. And the rules were 
icky, along the lines of:

"Maintain the order of records in the old system which alphabetized 
things in the software, but makes no record of it in the database"

"Delete records that contain the word XXX, and oh, there are probably a 
few typos"

"Change the 3-level structure to N-levels, where anything with the 
words XXX or YYY means that it belongs under the previous record 
containing ZZZ... and by previous we mean the aforementioned 'order'".

Etc- new fields, new relationships, new constants, re-created keys, 
yeck, yeck, yeck...

 I NEVER would have been able to do it in any other language that I 
know of - kudos!

- Brian
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-06-28 Thread Marian Petrides
I was hoping the same thing, but wasn't bold enough to ask--until 
someone else did.  Please do consider sharing them with us.

M
On Jun 28, 2004, at 8:59 PM, [EMAIL PROTECTED] wrote:
Can you share the scripts you used  to do this project?
jack
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-06-28 Thread Revinfo1155
Can you share the scripts you used  to do this project?
 
jack
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Re: Another Revolution Success Story

2004-06-28 Thread Troy Rollins
On Jun 28, 2004, at 5:40 PM, Kurt Kaufman wrote:
A physician in our area had a need for an application which could 
easily import patient data from a text file (multiple rows of 
comma-delineated data).  It was a simple task, using Revolution, to 
open the file and read the data to a field.  Subsequently a "model" 
card was cloned and the appropriate data sent to various fields on 
each card. Over a thousand records were created this way in a few 
minutes. The only stumbling block involved my forgetting that a 
standalone cannot modify itself.  No matter though; I merely created 
an invisible "starter app" that immediately opened a data stack in the 
data folder.
The application works "as advertised".
The doctor was impressed that it was possible to do this in a few 
hours!
String handling apps DO seem to be Rev's forte.
--
Troy
RPSystems, Ltd.
http://www.rpsystems.net
___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution


Another Revolution Success Story

2004-06-28 Thread Kurt Kaufman
A physician in our area had a need for an application which could easily 
import patient data from a text file (multiple rows of comma-delineated 
data).  It was a simple task, using Revolution, to open the file and read 
the data to a field.  Subsequently a "model" card was cloned and the 
appropriate data sent to various fields on each card. Over a thousand 
records were created this way in a few minutes. The only stumbling block 
involved my forgetting that a standalone cannot modify itself.  No matter 
though; I merely created an invisible "starter app" that immediately opened 
a data stack in the data folder.
The application works "as advertised".
The doctor was impressed that it was possible to do this in a few hours!
-KK


___
use-revolution mailing list
[EMAIL PROTECTED]
http://lists.runrev.com/mailman/listinfo/use-revolution