Re: Web Apps

2010-02-22 Thread Elizabeth Mattijsen
Unless that has changed, downloading the SDK is free.  If you want to run 
your apps on an iP*, you need to pay the $99 yearly fee to get the right 
certificates.  As long as you're happy running in the simulator, you should be 
fine, AFAIK.


Liz
==
On Feb 22, 2010, at 11:16 PM, Celeste Suliin Burris wrote:
> Thanks for the info - I was interested in iPad apps, but put off by the $99 
> just to download and look over the SDK.
> 
> A web app sounds better - you wouldn't have to write a different one for 
> every smartphone.
> 
> 
> -Original Message-
>> From: Bill Stephenson 
>> Sent: Feb 21, 2010 4:19 PM
>> To: Perl MacOSX 
>> Subject: Web Apps
>> 
>> [snip]



Re: Web Apps

2010-02-22 Thread Celeste Suliin Burris
Thanks for the info - I was interested in iPad apps, but put off by the $99 
just to download and look over the SDK.

A web app sounds better - you wouldn't have to write a different one for every 
smartphone.


-Original Message-
>From: Bill Stephenson 
>Sent: Feb 21, 2010 4:19 PM
>To: Perl MacOSX 
>Subject: Web Apps
>
>[snip]





Web Apps

2010-02-22 Thread Bill Stephenson

I started playing with iPhone/iTouch/iPad "web apps" just last week.

http://developer.apple.com/safari/library/navigation/ 
index.html#section=Resource%20Types&topic=Coding%20How-Tos


Apple has made it incredibly easy to create a web app that runs exactly  
like a native app on these devices.


Of course, perl is a perfect server side language to power these apps,  
and BBEdit and Perl on a Mac make the perfect IDE to create these web  
apps.


While poking around there I also found out that Safari on the Mac OS  
also provides some big enhancements for web based apps now too. Check  
this out:


"Safari on iPhone, Mac OS X, and Windows all implement the Offline Web  
Applications feature of HTML5. This feature allows you to cache all of  
the resource files for your web application on the client, improving  
the load time of your application and making it possible to create an  
application which is fully functional even when there is no network  
connection."


(source:  
http://developer.apple.com/safari/library/codinghowtos/Desktop/ 
DataManagement/index.html)
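
To make the offline feature concrete: an HTML5 cache manifest is just a text 
file listing the resources to cache, served with the `text/cache-manifest` 
MIME type. The file names below are made up for illustration:

```
CACHE MANIFEST
# offline.manifest -- every resource the app needs when there is no network
index.html
style.css
app.js
images/logo.png
```

The page then declares it with `<html manifest="offline.manifest">`, and 
Safari caches everything listed the first time the app loads.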


This is actually fulfilling a vision I expressed right here waaay back  
in 2005:


http://www.mail-archive.com/macosx@perl.org/msg08946.html

Geez, it's like they've been working all this time for me entirely for  
free ;)


Seriously, according to the news this week it now looks like almost all  
"Smart Phone" makers will adopt a similar, if not the same, approach to  
web based apps that run on these devices.


Think about it, Apple knows that laptops and desktops need to be able  
to run these same applications because it provides a fast and  
inexpensive way for developers to integrate the use of these  
applications with these different devices. Users want that, and they  
want them to "Feel" like a native application too. Apple is essentially  
giving them that.


So, looking forward it's easy to imagine that many "Native" apps will  
really be "Web Apps". The client side will contain the necessary tools  
to run them. Updates and upgrades happen "at the atomic level" on the  
server side and are instant and seamless and distributed as soon as the  
software is accessed. (that's something I learned right here ;)


The advantages to developers both small and large are huge. I now  
believe this is exactly where Apple is heading and as you can imagine,  
I'm absolutely thrilled about it :)


--

Bill Stephenson



Re: [OT] MySQL for Web Apps

2004-02-06 Thread Matthew Montano
XML vs. SQL hmm.

It's worth recalling *one of* the rationales behind XML: When bytes 
were expensive, machine to machine communication especially across 
company boundaries (read EDI) couldn't afford to be self-documenting. 
Huge binders of ANSI EDI specifications were required to correctly 
parse trickles of ASCII characters coming across x.400 VANs.

EDI consultants made their living on the fact that no two specification 
binders were written the same.

XML, and our world of cheap bytes puts those Spec. Binders in the 
actual document itself and EDI consultants are less valuable.

Now in the 'good-ol-days', we didn't store random access data in EDI 
files. Not sure of the benefit of doing the same today.

Just showing my age and opinions.

Matthew

http://www.redmac.ca - Getting Canadians their Macintosh accessories
http://www.justaddanoccasion.com - Great gift ideas, featuring smoked 
salmon

On Feb 4, 2004, at 9:24 AM, Chris Devers wrote:

[snip]


Re: [OT] XML in SQL (was: MySQL for Web Apps)

2004-02-05 Thread Rick Measham
On 6 Feb 2004, at 02:37 pm, Chris Devers wrote:
Anyway, it was pointed out to me in a different offlist response that I
was probably answering the wrong question. Oh well -- it still seems 
like
a useful (and under-publicized?) capability of the standard MySQL 
client,
so maybe bringing it up will still be of use to someone...
Very true .. I'm a PostgreSQLer rather than a MySQLer, and looking 
through the mailing lists there it looks like there's been talk of 
accessing the XML as data rather than as text ... hmm ... Feb 2003 ... I 
might poke some people!

Rick Measham
Senior Designer and Developer
Printaform Pty Ltd
Tel: (03) 9850 3255
Fax: (03) 9850 3277
http://www.printaform.com.au
http://www.printsupply.com.au
vcard: http://www.printaform.com.au/staff/rickm.vcf


Re: [OT] XML in SQL (was: MySQL for Web Apps)

2004-02-05 Thread Chris Devers
On Fri, 6 Feb 2004, Rick Measham wrote:

> On 6 Feb 2004, at 01:47 pm, Rick Measham wrote:
> > Thanks Christ,
> 
> erm .. sorry .. chris ..

Heh... :)

Anyway, it was pointed out to me in a different offlist response that I
was probably answering the wrong question. Oh well -- it still seems like
a useful (and under-publicized?) capability of the standard MySQL client,
so maybe bringing it up will still be of use to someone...



-- 
Chris Devers



Re: [OT] XML in SQL (was: MySQL for Web Apps)

2004-02-05 Thread Rick Measham
On 6 Feb 2004, at 01:47 pm, Rick Measham wrote:
Thanks Christ,
erm .. sorry .. chris ..

Rick Measham
Senior Designer and Developer
Printaform Pty Ltd
Tel: (03) 9850 3255
Fax: (03) 9850 3277
http://www.printaform.com.au
http://www.printsupply.com.au
vcard: http://www.printaform.com.au/staff/rickm.vcf


Re: [OT] XML in SQL (was: MySQL for Web Apps)

2004-02-05 Thread Rick Measham
On Wed, 4 Feb 2004, Rick Measham wrote:

I'd love to see an XML parser embedded into SQL so that I can have:
CREATE TABLE aTable (id serial, data XML);

On 5 Feb 2004, at 05:21 pm, Chris Devers replied:
Does this help?
snip

Is this along the lines of what you were hoping for?
Thanks Christ, but not really at all. What I want is the ability to use 
XML as a data type so that I can have a field full of XML that is 
searchable. The output would be the'zactly the same as current output. 
The input would be XML as a string:

> INSERT INTO data (id, user, data) VALUES (1, 'rick', '<user><name>Rick</name><surname>Measham</surname></user>');
> INSERT INTO data (id, user, data) VALUES (2, 'rick02', '<user><name>Rick</name><surname>Smith</surname></user>');

# And then I could retrieve it:

> SELECT * FROM data WHERE data:user:name='Rick'

 id |  user  |  data
----+--------+----------------------------------------------------------
  1 | rick   | <user><name>Rick</name><surname>Measham</surname></user>
  2 | rick02 | <user><name>Rick</name><surname>Smith</surname></user>
(2 rows returned)
Cheers!
Rick Measham
Senior Designer and Developer
Printaform Pty Ltd
Tel: (03) 9850 3255
Fax: (03) 9850 3277
http://www.printaform.com.au
http://www.printsupply.com.au
vcard: http://www.printaform.com.au/staff/rickm.vcf


Re: [OT] MySQL for Web Apps

2004-02-04 Thread Chris Devers
On Wed, 4 Feb 2004, Rick Measham wrote:

> I'd love to see an XML parser embedded into SQL so that I can have: 
> CREATE TABLE aTable (id serial, data XML); 

Does this help?

% mysqldump --help | grep -i ml
  -X, --xml   Dump a database as well formed XML.

% mysql --help | grep -i ml
  -H, --html  Produce HTML output.
  -X, --xml   Produce XML output
html  FALSE
xml   FALSE

% mysql --version
mysql  Ver 12.22 Distrib 4.0.17, for apple-darwin7.2.0 (powerpc)

This is the current unstable version in Fink; I'm not sure that the
feature to output XML is present in the stable version, but I'd assume so. 

You're welcome to read the docs yourself to see how to use this, but
here's a sample:

% mysql -u cdevers -p -X -D movabletype \
    -e "select author_name from mt_author"
Enter password: 

<?xml version="1.0"?>
<resultset statement="select author_name from mt_author">
  <row>
    <author_name>cdevers</author_name>
  </row>
  <row>
    <author_name>hanna</author_name>
  </row>
  <row>
    <author_name>kelly</author_name>
  </row>
  <row>
    <author_name>may</author_name>
  </row>
</resultset>

Is this along the lines of what you were hoping for?




-- 
Chris Devers



Re: [OT] MySQL for Web Apps

2004-02-04 Thread Chris Devers
I can give a longer reply later, but it's my birthday and I'm about to go
out for a late breakfast & a movie :)


On Wed, 4 Feb 2004, Bill Stephenson wrote:

> Chris, you've almost convinced me, but I have to ask, is it really so 
> inefficient to search through one directory with 5000 sub-directories 
> to find one that matches the (user) name you're looking for? Isn't that 
> what Perl is supposed to be good at?

I'll turn the question around and let you try answering it yourself: 

Which do you think will be faster, traversing a directory tree, scanning
the contents of N-thousand trees & files looking for something, or
grepping the contents of one file looking for what you want? I.e., which
is likely to be faster -- this --

$ find /var/lib/xmldb/app1/ -type f | xargs grep 'foo'

-- or this --

$ grep 'foo' /var/lib/csvdb/app1/record.csv

?

My hunch is that the second approach will be *way* faster almost always. 

Now of course that's a biased example, and you could do a lot to speed up
the first approach by pruning the tree that you're digging in. But, I
still think the basic point holds: if you keep the data in one file in a
well organized way, that's always likely to be faster.

If, as a lot of people say, MySQL is basically just a SQL interface to
your filesystem, then MySQL is closer than you might think to the basic
"grep /pattern/ file" approach. My hunch is that most XML solutions, even
very good ones, are structurally going to be more similar to the bigger
"find /path | xargs grep" approach. 

I'm willing to be proven wrong here, but I am reasonably sure about this.
 
> If you had 100,000 directories you could alphabetize and place them in 
> sub-directories that would hold, on average, less than the 5000 
> mentioned above.

It's called a B-Tree algorithm, and a lot of databases will already have
invisible mechanisms in place for you to do this out of the box. 

Would you rather be re-implementing fundamental algorithms in your app, or
can you trust some database vendor to have already done the work for you,
so that you can just say "index fields A, B, and C", and you can get on
with the application-specific bits of your work? 

Put yet another, snarkier way, if you'd rather be doing the basic search &
retrieval stuff by hand, why aren't you using C/C++ instead of Perl? :)

Really though, I do think a framework like this should be easiest:

  * Data managed by some kind of database, even a toy one like MySQL or a
pseudo one like SQLite.

  * A thin layer of Perl to insert & retrieve your data

  * A template engine like Template Toolkit or HTML::Template to, if you
choose, wrap your data in XML syntax for exchange with others, and/or
to present your data through a web interface if that's what you need.

Honestly, the template engine could be the hardest part here, and it
really isn't that hard (that's why they exist -- to hide the hard bits for
you :). Keeping data in a database & retrieving it with something like
Perl/DBI, or the database access libraries for Python, PHP, Java, etc, is
all a Solved Problem. All that's left to do is read the docs :)
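
Since the thread keeps coming back to the template layer, here is a rough 
sketch of the "wrap your data in XML" idea with HTML::Template; the template 
text, variable names, and data are invented for illustration, not taken from 
anyone's actual app:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::Template;

# The XML skeleton lives in the template; the code only supplies values.
my $xml = <<'XML';
<person id="<TMPL_VAR NAME=id>">
  <name><TMPL_VAR NAME=name></name>
</person>
XML

my $tmpl = HTML::Template->new(scalarref => \$xml);
$tmpl->param(id => 29, name => 'Nat');
print $tmpl->output;
```

Template Toolkit would look much the same; the point is that the XML syntax 
stays in the template instead of being scattered through the retrieval code.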

Here:

A Short Guide to DBI:


Cooking with Perl, Part 2 (talks about SQLite)


Database Programming with Perl:


DBI perldoc


Actual paper book, _Programming the Perl DBI_



Go read :)




and on that note, time to go...




-- 
Chris Devers



Re: [OT] MySQL for Web Apps

2004-02-04 Thread Pete Prodoehl
Ian Ragsdale wrote:
On Feb 4, 2004, at 1:59 AM, Bill Stephenson wrote:

The above are some of the excuses I've come up with to avoid spending 
more time learning stuff. If I'm deluded, it's because I have boxes 
upon boxes of software that doesn't work anymore and time invested in 
each of them. It's not that I don't believe that MySQL and other 
database engines have a place, I'm just trying to avoid learning how 
to use them if I don't really need to.


Personally I think it's worth it in the case of MySQL (or other 
relational databases).  The basics are pretty easily learned in an 
afternoon or two, and as your application and needs change, you'll 
definitely save yourself days worth of work by being able to leverage a 
good DB when your solution really calls for one.


Ditto to that! I had put off learning the needed stuff to tie Perl to 
SQL databases years ago, but once I did learn it (and it was a pretty 
quick lesson) it really paid off. As for outdated software, the basics 
of SQL are what, 25 years old? It's worth learning...

Pete




Re: [OT] MySQL for Web Apps

2004-02-04 Thread Conrad Schilbe
I've worked with both solutions and would like to say first off that it will
take you longer to implement a solid XML solution versus the MySQL solution.

The point made by others is indexing and retrieving records based on
indexes. You would rather say "get Bob's record" than "flip through all of
the records until you find Bob's".

So for the XML approach you would need to build binary indexes for each
field that is to be a lookup. Perl's DB_File could do this for you. The
index would reference the value of the field to the id of the file. DB_File
is beyond the scope of this list but there are lots of resources for it.
Other approaches to the indexing could be used.
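
A minimal sketch of that DB_File index, with invented file names and record
ids; real code would also need the locking discussed below:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl;
use DB_File;

# B-tree index mapping a lookup field (a user name) to the id of the
# XML file holding the record, e.g. records/0042.xml.
my %by_name;
tie %by_name, 'DB_File', 'name.idx', O_CREAT | O_RDWR, 0644, $DB_BTREE
    or die "Cannot open name.idx: $!";

$by_name{bob} = '0042';        # maintained whenever a record is written
my $id = $by_name{bob};        # direct lookup, no directory scanning
print "bob -> records/$id.xml\n";

untie %by_name;
```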

You would also need to implement a file-locking mechanism so that the same
file increment is not written by two processes, as well as locking for the
indexes, which is handled mostly by DB_File.

This is going to be where your time is spent...

Implement indexing for your xml data or learn MySQL?

As far as performance, I actually think that the 2 are very comparable.

As for stability and data integrity, the two are also comparable when you
think about the damage a corrupted database can do. There are ways to recover
corrupted databases, though, whereas with XML you would need to be able to
rebuild your indexes and have a backup of the XML files. Backup is important
either way, really.

I like the flexibility XML offers, although you may run into issues with
character escaping: MySQL offers ways to let it safely quote your data for
insertion, while you will have to make sure yourself that the data saved in
the XML files does not compromise their integrity.


C



On 2/3/04 9:16 PM, "Bill Stephenson" <[EMAIL PROTECTED]> wrote:

> If you're busy please forgive me and ignore this, if you have time to
> offer an opinion I'd really like to hear from this list on this
> subject;
> 
> If I am building a web app from the ground up, what's the best way to
> deal with storing/retrieving data? For arguments sake let's say the app
> will have 2500 users to begin with that each hit the server an average
> of 50 times a week. Each request delivers 40k of data. Users can search
> through their saved records where each record contains 5-50k of data.
> Users can have up to 2000 records. In 5 years the app will have, maybe,
> 25,000 users. In 10 years, say, 100,000 users. If it ever has more
> users than that, I'll write a help wanted message.
> 
> I'd like to store using XML in a separate text file for each record
> created because it's easy and gives me flexibility. I can add data
> fields without tweaking tables in a MySQL database. I can add users
> easily and keep their data in a separate directory that is easy to
> locate. I'm told that storing/retrieving data in text files is slow and
> so is parsing that data. I've never used XML::Parser but I thought I'd
> give it a spin.
> 
> I hear MySQL is speedy, but it seems to me that it adds complexity to
> such a degree that it may not be an even trade off. I could store data
> in  an XML format in a single field in a MySQL database, but I'd still
> have to parse it.
> 
> As computers keep getting faster, and memory and storage cheaper, isn't
> it beneficial to program in the most simple, human readable, least
> learning required, method?
> 
> In short, I'm lazy. I'd rather code this all in perl. Do I really need
> to learn about and use MySQL or will computers get fast enough that it
> won't matter anyway.
> 
> Kindest Regards,
> 
> Bill Stephenson
> 



Re: [OT] MySQL for Web Apps

2004-02-04 Thread Ian Ragsdale
On Feb 4, 2004, at 1:59 AM, Bill Stephenson wrote:

It occurs to me that the unix os is basically a database in and of 
itself and perl interacts directly with the os, therefore, using it to 
store and retrieve data may not be that inefficient.
I agree with this - you can get good results with a well-planned 
directory structure.

Now, if you have one server dedicated to serving only 2500 users and 
in 2-3 years you have 5000 users and upgrade that server to one twice 
as fast and big, and so on
This is true to a point, but disk drives haven't progressed at nearly 
the rate of CPU/RAM, so you could definitely start running into 
problems like this.

The main disadvantage of using a database engine like MySQL is that 
users cannot define data fields. If other applications are going to 
access the data in question than you must reformat it to provide the 
access. And again, I'm lazy (actually, I have other things I like to 
do) and really don't want to learn more. I'd rather use what I already 
know and leverage what I already have.
Since I don't know exactly what you're building here, it's hard to 
comment, but I agree, that is one area that hasn't been solved very 
well with relational databases, at least not in MySQL.  If some of your 
users want columns different than others, you either need to split the 
tables somehow, or have all the columns (and maybe some extras) you 
think you may ever need available and only expose the ones particular 
users ask for.  If anybody knows a cleaner way to do this, I'd love to 
hear it.

If Rick's "Dream" comes true I can just port the data at that time. 
There are a lot of programmers out there working on faster, easier to 
use, database engines that have more features. Chris, you may be 
right, XML may be a fad, but the next big thing in data 
storage/retrieval could be right around the corner too.

The above are some of the excuses I've come up with to avoid spending 
more time learning stuff. If I'm deluded, it's because I have boxes 
upon boxes of software that doesn't work anymore and time invested in 
each of them. It's not that I don't believe that MySQL and other 
database engines have a place, I'm just trying to avoid learning how 
to use them if I don't really need to.
Personally I think it's worth it in the case of MySQL (or other 
relational databases).  The basics are pretty easily learned in an 
afternoon or two, and as your application and needs change, you'll 
definitely save yourself days worth of work by being able to leverage a 
good DB when your solution really calls for one.

Ian



Re: [OT] MySQL for Web Apps

2004-02-04 Thread Bill Stephenson
You've all made great points and I'm sure that I'll follow the advice 
given but I'll ask you to indulge me just a bit more.

Chris, you've almost convinced me, but I have to ask, is it really so 
inefficient to search through one directory with 5000 sub-directories 
to find one that matches the (user) name you're looking for? Isn't that 
what Perl is supposed to be good at?

If you had 100,000 directories you could alphabetize and place them in 
sub-directories that would hold, on average, less than the 5000 
mentioned above.

If each directory held 2000 files and you know the name of the one you're 
looking for, it should take half as long to find the file as the 
directory it resides in. If only one user can access the files in their 
directory then locking files and race conditions aren't even an issue. 
Even if you searched for a string to match in each of these 2000 files 
would it take unbearably long?
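
The alphabetized sub-directory scheme is easy to sketch in Perl; the
two-level layout and the root directory name here are hypothetical:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Bucket each user under first letter, then first two letters:
# "celeste" -> users/c/ce/celeste -- keeps any one directory small.
sub user_dir {
    my ($root, $user) = @_;
    my $name = lc $user;
    my $one  = substr($name, 0, 1);
    my $two  = length($name) > 1 ? substr($name, 0, 2) : $one;
    return "$root/$one/$two/$name";
}

print user_dir('users', 'Celeste'), "\n";   # users/c/ce/celeste
```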

It occurs to me that the unix os is basically a database in and of 
itself and perl interacts directly with the os, therefore, using it to 
store and retrieve data may not be that inefficient.

Now, if you have one server dedicated to serving only 2500 users and in 
2-3 years you have 5000 users and upgrade that server to one twice as 
fast and big, and so on

The main disadvantage of using a database engine like MySQL is that 
users cannot define data fields. If other applications are going to 
access the data in question then you must reformat it to provide the 
access. And again, I'm lazy (actually, I have other things I like to 
do) and really don't want to learn more. I'd rather use what I already 
know and leverage what I already have.

If Rick's "Dream" comes true I can just port the data at that time. 
There are a lot of programmers out there working on faster, easier to 
use, database engines that have more features. Chris, you may be right, 
XML may be a fad, but the next big thing in data storage/retrieval 
could be right around the corner too.

The above are some of the excuses I've come up with to avoid spending 
more time learning stuff. If I'm deluded, it's because I have boxes 
upon boxes of software that doesn't work anymore and time invested in 
each of them. It's not that I don't believe that MySQL and other 
database engines have a place, I'm just trying to avoid learning how to 
use them if I don't really need to.

If no one here still believes that computers will soon be fast enough 
to write slugware like I mention above, then I'll start cracking the 
books.

Kindest Regards,

Bill Stephenson



Re: [OT] MySQL for Web Apps

2004-02-03 Thread Chris Devers
On Tue, 3 Feb 2004, Bill Stephenson wrote:

> If I am building a web app from the ground up, what's the best way to 
> deal with storing/retrieving data?

It's not by accident that databases have come to be popular for this kind
of work. Pick one -- MySQL, PostgreSQL, SQLite, or something "real" -- and
let it do as much work for you as it can. In the end, you'll be happy. 

IMO, XML is good for certain kinds of data portability. Exchanging
information between different systems -- via XML-RPC, for instance --
would be one example, but things like config files can also make sense, in
that you can write code in different languages to talk to the XML files. 

On the other hand, XML isn't a panacea, and it doesn't make all problems
go away. Flexibility, for example, isn't promised; databases tend to run
circles around XML if you want flexibility.

I think that keeping data in XML makes less sense than keeping it in a
simple database like MySQL -- really, it's not very hard -- and writing
code to wrap the results in XML if some other application needs to share
your data. A good template library (Perl has gobs of 'em) can make doing
that easy -- I hear there's a new O'Reilly book _Perl Template Toolkit_
that even has an XML chapter   :)


> I hear MySQL is speedy, but it seems to me that it adds complexity to 
> such a degree that it may not be an even trade off. I could store data 
> in  an XML format in a single field in a MySQL database, but I'd still 
> have to parse it.

Nah, you may be overthinking this:

use DBI;
my $dbh = DBI->connect("dbi:SQLite:dbname=~/salaries.sqlt", "", "",
    { RaiseError => 1, AutoCommit => 0 });
eval {
    $dbh->do("INSERT INTO people VALUES (29, 'Nat', 1973)");
    $dbh->do("INSERT INTO people VALUES (30, 'William', 1999)");
    $dbh->do("INSERT INTO father_of VALUES (29, 30)");
    $dbh->commit();
};
if ($@) {
    eval { $dbh->rollback() };
    die "Couldn't roll back transaction" if $@;
}

There's your data in a SQLite non-database. Easy so far. 

Here's a subroutine that takes a database handle and a person ID, looks the
record up, and formats the result as XML:

sub get_token_from_db {
    # Arguments: database handle, person ID number
    my ($dbh, $id) = @_;

    my $sth = $dbh->prepare('SELECT age FROM people WHERE id = ?')
        or die "Couldn't prepare statement: " . $dbh->errstr;

    $sth->execute($id)
        or die "Couldn't execute statement: " . $sth->errstr;

    my ($age) = $sth->fetchrow_array();
    return qq[<age>$age</age>];
}

So, if you're looking for the age of person $id, this will return, say:

<age>28</age>

Edit as needed. Or do it the right way, and abstract the XML you need out
into a template engine of some kind. Either way, the database isn't really
the hard part here once you get started.

> As computers keep getting faster, and memory and storage cheaper, isn't 
> it beneficial to program in the most simple, human readable, least 
> learning required, method?

*meh*

There's something to be said for keeping things in a format that's easy to
access, but computers aren't yet so advanced that software efficiency
doesn't matter any more. A format that's efficient to store and manage,
with a clean access layer, probably makes at least as much sense as a plain
ASCII, Unicode, XML, etc. format that doesn't come with the management tools. 
 
> In short, I'm lazy. I'd rather code this all in perl. Do I really need 
> to learn about and use MySQL or will computers get fast enough that it 
> won't matter anyway.

No, you don't need to, but honestly, IMO the plan you're considering is
workable, just unlikely to be as easy in the long run as simply going with
some kind of database approach now. 

Databases have become pretty well entrenched in the past 30 years or so;
XML has been a big buzzword for about 5. 

I don't think databases are going to go away any time soon;
I'm not yet convinced that XML isn't a flash in the pan.



-- 
Chris Devers




Re: [OT] MySQL for Web Apps

2004-02-03 Thread Rick Measham
On 4 Feb 2004, at 03:39 pm, kynan wrote:
> The idea of having XML in the DB is sound though, if you do it
> thoughtfully.

So long as you're not planning on searching on it or indexing it or ...

I once used XML to store information about a webpage as a PostGreSQL 
field ... but later down the track I wanted to search on some of that 
data and had to retrieve all records that contained 'bob' and then 
parse the XML and check that 'bob' was in the byline rather than just 
having his name in the content.



I'd love to see an XML parser embedded into SQL so that I can have:

CREATE TABLE aTable (id serial, data XML);

Then I can:

SELECT id FROM aTable WHERE data:story:byline = 'Bob';

Which would return the id of any record whose data field looked 
something like:

<story><byline>Bob</byline><content>Content</content></story>
<story><byline>Bob</byline><content>Content of another story</content></story>

But wouldn't return the id where the data looked like:

<story><byline>Nora</byline><content>Bob is a dude</content></story>
<story><byline>Bill</byline><content>Content of another story</content></story>

even though 'Bob' is in the text.

Basically the SQL engine would recognise that 'data' is an XML field 
and could search it according to requirements.



Rick Measham
Senior Designer and Developer
Printaform Pty Ltd
Tel: (03) 9850 3255
Fax: (03) 9850 3277
http://www.printaform.com.au
http://www.printsupply.com.au
vcard: http://www.printaform.com.au/staff/rickm.vcf


Re: [OT] MySQL for Web Apps

2004-02-03 Thread Rick Measham
On 4 Feb 2004, at 03:16 pm, Bill Stephenson wrote:
> As computers keep getting faster, and memory and storage cheaper,
> isn't it beneficial to program in the most simple, human readable,
> least learning required, method?

Never. You're never going to read all 2000 x 40kb records for each of
your 2500 users yourself, so it's better to store them in a way the
computer can access efficiently.


> In short, I'm lazy. I'd rather code this all in perl. Do I really need
> to learn about and use MySQL or will computers get fast enough that it
> won't matter anyway.

Once again, no. Especially given where you're starting. With 3 users you
might be OK. Have a think about this:
You have a file:

<person><id>1</id><name>Bob</name></person>
<person><id>2</id><name>Nancy</name></person>

The first thing you're going to do with XML::Parser is turn the XML  
into a perl data structure:
@data = (
    { id   => 1,
      name => 'Bob',
    },
    { id   => 2,
      name => 'Nancy',
    },
);
Then you'll loop through each element of the list looking for name eq
'Bob', and you'll do all of that in Perl. Multiply the files by 2500 users,
then multiply by 40k of information. Perl wouldn't even be able to start
to store it all.
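A sketch of that linear scan, in core Perl with the two-person data from Rick's example (fine at this size; the point is that it is the whole story for flat files, repeated for every search):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# What XML::Parser's output boils down to: a list of hashrefs
# that must be scanned record by record -- the work a database
# index would otherwise do for you.
my @data = (
    { id => 1, name => 'Bob'   },
    { id => 2, name => 'Nancy' },
);

my @hits = grep { $_->{name} eq 'Bob' } @data;
printf "found %d record(s), first id %d\n", scalar @hits, $hits[0]{id};
# prints: found 1 record(s), first id 1
```

With 2500 users' worth of files, that grep means parsing and walking every record on every request, all in the Perl process's memory.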

On the other hand if you store the data directly in a database which  
has been optimised for quick searching and already has all the methods  
you'll ever require to store and retrieve data, you'll be running it as  
a compiled program. It will run a LOT faster and it will do its job a  
LOT better.

It will also handle data-locking so you and I can't both be writing to  
a file at the same time.
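With flat files, that locking is on you. A core-Perl sketch of the by-hand version (the `append_locked` helper is hypothetical, shown only to illustrate what flock involves):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Appending to a shared flat file safely means taking an exclusive
# lock first, so two CGI processes can't interleave their writes --
# bookkeeping a database server does for you centrally.
sub append_locked {
    my ($path, $line) = @_;
    open my $fh, '>>', $path or die "open $path: $!";
    flock $fh, LOCK_EX       or die "flock $path: $!";
    print {$fh} $line, "\n";
    close $fh                or die "close $path: $!";   # releases the lock
}
```

And every writer of the file has to remember to do this; one script that forgets the flock corrupts the data for all of them.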

In short there's no question about which is the better option.  
Databases are quicker and safer.

Cheers!
Rick




Re: [OT] MySQL for Web Apps

2004-02-03 Thread kynan
Well, I'd vote for MySQL. That amount of hits seems way too heavy to 
leave it all to the server.

I guess also it depends a bit on your data and the nature of your 
queries too. With MySQL you get the advantages of a relational 
database, so you can put your data sources together on the fly by 
joining tables as you need them. That makes your data manipulation much 
simpler. Also SQL is a very efficient language. Much better to process 
your query in the database than churn through heaps of result data in 
perl. And you don't have to rewrite the whole XML file every time you 
want to change some piece of data in the middle of it. With 100,000 
users you're going to have some big XML files or a real lot of small 
ones - sounds complicated and flaky to me.

The idea of having XML in the DB is sound though, if you do it 
thoughtfully.

Anyway, that's my opinion but I use MySQL a lot and I'm not so 
brilliant with perl.

K

On 04/02/2004, at 3:16 PM, Bill Stephenson wrote:

If you're busy please forgive me and ignore this, if you have time to 
offer an opinion I'd really like to hear from this list on this 
subject;

If I am building a web app from the ground up, what's the best way to 
deal with storing/retrieving data? For arguments sake let's say the 
app will have 2500 users to begin with that each hit the server an 
average of 50 times a week. Each request delivers 40k of data. Users 
can search through their saved records where each record contains 
5-50k of data. Users can have up to 2000 records. In 5 years the app 
will have, maybe, 25,000 users. In 10 years, say, 100,000 users. If it 
ever has more users than that, I'll write a help wanted message.

I'd like to store using XML in a separate text file for each record 
created because it's easy and gives me flexibility. I can add data 
fields without tweaking tables in a MySQL database. I can add users 
easily and keep their data in a separate directory that is easy to 
locate. I'm told that storing/retrieving data in text files is slow 
and so is parsing that data. I've never used XML::Parser but I thought 
I'd give it a spin.

I hear MySQL is speedy, but it seems to me that it adds complexity to 
such a degree that it may not be an even trade off. I could store data 
in  an XML format in a single field in a MySQL database, but I'd still 
have to parse it.

As computers keep getting faster, and memory and storage cheaper, 
isn't it beneficial to program in the most simple, human readable, 
least learning required, method?

In short, I'm lazy. I'd rather code this all in perl. Do I really need 
to learn about and use MySQL or will computers get fast enough that it 
won't matter anyway.

Kindest Regards,

Bill Stephenson


+

Kynan Hughes

phone 9281 2088
fax 9211 4433
mobile 0411 231099
Additive design pty ltd
Amitabha pty ltd
http://www.additive.net.au
Level 4, 104 Commonwealth St
Surry Hills NSW 2010
Australia
+



Re: [OT] MySQL for Web Apps

2004-02-03 Thread Ian Ragsdale
On Feb 3, 2004, at 10:16 PM, Bill Stephenson wrote:
I'd like to store using XML in a separate text file for each record 
created because it's easy and gives me flexibility. I can add data 
fields without tweaking tables in a MySQL database. I can add users 
easily and keep their data in a separate directory that is easy to 
locate. I'm told that storing/retrieving data in text files is slow 
and so is parsing that data. I've never used XML::Parser but I thought 
I'd give it a spin.

I hear MySQL is speedy, but it seems to me that it adds complexity to 
such a degree that it may not be an even trade off. I could store data 
in  an XML format in a single field in a MySQL database, but I'd still 
have to parse it.
In my experience you'll be just fine using XML with that amount of 
data, but I would try to come up with some simple tests searching 
through sample data to see if it really meets your performance needs.  
On the other hand, I'd still consider using MySQL - it's really not 
that complex, and you gain a lot of flexibility.  By that I mean once 
everything is in a group of tables, you can then do lots of ad-hoc 
queries on it to find out useful information, in a much easier way than 
writing a perl script every time you want to know something.  If you 
can handle perl programming, you'll probably be able to learn enough 
about MySQL and the perl DBI interface to be doing useful stuff in less 
than a day.  There are a ton of tutorials out there and the MySQL 
manual is excellent.

Ian



[OT] MySQL for Web Apps

2004-02-03 Thread Bill Stephenson
If you're busy, please forgive me and ignore this; if you have time to 
offer an opinion, I'd really like to hear from this list on this 
subject:

If I am building a web app from the ground up, what's the best way to 
deal with storing/retrieving data? For argument's sake let's say the app 
will have 2500 users to begin with that each hit the server an average 
of 50 times a week. Each request delivers 40k of data. Users can search 
through their saved records where each record contains 5-50k of data. 
Users can have up to 2000 records. In 5 years the app will have, maybe, 
25,000 users. In 10 years, say, 100,000 users. If it ever has more 
users than that, I'll write a help wanted message.

I'd like to store using XML in a separate text file for each record 
created because it's easy and gives me flexibility. I can add data 
fields without tweaking tables in a MySQL database. I can add users 
easily and keep their data in a separate directory that is easy to 
locate. I'm told that storing/retrieving data in text files is slow and 
so is parsing that data. I've never used XML::Parser but I thought I'd 
give it a spin.

I hear MySQL is speedy, but it seems to me that it adds complexity to 
such a degree that it may not be an even trade off. I could store data 
in an XML format in a single field in a MySQL database, but I'd still 
have to parse it.

As computers keep getting faster, and memory and storage cheaper, isn't 
it beneficial to program in the most simple, human readable, least 
learning required, method?

In short, I'm lazy. I'd rather code this all in perl. Do I really need 
to learn about and use MySQL or will computers get fast enough that it 
won't matter anyway.

Kindest Regards,

Bill Stephenson