php-general Digest 24 Nov 2007 16:38:40 -0000 Issue 5145

Topics (messages 264987 through 264997):

Performance question for table updating
        264987 by: Jon Westcot
        264988 by: Andrés Robinet

Re: File handling and different character sets
        264989 by: Andrés Robinet

Re: Performance question for table updating (SOLVED)
        264990 by: Jon Westcot
        264995 by: Bastien Koert

Re: quicktime new window php
        264991 by: Bill Guion
        264992 by: kNish
        264993 by: Jochem Maas
        264994 by: Luca Paolella

Re: Basic question - PHP usage of SVG files [SOLVED]
        264996 by: tedd

Re: Basic question - PHP usage of SVG files
        264997 by: Al

Administrivia:

To subscribe to the digest, e-mail:
        [EMAIL PROTECTED]

To unsubscribe from the digest, e-mail:
        [EMAIL PROTECTED]

To post to the list, e-mail:
        [EMAIL PROTECTED]


----------------------------------------------------------------------
--- Begin Message ---
Hi all:

    For those who've been following the saga, I'm working on an application 
that needs to load a data file containing approximately 29,000 to 35,000 
records (and not short ones, either) into several tables.  I'm using MySQL as 
the database.

    I've noticed a really horrible performance difference between INSERTing 
rows into the table and UPDATEing rows with new data when they already exist in 
the table.  For example, when I first start with an empty table, the 
application inserts around 29,600 records in something less than 6 minutes.  
But, when I use the second file, updating that same table takes over 90 minutes.

    Here's my question: I had assumed -- probably wrongly -- that it would be 
far more expedient to only update rows where data had actually changed; 
moreover, that I should only update the changed fields in the particular rows.  
This involves a large number of if statements, i.e.,

    if ($old_row["field_a"] !== $new_row["field_66"]) {
        $update_query .= "field_a = '" .
            mysql_real_escape_string($new_row["field_66"]) . "',";
    }

    Eventually, I wind up with a query similar to:

        UPDATE table_01 SET field_a = 'New value here', updated=CURDATE()
        WHERE primary_key=12345
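
    In condensed form, the per-row logic looks something like this (only one 
field test is shown, and names like $set_clauses and $primary_key_value are 
just stand-ins for illustration):

    $set_clauses = "";

    // One test per field: add a SET clause only when the value actually changed.
    if ($old_row["field_a"] !== $new_row["field_66"]) {
        $set_clauses .= "field_a = '" .
            mysql_real_escape_string($new_row["field_66"]) . "', ";
    }
    // ... roughly 75 similar tests ...

    // If anything changed, stamp the row and run the UPDATE shown above.
    if ($set_clauses !== "") {
        $update_query = "UPDATE table_01 SET " . $set_clauses .
            "updated = CURDATE() WHERE primary_key = " . $primary_key_value;
        mysql_query($update_query) or die(mysql_error());
    }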

    I thought that, to keep the table updating to a minimum, this approach made 
the most sense.  However, seeing the two hugely different performance times has 
made me question whether or not it would be faster to simply update every field 
in the table and eliminate all of these test conditions.

    And, before someone comments that indexes on the table can cause 
performance hits, I DROP nearly all of the indexes at the start of the 
processing, only keeping those indexes necessary to do the original INSERT or 
the subsequent UPDATE, and then add all of the extra "steroid" indexes (you 
know -- the performance-enhancing ones <g>) after all of the INSERTs and 
UPDATEs have been finished.

    So, long story short (oops -- too late!), what's the consensus among the 
learned assembly here?  Is it faster to just UPDATE the record if it already 
exists regardless of the fact that maybe only one or two out of 75 or more 
fields changed versus testing each one of those 75 fields to try and figure out 
which ones actually changed and then only update those?

    I look forward to reading all of your thoughts.

    Sincerely,

        Jon

--- End Message ---
--- Begin Message ---
> -----Original Message-----
> From: Jon Westcot [mailto:[EMAIL PROTECTED]
> Sent: Saturday, November 24, 2007 4:32 AM
> To: PHP General
> Subject: [PHP] Performance question for table updating
> 
> Hi all:
> 
>     For those who've been following the saga, I'm working on an
> application that needs to load a data file consisting of approximately
> 29,000 to 35,000 records in it (and not short ones, either) into
> several tables.  I'm using MySQL as the database.
> 
>     I've noticed a really horrible performance difference between
> INSERTing rows into the table and UPDATEing rows with new data when
> they already exist in the table.  For example, when I first start with
> an empty table, the application inserts around 29,600 records in
> something less than 6 minutes.  But, when I use the second file,
> updating that same table takes over 90 minutes.
> 
>     Here's my question: I had assumed -- probably wrongly -- that it
> would be far more expedient to only update rows where data had actually
> changed; moreover, that I should only update the changed fields in the
> particular rows.  This involves a large number of if statements, i.e.,
> 
>     if($old_row["field_a"] !== $new_row["field_66"] {
>         $update_query .= "field_a = '" .
> mysql_real_escape_string($new_row["field_66"]) . "',";
>     }
> 
>     Eventually, I wind up with a query similar to:
> 
>         UPDATE table_01 SET field_a = 'New value here',
> updated=CURDATE() WHERE primary_key=12345
> 
>     I thought that, to keep the table updating to a minimum, this
> approach made the most sense.  However, seeing the two hugely different
> performance times has made me question whether or not it would be
> faster to simply update every field in the table and eliminate all of
> these test conditions.
> 
>     And, before someone comments that indexes on the table can cause
> performance hits, I DROP nearly all of the indexes at the start of the
> processing, only keeping those indexes necessary to do the original
> INSERT or the subsequent UPDATE, and then add all of the extra
> "steroid" indexes (you know -- the performance-enhancing ones <g>)
> after all of the INSERTs and UPDATEs have been finished.
> 
>     So, long story short (oops -- too late!), what's the concensus
> among the learned assembly here?  Is it faster to just UPDATE the
> record if it already exists regardless of the fact that maybe only one
> or two out of 75 or more fields changed versus testing each one of
> those 75 fields to try and figure out which ones actually changed and
> then only update those?
> 
>     I look forward to reading all of your thoughts.
> 
>     Sincerely,
> 
>         Jon

I don't know about consensus over here, because I'm kind of a newgie (stands
for new geek, as opposed to newbie, which stands for new ball breaker :D :D ).
I don't know your previous messages, but I can tell you one story...

Some time ago I got involved in a project that required geo-distance
calculation (you know, the distance between two points given latitude and
longitude). Basically I had to take a set of points and calculate the
distance of each of those points to a given (reference) one. The math was
something like the "square root of the sum of a constant times the square
sin of..." Well, I can't remember it, but the point is, it was a complicated
formula, which I thought would allow for some optimizations in PHP.
Accustomed to regular (compiled) programming languages, I developed a set of
routines to optimize the task and went ahead and queried the database for
the (say, 1000 record) dataset of points. Then I applied the math to the
points and the reference point and got the result... in about 5 minutes, to
my (disgusted) surprise.

Then I grabbed the MySQL manual and built a "non-optimized" version of the
formula to put directly in the SQL query, letting MySQL calculate the
"shortest distance" (which was my goal in the end) right away. I thought,
"OK, I'll prepare a cup of coffee while MySQL finishes the calculation." To
my surprise, the query returned the expected result in less than 2 seconds.

My logic was (wrongly) the following: PHP is a programming language, SQL is
a data access language; I'll get the data using MySQL and do the math using
PHP. But I forgot that PHP is an interpreted language, and that a number is
more than a number to PHP: it's a ZVAL_<whatever> object behind the scenes.
I forgot about the memory and the time required to build those objects when
you retrieve data out of a database server. I forgot about parsing time, and
about the "support logic and safety checks" in the language that kill any
attempt to build TDCPL (Too Damn Complex Programming Logic) in PHP.

So now, when I have to apply some logic to the retrieved data, I first check
how much of it I can push into the query itself, so that little or no
programming logic is left in PHP after retrieving (or before storing) the
data.
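
Just to illustrate the idea of pushing the math into the query (table and
column names are made up, and this is the standard great-circle formula, not
the exact one from that project):

    // Let MySQL compute the distance and return only the nearest point,
    // instead of pulling 1000 rows and doing the math in PHP.
    $ref_lat = 40.7128;   // reference point (example values)
    $ref_lon = -74.0060;

    $sql = sprintf(
        "SELECT id, lat, lon,
                6371 * 2 * ASIN(SQRT(
                    POW(SIN(RADIANS(lat - %F) / 2), 2)
                    + COS(RADIANS(%F)) * COS(RADIANS(lat))
                    * POW(SIN(RADIANS(lon - %F) / 2), 2)
                )) AS distance_km
         FROM points
         ORDER BY distance_km
         LIMIT 1",
        $ref_lat, $ref_lat, $ref_lon);
    $result  = mysql_query($sql) or die(mysql_error());
    $nearest = mysql_fetch_assoc($result);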
All that said, I'd give the MySQL REPLACE statement a shot (I wouldn't even
branch the code into INSERT or UPDATE depending on whether the record already
exists; if you have a primary key, all you need is REPLACE). But PLEASE LOOK
AT THE GOTCHAS (like SET col_name = col_name + 1, which doesn't behave the
way you'd expect with REPLACE); see the sketch below.
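
For example (column names invented; the second form is the INSERT ... ON
DUPLICATE KEY UPDATE alternative, which avoids the delete-and-reinsert
behaviour):

    // REPLACE deletes the old row and inserts a fresh one, so any column you
    // don't list falls back to its default value, and SET col = col + 1 won't
    // see the old row's value.
    mysql_query("REPLACE INTO table_01 (primary_key, field_a, updated)
                 VALUES ('12345', 'New value here', CURDATE())")
        or die(mysql_error());

    // INSERT ... ON DUPLICATE KEY UPDATE keeps the existing row and only
    // touches the columns you name, so expressions on the old values still work.
    mysql_query("INSERT INTO table_01 (primary_key, field_a, updated)
                 VALUES ('12345', 'New value here', CURDATE())
                 ON DUPLICATE KEY UPDATE
                     field_a = VALUES(field_a),
                     updated = VALUES(updated)")
        or die(mysql_error());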
Furthermore, if those data files were to be uploaded by me (I mean me, the
coder, not the end user), I'd build (or use) a program to convert them to SQL
statements on my desktop PC, where I can use faster programming languages and
can afford five minutes of heavy processing, instead of hammering the server
for five minutes and slowing down every other service on it.
In the end it depends on your requirements, where you get the data from, and
if and how you want to automate the task. (I didn't get your previous
messages, as I subscribed recently; if you can send me a link to them...
great!)

Rob

--- End Message ---
--- Begin Message ---
> -----Original Message-----
> From: Per Eriksson [mailto:[EMAIL PROTECTED]
> Sent: Friday, November 23, 2007 7:15 AM
> To: [EMAIL PROTECTED]
> Subject: [PHP] File handling and different character sets
> 
> Hi,
> 
> I would like to know how you work with the PHP Directory Functions and
> different character sets. If I am having a professional view,
> well-written code should be able to handle file systems in different
> character sets.
> 
> http://se.php.net/manual/sv/ref.dir.php
> 
> Is there a way to write code for listing files from a ISO-8859-1 on a
> UTF-8 page? I haven't succeeded with this.
> 
> 
> Thank you,
> 
> Best Regards
> 
> Per Eriksson
> per.eriksson A exist ! se
> 
> --
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php

Hi Per,

I'm just curious: no matter what encoding I choose (IE and FF switch
automatically to UTF-8, as per the page's meta tag and Content-Type header),
I get funny characters at http://se.php.net/manual/sv/ref.dir.php. I don't
think it's the default browser font, because I've tried several. My system is
Windows XP SP2, Spanish version, but I don't think that's the cause either,
as it is up to date and I have even installed support for right-to-left
writing...

OK, I know I could just use wget, save the result, and open it in a binary
editor to see what the actual bytes are and check the encoding (I won't...
I'm kind of lazy today :D ).

Regarding your question, here are two functions I copied from the user notes
in the extended CHM version of the PHP manual. They are under the
mb_convert_encoding function reference and should be in the online version
of the manual as well (I won't check... too lazy, I told you)...

[snip]
volker at machon dot biz (25-Sep-2007 05:05)

Hey guys. For everybody who's looking for a function that converts an
ISO string to UTF-8 or a UTF-8 string to ISO, here's your solution:

// Detect whether the string is UTF-8 or one of the Latin-1 variants,
// then convert it to UTF-8.
function encodeToUtf8($string) {
    return mb_convert_encoding($string, "UTF-8",
        mb_detect_encoding($string, "UTF-8, ISO-8859-1, ISO-8859-15", true));
}

// Same detection, but convert to ISO-8859-1 instead.
function encodeToIso($string) {
    return mb_convert_encoding($string, "ISO-8859-1",
        mb_detect_encoding($string, "UTF-8, ISO-8859-1, ISO-8859-15", true));
}

These functions work fine for me. Give them a try.
[/snip]

The first thing to test would be whether the directory/filesystem functions
are actually returning data encoded in ISO-8859-1 (I guess it depends on the
OS, but you may know better); otherwise mb_convert_encoding would end up
"double converting", much like double escaping or double urlencoding (a known
issue for all of us, huh?). That's why encodeToUtf8 uses mb_detect_encoding
first... though I wonder whether mb_detect_encoding can guarantee anything
beyond the byte stream being valid in the given character set(s). So... what
do you think? Did you get any further with this? And do you have any code
sample you're working on that you could share?
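
To make it concrete, here's roughly what I had in mind (it assumes the
filesystem hands back ISO-8859-1 names, that the mbstring extension is
loaded, and the path is made up):

    // List a directory whose file names are ISO-8859-1 and print them on a
    // UTF-8 page.
    header('Content-Type: text/html; charset=UTF-8');

    $dir = opendir('/path/to/files');
    while (($name = readdir($dir)) !== false) {
        if ($name === '.' || $name === '..') {
            continue;
        }
        // Convert the raw file name to UTF-8 before sending it to the browser.
        $utf8 = mb_convert_encoding($name, 'UTF-8', 'ISO-8859-1');
        echo htmlspecialchars($utf8, ENT_QUOTES, 'UTF-8') . "<br />\n";
    }
    closedir($dir);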

Regards,

Rob


Andrés Robinet | Lead Developer | BESTPLACE CORPORATION
5100 Bayview Drive 206, Royal Lauderdale Landings, Fort Lauderdale, FL 33308
| TEL 954-607-4207 | FAX 954-337-2695 | 
Email: [EMAIL PROTECTED]  | MSN Chat: [EMAIL PROTECTED]  |  SKYPE:
bestplace |  Web: bestplace.biz  | Web: seo-diy.com

--- End Message ---
--- Begin Message ---
Hi Rob, et al.:

----- Original Message -----
From: "Andrés Robinet" <[EMAIL PROTECTED]>
> > -----Original Message-----
> > From: Jon Westcot [mailto:[EMAIL PROTECTED]
> >
> > :: gigantic snip here::
> >
> >     So, long story short (oops -- too late!), what's the concensus
> > among the learned assembly here?  Is it faster to just UPDATE the
> > record if it already exists regardless of the fact that maybe only one
> > or two out of 75 or more fields changed versus testing each one of
> > those 75 fields to try and figure out which ones actually changed and
> > then only update those?
> >
> >     I look forward to reading all of your thoughts.
> >
> >     Sincerely,
> >
> >         Jon
>
> I don't know about consensus over here because I'm kind of newgie (stands
> for new geek, as opposed to newbie which stands for new ball breaker :D :D
> ). I don't know of your previous messages but I can tell you one story...
> Some time ago I got involved in a project that required geo-distance
> calculation (you know distance between two points with latitude and
> longitude). Basically I had to take a set of points and calculate the
> distance of each of those points to a given (reference) one. The math was
> something like the "square root of the sum of a constant times the square
> sin of..." well, I can't remember it, but the point is, it was a complicated
> formula, which I thought it would allow for some optimizations in PHP.
> Accustomed to regular (compiled) programming languages I developed a set of
> routines to optimize the task and went ahead and queried the database for
> the (say, 1000 records) dataset of points. Then applied the math to the
> points and the reference point and got the result... in about 5 minutes to
> my (disgusting) surprise.
> Then I grabbed the MySQL manual, built a "non-optimized" version of the
> formula to put directly in the SQL query and get the "shortest distance"
> (which was my goal in the end) calculated by MySQL right away. I thought
> "ok, I'll prepare a cup of coffee to wait for MySQL to finish the
> calculation". To my surprise the query returned the expected result in less
> than 2 seconds.
> My logic was (wrongly) the following: PHP is a programming language, SQL is
> a data access language; I'll get the data using MySQL and do the math using
> PHP. But I forgot PHP is an interpreted language, that a number is more than
> a number to PHP, but a ZVAL_<whatever> object behind the scenes. I forgot
> about the memory and the time required to build those objects when one
> retrieves data out of a database server. I forgot about parsing time, and
> "support logic and safety checks" in the language that overkill any attempt
> to build TDCPL (Too Damn Complex Programming Logic) in PHP.
> So, now, when I have to do some logic stuff to the retrieved data, I first
> check "how much" I can push into the query itself, to get little or nothing
> of programming logic in PHP after retrieving (before storing) the data.
> All that said, I'd give a shot to the MySQL REPLACE function (I wouldn't
> even branch the code to use INSERT or UPDATE depending on the record already
> existing or not, If you have a primary key, all you need is REPLACE). But,
> PLEASE LOOK AT THE GOTCHAS (like set col_name=col_name+1). Furthermore, If
> those data files were to be uploaded by me (I mean, me, the coder, not the
> end user), I'd build (use) a program to convert them to SQL sentences in my
> desktop PC where I can use faster programming languages and I can wait for
> five minutes of heavy processing (instead of overkilling the server for five
> minutes which will slow down every other service in there).
> In the end it depends on your requirements and where you get the data from
> and if and how you want to automate the task (I didn't get your previous
> messages, I got subscribed recently, if you can send me a link to those
> ones... great!)
>
> Rob

    Thanks for the comments and suggestions.  Prior to receiving your note,
I went back and did a bit of checking on my code.  Turns out that the
problem was "hardware" related -- the infamous "Loose Screw Behind the
Keyboard."

    The problem actually boiled down to two quotation marks -- they were
present in the search code to see if a record with the specified key
existed, but were omitted in the WHERE clause of the UPDATE statement.  Said
update therefore refused to use the nice little index I'd provided for its
use and instead scanned through the entire table to find the record in
question.
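
    To make the difference concrete (same table and column names as in my
earlier example; the key is a character column, which is why the quoting
matters, and $key here is just a stand-in for the value):

        // What the code was generating: a bare number compared against a
        // character column makes MySQL cast every key value, so the index
        // is ignored and the whole table gets scanned.
        $bad  = "UPDATE table_01 SET field_a = 'New value here', " .
                "updated = CURDATE() WHERE primary_key = " . $key;

        // What it should have been: quote (and escape) the value so the
        // index on primary_key can be used.
        $good = "UPDATE table_01 SET field_a = 'New value here', " .
                "updated = CURDATE() WHERE primary_key = '" .
                mysql_real_escape_string($key) . "'";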

    Moral of the story?  Two, really.  First, ensure you always reference
values in the way most appropriate for their type.  Second, don't make your
idiocy public by asking stupid questions on a public forum. <g>  What's the
quote (probably attributed to Churchill)?  "It is better to be ignorant and
silent than to voice one's opinions and remove all doubt." ;)

    Jon

--- End Message ---
--- Begin Message ---
Could there be some performance gain from uploading the data to a separate 
staging table and then doing the update / insert via SQL?
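
Something like this, maybe (staging table name, file path, and column names
are all invented, and it assumes the file is in a format LOAD DATA can parse):

    // Pull the raw file into an empty staging table with the same columns
    // as the target table.
    mysql_query("TRUNCATE TABLE staging_01") or die(mysql_error());
    mysql_query("LOAD DATA LOCAL INFILE '/tmp/import.txt'
                 INTO TABLE staging_01
                 FIELDS TERMINATED BY '\\t'
                 LINES TERMINATED BY '\\n'") or die(mysql_error());

    // Then let MySQL do the insert-or-update in a single pass.
    mysql_query("INSERT INTO table_01 (primary_key, field_a, updated)
                 SELECT primary_key, field_a, CURDATE() FROM staging_01
                 ON DUPLICATE KEY UPDATE
                     field_a = VALUES(field_a),
                     updated = CURDATE()") or die(mysql_error());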

bastien

----------------------------------------
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Date: Sat, 24 Nov 2007 04:03:53 -0700
> Subject: Re: [PHP] Performance question for table updating (SOLVED)
> 
> Hi Rob, et al.:
> 
> ----- Original Message -----
> From: "Andrés Robinet" 
>>> -----Original Message-----
>>> From: Jon Westcot [mailto:[EMAIL PROTECTED]
>>>
>>> :: gigantic snip here::
>>>
>>>     So, long story short (oops -- too late!), what's the concensus
>>> among the learned assembly here?  Is it faster to just UPDATE the
>>> record if it already exists regardless of the fact that maybe only one
>>> or two out of 75 or more fields changed versus testing each one of
>>> those 75 fields to try and figure out which ones actually changed and
>>> then only update those?
>>>
>>>     I look forward to reading all of your thoughts.
>>>
>>>     Sincerely,
>>>
>>>         Jon
>>
>> I don't know about consensus over here because I'm kind of newgie (stands
>> for new geek, as opposed to newbie which stands for new ball breaker :D :D
>> ). I don't know of your previous messages but I can tell you one story...
>> Some time ago I got involved in a project that required geo-distance
>> calculation (you know distance between two points with latitude and
>> longitude). Basically I had to take a set of points and calculate the
>> distance of each of those points to a given (reference) one. The math was
>> something like the "square root of the sum of a constant times the square
>> sin of..." well, I can't remember it, but the point is, it was a complicated
>> formula, which I thought it would allow for some optimizations in PHP.
>> Accustomed to regular (compiled) programming languages I developed a set of
>> routines to optimize the task and went ahead and queried the database for
>> the (say, 1000 records) dataset of points. Then applied the math to the
>> points and the reference point and got the result... in about 5 minutes to
>> my (disgusting) surprise.
>> Then I grabbed the MySQL manual, built a "non-optimized" version of the
>> formula to put directly in the SQL query and get the "shortest distance"
>> (which was my goal in the end) calculated by MySQL right away. I thought
>> "ok, I'll prepare a cup of coffee to wait for MySQL to finish the
>> calculation". To my surprise the query returned the expected result in less
>> than 2 seconds.
>> My logic was (wrongly) the following: PHP is a programming language, SQL is
>> a data access language; I'll get the data using MySQL and do the math using
>> PHP. But I forgot PHP is an interpreted language, that a number is more than
>> a number to PHP, but a ZVAL_<whatever> object behind the scenes. I forgot
>> about the memory and the time required to build those objects when one
>> retrieves data out of a database server. I forgot about parsing time, and
>> "support logic and safety checks" in the language that overkill any attempt
>> to build TDCPL (Too Damn Complex Programming Logic) in PHP.
>> So, now, when I have to do some logic stuff to the retrieved data, I first
>> check "how much" I can push into the query itself, to get little or nothing
>> of programming logic in PHP after retrieving (before storing) the data.
>> All that said, I'd give a shot to the MySQL REPLACE function (I wouldn't
>> even branch the code to use INSERT or UPDATE depending on the record already
>> existing or not, If you have a primary key, all you need is REPLACE). But,
>> PLEASE LOOK AT THE GOTCHAS (like set col_name=col_name+1). Furthermore, If
>> those data files were to be uploaded by me (I mean, me, the coder, not the
>> end user), I'd build (use) a program to convert them to SQL sentences in my
>> desktop PC where I can use faster programming languages and I can wait for
>> five minutes of heavy processing (instead of overkilling the server for five
>> minutes which will slow down every other service in there).
>> In the end it depends on your requirements and where you get the data from
>> and if and how you want to automate the task (I didn't get your previous
>> messages, I got subscribed recently, if you can send me a link to those
>> ones... great!)
>>
>> Rob
> 
>     Thanks for the comments and suggestions.  Prior to receiving your note,
> I went back and did a bit of checking on my code.  Turns out that the
> problem was "hardware" related -- the infamous "Loose Screw Behind the
> Keyboard."
> 
>     The problem actually boiled down to two quotation marks -- they were
> present in the search code to see if a record with the specified key
> existed, but were omitted in the WHERE clause of the UPDATE statement.  Said
> update therefore refused to use the nice little index I'd provided for its
> use and instead scanned through the entire table to find the record in
> question.
> 
>     Moral of the story?  Two, really.  First, ensure you always reference
> values in the way most appropriate for their type.  Second, don't make your
> idiocy public by asking stupid questions on a public forum. <g>  What's the
> quote (probably attributed to Churchill)?  "It is better to be ignorant and
> silent than to voice one's opinions and remove all doubt." ;)
> 
>     Jon
> 
> -- 
> PHP General Mailing List (http://www.php.net/)
> To unsubscribe, visit: http://www.php.net/unsub.php
> 


--- End Message ---
--- Begin Message ---
At 5:26 PM +0530 11/23/07, kNish wrote:

Hi,

          How is it possible to have a hyper link open a new quicktime window


BRgds,

kNish

Not sure if this is what you are looking for, but if your HTML code looks
like <a href="link-to-page" target="_blank">page name</a>, the page will open
in either a new window or a new tab. That choice depends on the local browser
configuration.

     -----===== Bill =====-----
--

Do not confuse liberty with license.

--- End Message ---
--- Begin Message ---
Hi,

                 Given that a PC has QuickTime loaded, how is it possible
to have a hyperlink open a new QuickTime window, i.e. a window with a .mov
file playing in it, along with the play, stop, and pause controls?

BRgds,

kNish

On 11/24/07, marco <[EMAIL PROTECTED]> wrote:
>  you mean's that just open a quicktime window ? expect with a clip?
>
> 2007/11/23, kNish <[EMAIL PROTECTED]>:
> >
> > Hi,
> >
> >          How is it possible to have a hyper link open a new quicktime
> > window
> >
> >
> > BRgds,
> >
> > kNish
> >
> > --
> > PHP General Mailing List (http://www.php.net/)
> > To unsubscribe, visit: http://www.php.net/unsub.php
> >
> >
>

--- End Message ---
--- Begin Message ---
kNish wrote:
> Hi,
> 
>                  Given that a pc has quicktime loaded, how is it
> possible to have a hyper link open a new quicktime window. i.e. a
> window that has a .mov file playing. Along with it, it has the play
> stop pause options too.

this has nothing to do with php. please find a suitable HTML or JS mailing list
for such questions.

> 
> BRgds,
> 
> kNish
> 
> On 11/24/07, marco <[EMAIL PROTECTED]> wrote:
>>  you mean's that just open a quicktime window ? expect with a clip?
>>
>> 2007/11/23, kNish <[EMAIL PROTECTED]>:
>>> Hi,
>>>
>>>          How is it possible to have a hyper link open a new quicktime
>>> window
>>>
>>>
>>> BRgds,
>>>
>>> kNish
>>>
>>> --
>>> PHP General Mailing List (http://www.php.net/)
>>> To unsubscribe, visit: http://www.php.net/unsub.php
>>>
>>>
> 

--- End Message ---
--- Begin Message ---
This isn't related to PHP, so I think you should ask somewhere else...

Anyway, to answer your question, if I may: you can't guarantee what you're
asking, because most browsers will open a new browser window containing a
player which IS QuickTime but has a different appearance. IMHO the only way
to be sure the video will be opened in QuickTime itself is to have the user
download it using some "Save linked file..." option and then open it.

On Nov 24, 2007, at 2:17 PM, kNish wrote:

Hi,

                 Given that a pc has quicktime loaded, how is it
possible to have a hyper link open a new quicktime window. i.e. a
window that has a .mov file playing. Along with it, it has the play
stop pause options too.

BRgds,

kNish

On 11/24/07, marco <[EMAIL PROTECTED]> wrote:
 you mean's that just open a quicktime window ? expect with a clip?

2007/11/23, kNish <[EMAIL PROTECTED]>:

Hi,

How is it possible to have a hyper link open a new quicktime
window


BRgds,

kNish

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php


--- End Message ---
--- Begin Message ---
At 1:14 PM +0900 11/24/07, Dave M G wrote:
Larry,

Thanks for your advice.

With the XML editor available within PHP, I've made a small script that can extract the point data inside an SVG file, and store them as an array of points.

That array can then be used to draw and fill shapes in a PNG image. And since they are stored as an array, I can do a little math on them to manipulate their scale and whatnot.

The interpolation between points gets lost with this method, but in this case, I can get by with straight lines.

Thanks for helping me to see the value in using PHP's XML functions.

--
Dave M G


Dave:

If you want to smooth those points out, you can do it with a Bézier or Catmull-Rom spline.

The code for a Catmull-Rom spline can be found here:

http://brlcad.org/doxygen/d7/ddd/rt__dspline_8c-source.html

I would love to see it translated into php.
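
Something along these lines would be a start (this is just the textbook
Catmull-Rom blend for one segment, not a translation of the BRL-CAD code):

// Given four control points (each an array of x, y), return $steps + 1
// interpolated points between $p1 and $p2 on a Catmull-Rom spline.
function catmull_rom($p0, $p1, $p2, $p3, $steps = 10)
{
    $out = array();
    for ($i = 0; $i <= $steps; $i++) {
        $t  = $i / $steps;
        $t2 = $t * $t;
        $t3 = $t2 * $t;
        $pt = array();
        foreach (array(0, 1) as $k) {   // x and y components
            $pt[$k] = 0.5 * (
                  2 * $p1[$k]
                + (-$p0[$k] + $p2[$k]) * $t
                + ( 2 * $p0[$k] - 5 * $p1[$k] + 4 * $p2[$k] - $p3[$k]) * $t2
                + (-$p0[$k] + 3 * $p1[$k] - 3 * $p2[$k] + $p3[$k]) * $t3
            );
        }
        $out[] = $pt;
    }
    return $out;
}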

I have a copy of bezier code in C -- contact me off-list for a copy.

Cheers,

tedd


--
-------
http://sperling.com  http://ancientstones.com  http://earthstones.com

--- End Message ---
--- Begin Message ---
Imagemagick php extension http://www.php.net/manual/en/ref.imagick.php

Works great.
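
Something like this (an untested sketch; SVG support depends on your
ImageMagick build having an SVG delegate such as librsvg, and the file names
are made up):

// Rasterize an SVG, resize it, drop it on top of another image, and save
// the result as a PNG. Requires the imagick extension.
$overlay = new Imagick();
$overlay->readImage('shape.svg');
$overlay->setImageFormat('png');
$overlay->resizeImage(200, 200, Imagick::FILTER_LANCZOS, 1);

$base = new Imagick('background.png');
$base->compositeImage($overlay, Imagick::COMPOSITE_OVER, 50, 50);
$base->writeImage('result.png');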

Dave M G wrote:
PHP list,

I have some images that are in SVG format. What I want to do with them is manipulate them by resizing and overlaying one on top of the other.

I do this frequently with PNG images, and I could first convert these images to PNG before manipulating them in PHP.

However, I'd like to preserve line quality by keeping them as SVG until the last moment.

I can't see on the online documentation if SVG is supported and if it requires different commands than raster image formats.

What is the support for SVG in PHP, and where is the online documentation for it?

Thank you for your help.


--- End Message ---
