When I try to insert the string
"http://vids.myspace.com/index.cfm?fuseaction=vids.individual&videoid=2012774576",
it returns
"error : Duplicate entry
'http://vids.myspace.com/index.cfm?fuseaction=vids.individual&videoid=' for
key 3"
When I check the table and do a search for the string, there
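One possible cause (an assumption, since the table definition isn't shown): the duplicate value in the error stops right at 'videoid=', which is what you would see if the unique key covers only a prefix of the column. A minimal sketch with a hypothetical table, column, and prefix length:

CREATE TABLE videos (
    id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    url VARCHAR(255) NOT NULL,
    PRIMARY KEY (id),
    UNIQUE KEY url_prefix (url(70))  -- only the first 70 characters are checked for uniqueness
);

INSERT INTO videos (url)
VALUES ('http://vids.myspace.com/index.cfm?fuseaction=vids.individual&videoid=1111111111');

-- Fails with "Duplicate entry '...&videoid=' for key ..." because the first
-- 70 characters match the row above:
INSERT INTO videos (url)
VALUES ('http://vids.myspace.com/index.cfm?fuseaction=vids.individual&videoid=2012774576');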
On 3/6/07, abhishek jain <[EMAIL PROTECTED]> wrote:
On 3/6/07, Nils Meyer <[EMAIL PROTECTED]> wrote:
> Hi,
>
> abhishek jain wrote:
> > I have a database with varchar(255) columns named title,
> > extra_info1, extra_info2, extra_info3.
> > I want to search all these columns with a search
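For the multi-column search itself, a sketch (hypothetical table name and search term) of the two usual approaches in MySQL 5.0:

-- 1) A plain LIKE over each column:
SELECT *
FROM items
WHERE title       LIKE '%needle%'
   OR extra_info1 LIKE '%needle%'
   OR extra_info2 LIKE '%needle%'
   OR extra_info3 LIKE '%needle%';

-- 2) A multi-column FULLTEXT index (MyISAM only in 5.0):
ALTER TABLE items
  ADD FULLTEXT KEY ft_all (title, extra_info1, extra_info2, extra_info3);

SELECT *
FROM items
WHERE MATCH (title, extra_info1, extra_info2, extra_info3) AGAINST ('needle');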
Brian, the online MySQL documentation is very complete and easy to read.
That said, you do kind of have to know what you're looking for! I'm not
sure what to recommend for a guide to beginning SQL, sorry, others may have
some thoughts.
You are going down the right road with an aggregate function
I ended up figuring this out. If anyone ever needs it, this works well
select module_id, GROUP_CONCAT(participant_answer SEPARATOR ' ') as answers
from participants_answers
where email = '[EMAIL PROTECTED]'
group by module_id
-Brian
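One related caveat: GROUP_CONCAT silently truncates its result at group_concat_max_len (1024 bytes by default), so if the concatenated answers can get long, the limit needs raising. A variant of the query above, ordering by question_id:

SET SESSION group_concat_max_len = 65535;

SELECT module_id,
       GROUP_CONCAT(participant_answer ORDER BY question_id SEPARATOR ' ') AS answers
FROM participants_answers
WHERE email = '[EMAIL PROTECTED]'
GROUP BY module_id;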
-Original Message-
From: Brian Menke [mailto:[EMAIL
thanks for your response!
I hope the engineers at MySQL AB can improve it.
Nils Meyer <[EMAIL PROTECTED]> wrote:
Hi Xian,
xian liu wrote:
> ERROR 1295 (HY000): This command is not supported in the prepared statement
> protocol yet
> mysql> drop procedure ct_tb//
> Query OK, 0 rows affected (0.00 sec)
MySQL 5.x
I have a table that looks like this:
module_id  question_id  email              participant_answer
2          2.1          [EMAIL PROTECTED]  a
2          2.2          [EMAIL PROTECTED]  b
2          2.3          [EMAIL PROTECTED]
Hi,
When I try to run this function, I receive ERROR 1146 (42S02): Table
'gi2.meta' doesn't exist
CREATE FUNCTION get_version()
RETURNS INT UNSIGNED
BEGIN
DECLARE exist_ TINYINT;
SELECT COUNT(*) INTO exist_ FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'GI2' AND TABLE_NAME = 'Meta';
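The function body is cut off above, so the following is only a guess at its shape: a sketch in which the SELECT against GI2.Meta (with a hypothetical 'version' column) runs only when the INFORMATION_SCHEMA check finds the table, so a missing table returns 0 instead of raising ERROR 1146.

DELIMITER //

CREATE FUNCTION get_version()
RETURNS INT UNSIGNED
READS SQL DATA
BEGIN
    DECLARE exist_ TINYINT;
    DECLARE version_ INT UNSIGNED DEFAULT 0;

    SELECT COUNT(*) INTO exist_
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA = 'GI2' AND TABLE_NAME = 'Meta';

    IF exist_ > 0 THEN
        SELECT version INTO version_ FROM GI2.Meta LIMIT 1;  -- 'version' column name is an assumption
    END IF;

    RETURN version_;
END//

DELIMITER ;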
I'd given some thought to this a while ago. The only way you are going
to be able to tell if a row has changed is to have a date column on every
Oracle table that indicates the last time the data changed.
You'll need some program to start up that knows the last time it ran,
and the current date, a
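A sketch of that approach, written in MySQL syntax for illustration (on the Oracle side the column would be maintained with a trigger or a default instead); table and column names are hypothetical:

ALTER TABLE customers
  ADD COLUMN last_modified TIMESTAMP NOT NULL
  DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

-- The sync job remembers when it last ran and pulls only rows touched since then:
SELECT *
FROM customers
WHERE last_modified >= '2007-03-07 02:00:00';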
> hehe, well said and sorry for the top-posting.
> I can only agree, both methods have their merits! :)
>
> Alec
Seems I started quite a debate ;)
I wanted to thank you all again for your opinion and for planting a new seed
of doubt on which direction I'll go ;)
I set up the db as proposed earlier
Hello everyone,
I had a quick question... I am looking to move away from our dependence on
Oracle over to using a MySQL Cluster. Due to the complexity of the move it
will have to happen over a period of time. What I would like to do is keep our
MySQL database in sync with our Oracle DB. This w
Kevin Hunter wrote:
Grrr. All you lazy top-posters! ;) It seems to me that a case can
be legitimately made for both methods of handling BLOBs. On the one
hand, where speed and/or efficiency (on many different levels) are the
top priorities, it'd be good to keep the DB as trim as possible.
On 07 Mar 2007 at 3:57p -0500, Alexander Lind wrote:
imagine a large system where pdf files are accessed by clients a lot.
say 1 pdf file is accessed per second on average.
also say that your database is on a machine separate from the
webserver(s) (as is common).
do you really think it's a good
I've built systems that stream tons of data via this method, at times
at some impressive requests per second. I've also exposed files stored
in this manner via an FTP interface, with servers able to deliver near wire
speed data in and out of the db storage.
When you're into a load balanced environ
imagine a large system where pdf files are accessed by clients a lot.
say 1 pdf file is accessed per second on average.
also say that your database is on a machine separate from the
webserver(s) (as is common).
do you really think it's a good idea to pump the pdf data from the db
each time it ne
I have to disagree with most; I would store the entire file in the
database, metadata and all. Better security: with a backend
database, it's much harder to get at the data than PDFs sitting in a
directory on the webserver. Plus, if you ever want to scale to a
multi-webserver environment, th
Here's a great article on how to store pdf/whatever binary as blob chunks:
http://www.dreamwerx.net/phpforum/?id=1
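The article has the full details; as a rough illustration of the general idea (hypothetical schema, not the article's exact code), each file is split into fixed-size pieces so no single row or packet has to carry the whole PDF:

CREATE TABLE file_data (
    file_id  INT UNSIGNED NOT NULL,
    chunk_no INT UNSIGNED NOT NULL,
    chunk    BLOB NOT NULL,          -- e.g. up to 64 KB of the file per row
    PRIMARY KEY (file_id, chunk_no)
);

-- The application writes the pieces in order (the string literals stand in
-- for the binary data it would bind):
INSERT INTO file_data (file_id, chunk_no, chunk)
VALUES (1, 0, 'chunk-0-bytes'),
       (1, 1, 'chunk-1-bytes');

-- ...and reassembles the file by reading them back in order:
SELECT chunk FROM file_data WHERE file_id = 1 ORDER BY chunk_no;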
On Wed, 7 Mar 2007, Jay Pipes wrote:
> Ed wrote:
> > Hi All,
> > I'm trying to figure out how to put a pdf file into a blob field.
> >
> > I guess a pdf file is a binary file an
On Wednesday 07 March 2007 21:26, Alexander Lind wrote:
> I would put the pdf as a regular file on the hd, and store the path to
> it in the db.
> Meta data could be things like the size of the pdf, author, owner,
> number of pages etc.
>
> Storing binary data from PDFs or images or any other comm
Ed wrote:
> On Wednesday 07 March 2007 19:28, Jay Pipes wrote:
>> Is there a specific reason you want to store this in a database? Why
>> not use the local (or networked) file system and simply store the
>> metadata about the PDF in the database?
>>
>> Cheers,
>>
>> Jay
>
> Hi Jay,
> Could you ex
I think he means you store only the name of the document
and the directory location where it is stored.
-Original Message-
>From: Ed <[EMAIL PROTECTED]>
>Sent: Mar 7, 2007 3:15 PM
>To: mysql@lists.mysql.com
>Subject: Re: binary into blob
>
>On Wednesday 07 March 2007 19:28, Jay Pipe
I would put the pdf as a regular file on the hd, and store the path to
it in the db.
Meta data could be things like the size of the pdf, author, owner,
number of pages etc.
Storing binary data from PDFs or images or any other common binary
format is generally not a good idea.
Alec
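A sketch (hypothetical names) of that layout: the PDF itself stays on disk, and the database only records where it is and the metadata about it.

CREATE TABLE documents (
    id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
    path       VARCHAR(255) NOT NULL,   -- e.g. '/var/docs/2007/report.pdf'
    size_bytes INT UNSIGNED NOT NULL,
    author     VARCHAR(100),
    owner      VARCHAR(100),
    pages      INT UNSIGNED,
    PRIMARY KEY (id),
    UNIQUE KEY path_uniq (path)
);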
Ed wrote
On Wednesday 07 March 2007 19:28, Jay Pipes wrote:
> Is there a specific reason you want to store this in a database? Why
> not use the local (or networked) file system and simply store the
> metadata about the PDF in the database?
>
> Cheers,
>
> Jay
Hi Jay,
Could you explain what you mean by me
On Wednesday 07 March 2007 19:28, Jay Pipes wrote:
> Ed wrote:
> > I guess a pdf file is a binary file and it will contain characters that
> > will mess things up, so my question is:
> >
Hi, sorry for the late answer. The reason, until I come up with a better one,
is that I'm doing my own basic
Ed wrote:
Hi All,
I'm trying to figure out how to put a pdf file into a blob field.
I guess a pdf file is a binary file and it will contain characters that will
mess things up, so my question is:
can it be done? Or better, how can it be done? ;)
Any pointers to documentation are a bonus!
Hi All,
I'm trying to figure out how to put a pdf file into a blob field.
I guess a pdf file is a binary file and it will contain characters that will
mess things up, so my question is:
can it be done? Or better, how can it be done? ;)
Any pointers to documentation are a bonus!
Thanks all,
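One way to do it from the mysql client, as a sketch (the table and file path are made up; LOAD_FILE() needs the FILE privilege, the file must live on and be readable by the server host, and max_allowed_packet must be at least as large as the file):

CREATE TABLE pdf_store (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
    name    VARCHAR(255) NOT NULL,
    content LONGBLOB NOT NULL,
    PRIMARY KEY (id)
);

INSERT INTO pdf_store (name, content)
VALUES ('manual.pdf', LOAD_FILE('/tmp/manual.pdf'));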
Hi all,
Is it possible to mirror a db to a different name? I have multiple dbs
with proprietary software which all use the same db name. I want to
mirror all of them to a single db for backup purposes, but I don't want
to run multiple slave mysqld instances.
Example:
db1.domain.com -
I have little information, but I suspect that you are at the performance limit
of your db server, so you have no reserve capacity left for the backup.
Send a few rows of output from 'vmstat 1', captured both before and
during the backup process.
How are these numbers (one way to pull them is sketched below):
- queries per second ?
- updates / selects rate
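The server's own counters are one way to get those last two numbers; they are cumulative since startup, so sample them twice and take the difference:

SHOW GLOBAL STATUS LIKE 'Questions';
SHOW GLOBAL STATUS LIKE 'Com_select';
SHOW GLOBAL STATUS LIKE 'Com_insert';
SHOW GLOBAL STATUS LIKE 'Com_update';
SHOW GLOBAL STATUS LIKE 'Uptime';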
On 7 Mar 2007, at 09:30, Ian P. Christian wrote:
--single-transaction
Creates a consistent snapshot by dumping all tables in a
single transaction. Works ONLY for tables stored in
storage engines which support multiversioning (currently
only InnoDB does); the dump is NOT guaranteed to be
consistent for other storage engines.
Filip Krejci wrote:
Hi,
you are right, option --single-transaction does not acquire any locks on
your InnoDB tables. The backup is fully online thanks to MVCC.
You should look for another cause of this behavior.
1/ What says 'show full processlist' when backup is running
It shows mostly inserts
Hi,
you are right, option --single-transaction does not acquire any locks on
your InnoDB tables. The backup is fully online thanks to MVCC.
You should look for another cause of this behavior.
1/ What says 'show full processlist' when backup is running
2/ What says 'show engine innodb status\G' when backu
Hi Xian,
xian liu wrote:
> ERROR 1295 (HY000): This command is not supported in the prepared statement
> protocol yet
> mysql> drop procedure ct_tb//
> Query OK, 0 rows affected (0.00 sec)
>
> the same, "drop function/trigger xxx" is also not supported in the prepared
> statement protocol.
>
> Is it a
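A sketch of what that limitation looks like in practice (assuming the original DROP was attempted through the prepared-statement interface): the same statement fails under PREPARE but works when issued directly.

PREPARE stmt FROM 'DROP PROCEDURE ct_tb';
-- ERROR 1295 (HY000): This command is not supported in the prepared statement protocol yet

DROP PROCEDURE ct_tb;
-- Query OK, 0 rows affected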
On 7 Mar 2007, at 06:39, Cabbar Duzayak wrote:
I am particularly interested in master-to-master replication (not even
sure if this is possible with MySQL) and/or real-world usage
scenarios/examples as to how much load it can handle, how reliable it
is, etc?
I've had some success with it.
I fo
Marcus Bointon wrote:
Hi Marcus :)
> On 7 Mar 2007, at 08:44, Ian P. Christian wrote:
>
> --single-transaction doesn't _do_ the dump as a transaction, it simply
> wraps the dump in begin/commit statements so it's atomic when restoring.
>
> If the dump is to preserve relational integrity then it
Hi,
--single-transaction will run the same mysqldump wrapped in a
begin/commit transaction. However, if the table is locked for the
backup, your site may be slow.
--
Praj
Ian P. Christian wrote:
Recently my one and only slave went down, and stupidly I don't have a
dump suitable
On 7 Mar 2007, at 08:44, Ian P. Christian wrote:
mysqldump --master-data --single-transaction database > dump.sql
This database I'm dumping has something like 17 million rows, all
but 1 table (which uses FULLTEXT, and only has 3-4k rows) run
InnoDB. There is only one table of any real size,
Hi
Multi-master replication is safely possible as of MySQL 5.0, which
introduced the auto_increment_increment and auto_increment_offset
variables. Before this it was possible to run into problems with auto
increment columns generating non-unique numbers between servers. Try
the following link
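A sketch of those two variables in action on a two-master setup (the values would normally also go into my.cnf so they survive a restart):

-- On master A:
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 1;   -- A hands out 1, 3, 5, ...

-- On master B:
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 2;   -- B hands out 2, 4, 6, ...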
Recently my one and only slave went down, and stupidly I don't have a
dump suitable for reseeding (if that's the right term...) the slave, so I
need to make a snapshot of the master database again. This time I'll
make sure I keep this datafile for future restores should I need to -
you live and l
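For reference, a sketch of the reseeding steps once the new dump exists (the binlog file name and position are placeholders; a dump taken with --master-data already contains the CHANGE MASTER TO line, so the explicit statement is only needed if it was written commented out with --master-data=2):

-- shell> mysql database < dump.sql     (load the snapshot on the slave first)
STOP SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE = 'mysql-bin.000123',
    MASTER_LOG_POS  = 4;
START SLAVE;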