Hello Friends,
I wrote a simple program to connect to and disconnect from the database in TC
(Windows). When compiling the program, it gives an error, unable to open
my_global.h, plus some more errors from the header files my_sys.h and
mysql.h. I copied all the header files from the mysql\in
Hi,
if the first characters are numeric, there's no need to use a regexp, since
MySQL does an implicit conversion if you do calculations:
mysql> select '10.95 tiitti' from dual;
+--------------+
| 10.95 tiitti |
+--------------+
| 10.95 tiitti |
+--------------+
1 row in set (0.09 sec)
mysql> select '10.95
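The implicit conversion can be sketched with Python's sqlite3 module standing in for MySQL: both engines coerce a string to its leading numeric prefix when it appears in an arithmetic expression.

```python
# Sketch of implicit string-to-number conversion. SQLite stands in for
# MySQL here; both take the leading numeric prefix of a string when it
# appears in an arithmetic expression.
import sqlite3

cur = sqlite3.connect(":memory:").cursor()

# The bare string is returned unchanged ...
(as_string,) = cur.execute("SELECT '10.95 tiitti'").fetchone()
# ... but adding 0 forces an implicit numeric conversion.
(as_number,) = cur.execute("SELECT '10.95 tiitti' + 0").fetchone()

print(as_string)  # 10.95 tiitti
print(as_number)  # 10.95
```

So a calculation such as `price + 0` recovers the number even when the column holds trailing junk, as long as the numeric part comes first.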
Also look at using READ UNCOMMITTED:
http://dev.mysql.com/doc/mysql/en/innodb-transaction-isolation.html
Mathias
Selon [EMAIL PROTECTED]:
> Hi,
> you're looking for the opposite of what can be done. One can select in share
> mode
> or for update :
> http://dev.mysql.com/doc/mysql/en/innodb-locking-
Hi,
you're looking for the opposite of what can be done. One can select in share mode
or for update :
http://dev.mysql.com/doc/mysql/en/innodb-locking-reads.html
this prevents data from being incoherent. If you want to skip waiting for locks,
you can create for each user a temp table containing the resu
Hi All,
Is there a way by which I can tell MySQL to ignore the rows that are
locked by someone else and take the next available record? The problem is,
I have a Query like this:
Select * from Table1 where Fld1=2 FOR UPDATE Limit 1
I will have multiple clients running this same query with t
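MySQL 4.x has no built-in way to skip locked rows, so a common workaround is to have each client atomically "claim" a row first and then read back only what it claimed. A minimal sketch of that pattern, with a hypothetical added `owner` column and SQLite standing in for MySQL:

```python
# Sketch of the "claim" workaround for skipping rows held by other
# clients: tag one unclaimed row with your client id in a single UPDATE,
# then select only the row you tagged. Table/column names are made up;
# SQLite stands in for MySQL.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Table1 (id INTEGER PRIMARY KEY, Fld1 INT, owner TEXT)")
cur.executemany("INSERT INTO Table1 (Fld1, owner) VALUES (?, NULL)", [(2,)] * 3)

def claim_next(cur, client_id):
    # The UPDATE is atomic, so two clients can never claim the same row.
    cur.execute(
        """UPDATE Table1 SET owner = ?
           WHERE id = (SELECT id FROM Table1
                       WHERE Fld1 = 2 AND owner IS NULL LIMIT 1)""",
        (client_id,),
    )
    row = cur.execute(
        "SELECT id FROM Table1 WHERE owner = ?", (client_id,)
    ).fetchone()
    return row[0] if row else None

a_row = claim_next(cur, "client-A")
b_row = claim_next(cur, "client-B")
print(a_row, b_row)  # two different rows
```

In MySQL itself the claim could be a single statement, e.g. `UPDATE Table1 SET owner = ? WHERE Fld1 = 2 AND owner IS NULL LIMIT 1`.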
Start reading here: http://dev.mysql.com/doc/mysql/en/c.html
On 5/12/05, Ashok Kumar <[EMAIL PROTECTED]> wrote:
> Hi friends,
>I'm Ashok, a new member of this forum. I'm doing my final year of graduation
> and I'm new to this MySQL, C and Windows combination (I have never worked
> with DB connectivity in C)
Hi friends,
I'm Ashok, a new member of this forum. I'm doing my final year of graduation and
I'm new to this MySQL, C and Windows combination (I have never worked with DB
connectivity in C). There are no such header files in TC, such as mysql.h and so
on. How can I include those files, and how can I create a d
Hi:
I was wondering if there's any way to limit the bandwidth used by
mysqldump when dumping data from remote hosts. Since I couldn't find any
documentation on this, I assume that mysqldump will use all the
available bandwidth of the network.
The issue is that I'm looking to fetch data to the tune o
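mysqldump has no bandwidth option of its own, but the stream it produces can be throttled by whatever reads it. A minimal sketch of a rate-limited copy (chunk size and rate are arbitrary illustrative values; an existing tool such as `pv -L` does the same job):

```python
# Sketch of a rate-limited stream copy: write at most
# limit_bytes_per_sec on average by sleeping whenever we get ahead of
# the byte budget.
import time

def throttled_copy(src, dst, limit_bytes_per_sec=1 << 20, chunk=64 * 1024):
    """Copy src to dst, sleeping as needed to stay under the rate limit."""
    start = time.monotonic()
    sent = 0
    while True:
        data = src.read(chunk)
        if not data:
            break
        dst.write(data)
        sent += len(data)
        # Time that `sent` bytes *should* have taken at the limit.
        expected = sent / limit_bytes_per_sec
        elapsed = time.monotonic() - start
        if expected > elapsed:
            time.sleep(expected - elapsed)
    return sent
```

Wired to sys.stdin.buffer and sys.stdout.buffer, this becomes a shell filter: `mysqldump -h remote db | python throttle.py > dump.sql`.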
On Thu, May 12, 2005 at 10:08:33PM +0300, Gleb Paharenko wrote:
> Hello.
>
> > Is this a known issue?
>
> It is interesting to me. According to:
>
> http://dev.mysql.com/doc/mysql/en/charset-metadata.html
>
> MySQL stores usernames in utf8. Yes, you should convert your
> tables to utf8,
Hello.
> Linux b5 2.6.11 #1 SMP Sat Apr 16 23:35:33 MDT 2005 i686 unknown
Is it a 32-bit arch?
> sort_buffer_size)*max_connections = 3659174 K
> bytes of memory
Usually the process size on 32-bit machines can't be more than 2G.
"Mark C. Stafford" <[EMAIL PROTECTED]> wrote:
> He
Hello.
> Is this a known issue?
It is interesting to me. According to:
http://dev.mysql.com/doc/mysql/en/charset-metadata.html
MySQL stores usernames in utf8. Yes, you should convert your
tables to utf8; however, in my opinion, you don't have to do
this with the 'mysql' database. C
We have been using the controllers built into the
motherboards. I know they are not as good as some
dedicated cards but they work well enough for us.
I prefer the nVidia nForce4 Ultra chipsets. They
have a nice RAID setup. We needed a cheap box for a
data server, but with a lot of temporary disk sp
> Here is a summary of how I have merged hierarchical data structures in the
> past. I start by adding a column or two to my destination data tables for
> each table in the tree I need to reconstruct. The first new column (I
> usually call it something like "old_ID") holds the original PK of the
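The quoted summary is cut off, but one plausible completion of the old_ID technique looks like this in plain Python (all names hypothetical): insert the parent rows, record which new auto_increment id each old_ID received, then rewrite the children's foreign keys through that map.

```python
# Sketch of the "old_ID" remapping technique: each source row keeps its
# original PK in old_id; after insertion assigns new ids, a map from old
# to new ids lets us rewrite the foreign keys in child rows.
# Table/column names are hypothetical.
parents = [{"old_id": 101, "name": "a"}, {"old_id": 205, "name": "b"}]
children = [{"old_parent_id": 101, "val": 1}, {"old_parent_id": 205, "val": 2}]

# Step 1: insert parents into the destination, recording the new PKs.
next_pk = 1
old_to_new = {}
for row in parents:
    row["id"] = next_pk           # what auto_increment would hand out
    old_to_new[row["old_id"]] = next_pk
    next_pk += 1

# Step 2: rewrite each child's FK through the map before inserting it.
for child in children:
    child["parent_id"] = old_to_new[child.pop("old_parent_id")]

print([c["parent_id"] for c in children])  # [1, 2]
```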
From the manual:
http://dev.mysql.com/doc/mysql/en/group-by-functions.html
"If you use a group function in a statement containing no
GROUP BY clause, it is equivalent to grouping on all rows."
Just run the same query without the GROUP BY to get the
total:
SELECT
DATE_FORMAT(date, '%Y-%m-%d') AS dat
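The manual's rule can be checked quickly with Python's sqlite3 module standing in for MySQL: the same SUM() with no GROUP BY treats every row as one group and therefore returns the grand total.

```python
# An aggregate with no GROUP BY groups over all rows, giving the total.
# SQLite stands in for MySQL; the sample data is made up.
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
cur.execute("CREATE TABLE hits (day TEXT, n INT)")
cur.executemany("INSERT INTO hits VALUES (?, ?)",
                [("2005-05-10", 30), ("2005-05-10", 10), ("2005-05-11", 60)])

per_day = cur.execute(
    "SELECT day, SUM(n) FROM hits GROUP BY day ORDER BY day").fetchall()
# Same aggregate, no GROUP BY: one group containing every row.
(total,) = cur.execute("SELECT SUM(n) FROM hits").fetchone()

print(per_day)  # [('2005-05-10', 40), ('2005-05-11', 60)]
print(total)    # 100
```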
ROLLUP is not working with the current version. Is there any way I can do this?
I have one column named time with data type datetime. I want to get a report in
yyyy-mm-dd format.
Can I get it by using date_format(nameofcolumn,'%Y-%m-%d')?
Harald Fuchs <[EMAIL PROTECTED]> wrote:
In article <[EMAIL PROTECTED]>,
Seena
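To the DATE_FORMAT question above: yes, DATE_FORMAT(nameofcolumn, '%Y-%m-%d') returns the yyyy-mm-dd form in MySQL. A quick check with SQLite, where strftime() plays the role of DATE_FORMAT (the sample timestamp is made up):

```python
# strftime() in SQLite formats a datetime the way MySQL's
# DATE_FORMAT(col, '%Y-%m-%d') does for this pattern.
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
(formatted,) = cur.execute(
    "SELECT strftime('%Y-%m-%d', '2005-05-12 10:08:33')"
).fetchone()
print(formatted)  # 2005-05-12
```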
Hi,
I wanted to clean up some numeric currency data fields which had some
non-numeric values occupying the first two characters of the field (they
were some kind of garbage characters). Anyway, the
following did the trick:
update tbl_products set p10_price=mid(p10_price,2) where p10_price regexp
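The effect of MID(p10_price, 2) (SUBSTR and MID are synonyms in MySQL) can be sketched with SQLite. The sample value is made up, and since SQLite has no built-in REGEXP, the WHERE clause below just tests for a non-digit first character instead of the original regexp:

```python
# SUBSTR(col, 2) drops the first character, mirroring MID(p10_price, 2).
# SQLite stands in for MySQL; sample data and WHERE test are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_products (p10_price TEXT)")
conn.execute("INSERT INTO tbl_products VALUES ('$10.95')")  # leading junk char
conn.execute("""UPDATE tbl_products SET p10_price = SUBSTR(p10_price, 2)
                WHERE SUBSTR(p10_price, 1, 1) NOT BETWEEN '0' AND '9'""")
(cleaned,) = conn.execute("SELECT p10_price FROM tbl_products").fetchone()
print(cleaned)  # 10.95
```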
I changed the default character set on a 4.1 server to utf8.
As expected, this caused the lengths of character fields to be
shortened, requiring alter table to be run on them to extend the
lengths.
But I didn't expect that this would also shorten the mysql system
tables (the "mysql" db), so that
"Neculai Macarie" <[EMAIL PROTECTED]> wrote on 05/12/2005 03:26:33 AM:
> > >> Not that I'm aware of. What type of conversions are you doing that you
> > >> need 30,000 user vars? An easy solution would be to try it and find out :)
> > >
> > > I need to move multiple linked entries (in around 12
Hello everyone. I've got a MySQL server that behaves
wonderfully... most of the time. It has crashed three times this week,
and I'm at a loss as to why.
Below I've included all the following:
error.log
os info
memory check
results from resolve_stack_dump
my.cnf
Does the information point to
Larry wrote:
My $.02. While I agree SCSI has had a reputation for being
a more solid enterprise-type drive, everyone's mileage varies.
We have moved to using all SATA drives in our newer servers. I
have to admit most of our databases are smaller than what many
on this list have. All our db's a
Brent,
I'd disagree with your feeling that today's disk drives are more reliable
than those of five years ago.
I used to think of disk failures as a rare event, but now that they are
producing such high capacity parts for next to nothing, I think quality has
suffered.
I've heard of a lot more people suff
Hi, I don't know if this is possible with SQL, but I'm trying to set the
row number into a field for each row.
The complexity comes when I try to do that according to some grouping
rules.
I think I'll make myself clearer with a simple example:
This is the table I have.
Column Id is the primary ke
I'd be curious what you tested. Did the SATA drives support tagged
command queueing (TCQ)? That can make a huge difference in a multi-user
environment, though it's detrimental for a single user. How many drives were in the
SATA array and how many were in the SCSI array? You could probably put
2-3x the numbe
Newer SATA drives are supporting command queueing, which should really
help their performance. I think when SATA-2 becomes more available,
SATA will start being a more viable choice and start rivaling SCSI
performance.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mys
My $.02. While I agree SCSI has had a reputation for being
a more solid enterprise-type drive, everyone's mileage varies.
We have moved to using all SATA drives in our newer
servers. I have to admit most of our databases are smaller
than what many on this list have. All our db's are under
500 me
It sounds like you should be doing the link-preservation and number-update
part in PHP or Perl.
Neculai Macarie wrote:
Not that I'm aware of. What type of conversions are you doing that you
need 30,000 user vars? An easy solution would be to try it and find out :)
I need to move multi
On Wed, May 11, 2005 at 12:29:47PM -0700, Kevin Burton wrote:
> We're kicking around using SATA drives in a software RAID0 config.
> The price diff is significant. You can also get SATA drives in 10k RPM
> form now.
Good idea, but a few points:
- 10krpm disks will run hotter than 7200rpm d
"Scott M. Grim" <[EMAIL PROTECTED]> wrote on 12/05/2005 16:42:00:
> I've fairly extensively (although not necessarily scientifically) tested
> SATA 150 vs. SCSI U320 and find that if you're doing a lot of random reads
> and writes (such as with a database server), SCSI provides nearly 5x the
I've fairly extensively (although not necessarily scientifically) tested
SATA 150 vs. SCSI U320 and find that if you're doing a lot of random reads
and writes (such as with a database server), SCSI provides nearly 5x the
performance of SATA, so for us it's well worth the additional expense.
It
Donny Simonton wrote:
With Mysql you should ONLY use RAID10. Everything else is not worth your
time.
I would argue that a large stripe (RAID0) would be a better solution for
slaves in a large replicant network. Why waste the drive space and
performance on a RAID10 when you have multiple replic
Prashant,
>I have gone through all the web but didn't find an example for the Buffer(g,d)
or Distance() functions.
>Please help out in writing these types of MySQL queries.
According to the manual, Distance(), Related() & the functions listed in
18.5.3.2 including Buffer() aren't yet implemented.
PB
---
If you are trying to set six characters of your column to '11',
you can't use SUBSTRING on the LHS of an assignment; it can only appear on the RHS:
UPDATE CSV_Upload_Data SET PRACT_ASCII =
CONCAT(SUBSTRING(PRACT_ASCII, 1, 15), '11',
SUBSTRING(PRACT_ASCII, 22))
WHERE Insertion_ID = 19071
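The splice can be checked with SQLite, where || plays the role of CONCAT: keep positions 1-15, write '11', and resume at position 22, replacing the six characters at positions 16-21. The sample value is made up for illustration.

```python
# SUBSTR(v, 1, 15) || '11' || SUBSTR(v, 22) overwrites positions 16-21
# with '11'. SQLite stands in for MySQL (|| instead of CONCAT).
import sqlite3

cur = sqlite3.connect(":memory:").cursor()
value = "AAAAAAAAAAAAAAAXXXXXXBBBB"  # 15 A's, 6 X's, 4 B's
(spliced,) = cur.execute(
    "SELECT SUBSTR(?, 1, 15) || '11' || SUBSTR(?, 22)", (value, value)
).fetchone()
print(spliced)  # AAAAAAAAAAAAAAA11BBBB
```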
Hi,
I am getting an error on the following query but can't understand why;
the syntax looks fine to me!
mysql> UPDATE CSV_Upload_Data SET SUBSTRING(PRACT_ASCII, 16, 6) = '11'
WHERE Insertion_ID = 190716;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that cor
On Wed, 11 May 2005 15:08:50 +0200, wrote:
>Selon Jay Blanchard <[EMAIL PROTECTED]>:
>
>> [snip]
>> I've got a MySQL database converted from an Excel spreadsheet, which has
>> mixed case column names and
With advice from this thread, what I ended up doing was
>show create table tbl_products;
T
Hi,
First of all, thanks to everyone who provided pointers on this matter.
The route I chose was to create two tables. One is for cumulative
network stats; this table can be used for the weekly, monthly and yearly
reports. I also created a table for daily stats, which will be dropped
at midnight ea
hi,
I have created/imported a table into MySQL through the mysqlgisimport libraries,
and it has created a table with the following field/table structure:
Field name  Type              Allow nulls?  Key      Default value  Extras
id          int(10) unsigned  No            Primary  NULL           auto_increment
NAME        varchar(100)      No            N
Hello.
It depends on the table's type. For MyISAM see:
http://dev.mysql.com/doc/mysql/en/optimize-table.html
For InnoDB:
http://dev.mysql.com/doc/mysql/en/innodb-file-defragmenting.html
Seena Blace <[EMAIL PROTECTED]> wrote:
Hello.
Older versions of MySQL have very limited support for localization
and character sets. In general you may assign only one global server
character set, and other objects in the database don't have their own
character sets.
Mike Blezien <[EMAIL PROTECTED]> wrote:
>
> Unfortunately,
What type of table is this, MyISAM or InnoDB?
What were your system variable settings when you issued the CREATE INDEX
command?
If this is a MyISAM table, then MySQL will spend time re-creating the data
file first, before creating ALL of the indexes, including the new one.
If you already have indexe
mel list_php wrote:
> Hi guys,
>
> I was trying to download the mysqlxml patch for mysql 5.0 but didn't
> succeed from the url:
> http://d.udm.net/bar/myxml/mysqlxml.tar.g
>
> does anybody know where I could find it?
> Did anybody try to use it, or have any link to a doc/tutorial in
> addition
In article <[EMAIL PROTECTED]>,
Seena Blace <[EMAIL PROTECTED]> writes:
> Hi,
> I want a report like this:
> date     process  pending  wip
> 5/10/05  10       30       40
> 5/11/05  09       28       60
> ---
Hey, all.
Hardware: Itanium 2 with 4 x 1.3 GHz CPUs and 8 GB RAM.
OS: Red Hat Enterprise Advanced Server 3.0
MySQL: mysql-standard-4.0.24-gnu-ia64-glibc23
I created an index on a large table with more than 100,000,000 records using
the following command:
mysql> create index scn_ra on twomass_scn (ra);
It
Hi guys,
I was trying to download the mysqlxml patch for MySQL 5.0 but didn't succeed
from the URL:
http://d.udm.net/bar/myxml/mysqlxml.tar.g
Does anybody know where I could find it?
Did anybody try to use it, or have any link to a doc/tutorial in addition
to the presentation of Alexander Bark
> >> Not that I'm aware of. What type of conversions are you doing that you
> >> need 30,000 user vars? An easy solution would be to try it and find out :)
> >
> > I need to move multiple linked entries (in around 12 tables) from one
> > running server to another. I'm using auto_increment's all over
Seena,
>How to check which tables have how many
>indexes on which columns?
http://dev.mysql.com/doc/mysql/en/show-index.html
and alternatively, if you're using 5.0.3 or later, http://dev.mysql.com/doc/mysql/en/statistics-table.html
e.g. an approximate equivalent of SHOW KEYS FROM tbl