On 05.10.2014 at 22:39, Jan Steinman wrote:
I've had good experiences moving MyISAM files that way, but bad experience
moving INNODB files. I suspect the latter are more aggressively cached
Simply no, no and no again.
Independent of "innodb_file_per_table = 1" there is *always* a global
tablespace
On 05.10.2014 at 21:29, Tim Johnson wrote:
> So, this is a "Help me before I hurt myself" sort of question: Are
> there any caveats and gotchas to consider?
Do you know if the database was shut down properly? Or did Ubunto crash and die
and your partition become unbootable while the database was in active use?
Either way, you need to mak
I have a dual-boot OS X/Ubuntu 12.04 arrangement on a mac mini. The
ubuntu system has failed and I am unable to boot it.
I have one database on the ubuntu partition that was not backed up.
I am able to mount the ubuntu partion with fuse-ext2 from Mac OS X,
thus I can read and copy the mysql dat
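For the copy itself, the safest route is a cold copy of the entire MySQL data directory rather than individual table files, since InnoDB spreads its state across the shared ibdata file and the redo logs. A sketch with stand-in paths (the real source would be the fuse-ext2 mount, the real target the OS X datadir):

```shell
# Stand-in source and target trees; the real paths would be the
# fuse-ext2 mount point and the OS X MySQL data directory.
mkdir -p src/var/lib/mysql/mydb dst/data
touch src/var/lib/mysql/ibdata1 \
      src/var/lib/mysql/ib_logfile0 \
      src/var/lib/mysql/mydb/table.frm

# Copy the datadir as a unit: ibdata*, ib_logfile* and the per-database
# directories must stay together for InnoDB to recover cleanly.
cp -a src/var/lib/mysql/. dst/data/
```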
Hello,
Well, you have just invented what is known as index organized tables. The
MyISAM engine does not implement those.
If it did, it would have to deal with quite a few circumstances unique to IOTs.
One such circumstance is degradation
of efficiency with the increase of record length,
MyISAM can't do this but innodb can. If you change to an innodb table
and define your index as the primary key then row data is clustered
with the primary key. This means there is no additional storage
overhead for the primary key because it is just the row data. This
will break down if you define
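A minimal sketch of the suggestion above (table and column names are invented for illustration):

```sql
-- Every column is part of the primary key, so the clustered index *is*
-- the row: there is no separate data heap, which is the effect of an
-- index-organized table.
CREATE TABLE word_index (
    word   VARCHAR(64)  NOT NULL,
    doc_id INT UNSIGNED NOT NULL,
    PRIMARY KEY (word, doc_id)
) ENGINE = InnoDB;
```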
On 24.11.2012 22:02, Hank wrote:
> Hello everyone,
>
> I know this is a longshot, but is there any way to eliminate the MYD
> file for a table that has a full covering index? The index is larger
> than the datafile, since it contains all the records in the datafile,
> plus a second reverse in
- Original Message -
> From: "Yu Watanabe"
>
> So, which memory corresponds to 'pages' for the MyISAM then?
> It would be helpful if you can help me with this.
None, as Reindl said. This is not a memory issue, it's a function of I/O
optimization. Records are stored in pages, and pages a
Hello Johan.
Thank you for the reply.
I see. So it will depend on the key buffer size.
Thanks,
Yu
- Original Message -
> From: "Yu Watanabe"
>
> It seems that MYD is the data file but this file size seems to be not
> increasing after the insert sql.
That's right, it's an L-space based engine; all the data that has, is and will
ever be created is already in th
Hi !
I would like to ask question regarding to the MyISAM engine.
Is there any physical file whose size you have to be aware of for disk
sizing, like the ibdata1 in the InnoDB storage engine?
It seems that MYD is the data file but this file size seems to be not
increasing after the insert sql
I made a line-break mistake in my previous message; this is the fix:
$ mysqldump --user=root --password=password --host=mybox mydatabase --default-character-set=latin1 > mydatabase.latin1.sql
$ mysqldump --user=root --password=password --host=mybox mydatabase --default-character-set=latin1 > mydata
Hi,
This is my script to convert a latin1 database to utf8:
$ mysqldump --user=root --password=password --host=mybox mydatabase --default-character-set=latin1 > mydatabase.latin1.sql
$ mysqldump --user=root --password=password --host=mybox mydatabase --default-character-set=latin1 > mydatabase.lati
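After the latin1 dump, the usual next step (a sketch of the general approach, not taken from this thread; the sed pattern assumes the dump contains a `SET NAMES latin1` line) is to recode the file and fix its charset declaration before reloading it:

```shell
# A stand-in latin1 dump: a SET NAMES line plus one row containing
# the byte 0xE9 ('é' in latin1).
printf "SET NAMES latin1;\nINSERT INTO t VALUES ('caf\351');\n" > mydatabase.latin1.sql

# Recode the bytes to UTF-8 and update the charset declaration.
iconv -f ISO-8859-1 -t UTF-8 mydatabase.latin1.sql \
  | sed 's/SET NAMES latin1/SET NAMES utf8/' > mydatabase.utf8.sql
```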
Can you please show us the content of the test.csv file. Also, is "company
name" a single column or two different columns?
If it's two different columns, try this:
LOAD DATA INFILE '/foo/test.csv' INTO TABLE abc.test FIELDS TERMINATED BY ','
(company, name);
Hi,
I've got an issue with doing a LOAD DATA INFILE command.
My test table has a column named "company name". I'm trying to figure out
how to use the LOAD DATA INFILE command to be able to extract the "company name"
col...
When I do:
load data file '/foo/tes
Hello,
I have a MySQL data bcp file (created using SELECT ... INTO OUTFILE) from a table
in a MySQL database. I have a similar table in a Sybase database.
I need to bcp the MySQL data file into Sybase. Can anyone help me on
how to go about this?
Thanks in advance,
Abdul
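A hedged sketch of the Sybase side (server, login, and file names are placeholders; -c selects character mode so the tab-separated SELECT ... INTO OUTFILE output can be read directly):

```
bcp mydatabase..mytable in mysql_data.txt -c -t '\t' -U username -P password -S SYBASE_SERVER
```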
Mohammed,
http://dev.mysql.com/doc/refman/5.0/en/adding-and-removing.html
"
If your last data file was defined with the keyword autoextend, the
procedure for reconfiguring the tablespace must take into account the size
to which the last data file has grown. Obtain the size of the data
1. When I do an ls -ltr ibdata1, I get the size to be 463470592 bytes.
2. I edited my /etc/my.cnf to add the following:
innodb_data_file_path = /mysql-system/mysql/data/ibdata1:443M;/mysql-
system2/ibdata2:50M:autoextend
i got the following error:
060330 01:48:42 mysqld started
InnoDB: Error: data
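A likely cause, inferred from the numbers above rather than stated in the thread: 463470592 bytes is exactly 442 MiB (463470592 / 1048576 = 442), and InnoDB requires the declared size of an already-existing data file to match it exactly, so declaring ibdata1 as 443M would produce precisely this kind of error. The corrected line would be:

```
innodb_data_file_path = /mysql-system/mysql/data/ibdata1:442M;/mysql-system2/ibdata2:50M:autoextend
```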
>Now, I have another problem:
>ERROR 13 (HY000): Can't get stat of
'/users/lolajl/documents/development/knitlib/datafiles/standardwttype.txt'
>(Errcode: 13)
That's a permissions problem. Is the dir 755?
PB
Lola J. Lee Beno wrote:
ERROR 13 (HY000): Can't get stat of
'/users/lolajl/documents/development/knitlib/datafiles/standardwttype.txt'
(Errcode: 13)
Never mind . . . I figured that I needed to add LOCAL to the query.
Should have gone back to the manual page for LOAD DATA.
Peter Brawley wrote:
> > ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'standardwttype.txt`
> Use single quotes not (dreaded) backticks.
This seems to have fixed one problem. Now, I ha
Lola,
> I keep getting an error message:
> ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'standardwttype.txt`
Use single quotes not (dreaded) backticks.
PB
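Putting the thread's fixes together, the statement with single quotes (and LOCAL, which the poster later found necessary for a client-side file) would read:

```sql
LOAD DATA LOCAL INFILE 'standardwttype.txt'
INTO TABLE StandardWeightType
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\r'
(standard_wt_type_id, standard_wt_desc, standard_wt_lud);
```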
I'm trying to import a set of data into a database (MySQL 5.0.17-max).
Here's the query that I tried to run:
LOAD DATA INFILE `standardwttype.txt`
INTO TABLE StandardWeightType
FIELDS TERMINATED BY `\t`
LINES TERMINATED BY `\r`
(standard_wt_type_id, standard_wt_desc, standard_wt_lud);
And here'
InnoDB: Error: tablespace size stored in header is 877184 pages, but
InnoDB: the sum of data file sizes is 953856 pages
And Mr. Heikki tell me to do these steps:
(953856 - 877184) / 64 = 1198 MB
1) Stop the mysqld server.
2) Add a new 1198M ibdata file at the end of innodb_data_file_path.
3) When you st
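In my.cnf terms, step 2 amounts to appending one more fixed-size file to innodb_data_file_path (the name ibdata2 and the existing entry are placeholders; only the 1198M figure comes from the thread):

```
innodb_data_file_path = ibdata1:<existing size>M;ibdata2:1198M
```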
Dear All,
As per the subject, I have actually run into this case.
When I see:
InnoDB: Error: tablespace size stored in header is 877184 pages, but
InnoDB: the sum of data file sizes is 953856 pages
And Mr. Heikki tell me to do these steps:
(953856 - 877184) / 64 = 1198 MB
1) Stop the m
I am moving a table from a 3.23.56 db to a 4.1.7 db. I am currently only testing
to see if I can. So far, I have been able to create the receiving table, but
have not been able to insert the data. (The only difference I see when using
phpMyAdmin is the collation column on the 4.1.7 server)
When I try
Asked for 1048576 thread stack, but got 126976
InnoDB: Error: auto-extending data file /data/mysql_4.1_ibdata/ibdata1 is of a
different size
InnoDB: 779008 pages (rounded down to MB) than specified in the .cnf file:
InnoDB: initial 32000 pages, max 128000 (relevant if non-zero) pages!
InnoDB: Could not open or
This example is from the manual:
innodb_data_file_path=ibdata1:10M:autoextend:max:500M
My question is, what happens when ibdata1 extends and
hits 500M? If that is the only data file configured, will
MySQL crash ?
thanks,
Mayuran
--
MySQL General Mailing List
For list archives: http
You can change the size of InnoDB data and log files in the my.cnf file.
But be careful. Use the correct way.
http://www.innodb.com/ibman.php#Adding_and_removing
Mikhail.
- Original Message -
From: "Asif Iqbal" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, January 23, 2004
logs:
-bin.001
...
-bin.00N
Best regards,
Mikhail.
- Original Message -
From: "Gregory Newby" <[EMAIL PROTECTED]>
To: "Asif Iqbal" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, January 22, 2004 11:32 PM
Subject: Re: data file too big
>
I believe that this will flush those logs:
mysql> reset master;
-- Greg
On Thu, Jan 22, 2004 at 05:23:07PM -0500, Asif Iqbal wrote:
Hi All
My data file has all these files
(root)@webrt:/usr/local/mysql/data# du -sh *
25K   ib_arch_log_00
3.0K  ib_arch_log_02
3.0K  ib_arch_log_04
101M  ib_logfile0
101M  ib_logfile1
1.9G  ibdata1
1.5G  ibdata2
2.0K  my.cnf
70K   mysql
2.0K  newdb
39M
My ibdata1 file is growing. Do I need to worry about that? I am guessing it
will automatically delete old data to fit the size, correct? Also, how do I
limit the growth size? I tried to put max:2000M, but since my InnoDB is
crashing and I can't restart mysql, I removed the whole my.cnf file altogether.
Another thing I noticed is that it can't create the pid file. However, it has no
problem creating the err file. Would you know why that is?
> case something goes wrong.
>
> Regards,
>
> Heikki
>
> .
>
> Subject: Re: innodb data file of di
just in case)
>(use mysqldump)
done
> - Perform a complete dump of your InnoDB tables
>(use mysqldump)
how do I do this ? this is where I am stuck
> - Remove your InnoDB tables
> - Shut down the server
> - Remove the default InnoDB data file and log files (these will
>
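To answer the "how do I do this?" above: a complete dump of all databases can be taken with mysqldump, for example (user name and output file are assumptions):

```
mysqldump --user=root --password --all-databases > all_databases.sql
```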
Asif,
since you did not have any my.cnf, ibdata1 is an auto-extending data file
(initially 10 MB) in the datadir of MySQL. And there are two 5 MB
ib_logfiles in the datadir.
"
If you specify the last data file with the autoextend option, InnoDB will
extend the last data file if it runs o
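If you then want my.cnf to describe that default installation explicitly, the matching line (per the manual's example quoted elsewhere in this thread) is:

```
innodb_data_file_path = ibdata1:10M:autoextend
```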
I just decided to use my.cnf and bumped into this error message
030729 22:04:22 mysqld started
InnoDB: Error: data file /usr/local/mysql/data/ibdata1 is of a different size
InnoDB: 81024 pages (rounded down to MB)
InnoDB: than specified in the .cnf file 16384 pages!
InnoDB: Could not open data
Yes there is, that's what Paul was referring to regarding InnoDB...it has a
table space made up of multiple files on the disk and the tables reside
within the tablespace. Thus the tables are not bound by the file system's
maximum file size.
Details are in the MySQL manual in the table types secti
Hi,
Thanks for the fast response. I wonder whether there is a
'tablespace' notion in MySQL just like the one in
Oracle. One can keep adding datafiles from different
disks to the same tablespace and not have to worry
about how the data is stored in the files.
Thanks again.
--- Paul DuBois <[EMAIL PROTECTED]
At 18:05 -0500 6/6/03, Paul DuBois wrote:
At 15:59 -0700 6/6/03, Titu Kim wrote:
2. How can i add another file to a table if the .MYD
file grows too large?
Once the file size reaches its maximum, that's as far as you can go.
I should add to this that one way to obtain an effective larger
"file" siz
At 15:59 -0700 6/6/03, Titu Kim wrote:
Hi I am new to Mysql. I have the following newbie
questions.
1. How can i find/set the file size of for my .MYI and
.MYD file?
You can find the sizes using ls on Unix or dir on Windows.
You don't set the sizes. Let the server manage the files.
2. How can i ad
Hi, I am new to MySQL. I have the following newbie
questions.
1. How can i find/set the file size of for my .MYI and
.MYD file?
2. How can i add another file to a table if the .MYD
file grows too large?
3. How to configure mysql client to access two mysql
database on two machines with each datab
I have a strange problem with LOAD DATA INFILE:
$file = "base.txt";
$conn = mysql_connect('localhost', 'user', 'paswd');
mysql_select_db('realestate', $conn);
$query = "LOAD DATA INFILE '" . $file . "' REPLACE INTO TABLE
table_name FIELDS TERMINATED BY '\t' LINES TERMINATED
BY '\r\n'";
mysql_query($query, $conn);
I'm waiting for a unix pipe to be populated when I get the following
message from MySQL:
ERROR 2013 at line 1 in file: 'script_namel': Lost connection to MySQL server during
query
The script in question has one statement in it:
load data local infile 'pipe_name'..
This only happens if the
At 9:39 -0600 11/8/02, Ray wrote:
is there a way to have mysql skip columns from the data file when loading the
data?
No. You'll be better off preprocessing the file first (or use your
temp file approach; that'll work, too).
Is there a way to have MySQL skip columns from the data file when loading the
data?
I am getting a text-delimited data file from an outside source that is
updated daily. However, the data is defined as having 10 blank fields in it,
and in the first 9000 records 7 more fields are just not used
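Paul's preprocessing suggestion can be sketched with awk; the field positions and file names below are invented for illustration:

```shell
# A stand-in daily feed: 11 tab-separated fields, most of them unused.
printf 'Acme\tx\t12 Main St\tx\tx\tx\tx\tx\tx\tx\tAK\n' > daily_feed.txt

# Keep only the fields the table actually needs (here 1, 3 and 11),
# writing the trimmed file that LOAD DATA INFILE will read.
awk -F'\t' 'BEGIN { OFS = FS } { print $1, $3, $11 }' daily_feed.txt > trimmed.txt
```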
I am trying to set up two data files in case the first one fills up. I
tried to use the following in my.cnf but it says there is an error. If
I take out the reference to the second data file, everything works.
AIX 4.3.3
MySQL 4.0.4
innodb_data_file_path=libdata1:100M:autoextend:max:2000M
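For the record, the manual's syntax for two data files separates the entries with a semicolon, and autoextend/max may only be given for the last file; a sketch using the poster's sizes (the second file name is an assumption):

```
innodb_data_file_path = libdata1:100M;libdata2:100M:autoextend:max:2000M
```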
table where one field is TEXT type. I put a few records into
the table (never more than 3 records!), then ran a check on it, and it
was fine. Ran a repair, that was fine. Ran an "extended" repair, and
got a few "info" messages such as "Found block that points outside data
f
Hi,
I have a table that has now reached 4GB, but I thought that shouldn't be a
problem since the table is running with RAID. I noticed that the Datafile
length has reached the Max Datafile length, and I tried to increase it with
myisamchk -r --data-file-length=8589934588, with no success. Do I
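Another route past the 4GB MyISAM limit, stated here as a general fact rather than something from this thread, is to enlarge the table's row pointers with ALTER TABLE (table name and figures are illustrative):

```sql
ALTER TABLE big_table MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 100;
```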
Date: 04 Dec 2001 14:14:06 +0800
Subject: Loading data file with MacRoman Character Encoding
Hi All,
I am running MySQL version 3.23.42 on Mac OS X version 10.1.
I need to upload a few files into MySQL which are in MacRoman text encoding. My
default installation does not support the MacRoman encoding.
Currently I need to convert the files into ISO-8859-1 encoding before I can
run the LOAD D
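The pre-conversion step can be done with iconv where available (file names are illustrative; MACINTOSH is GNU iconv's name for the MacRoman encoding):

```shell
# A stand-in MacRoman file: 'caf' plus byte 0x8E, which is 'é'
# in MacRoman.
printf 'caf\216\n' > listings.macroman.txt

# Convert to ISO-8859-1 (where 'é' is 0xE9) before LOAD DATA INFILE.
iconv -f MACINTOSH -t ISO-8859-1 listings.macroman.txt > listings.latin1.txt
```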
"One way to do this is to use SELECT ... INTO OUTFILE into another file and
compare this to your original input file."
> -Original Message-
> From: Calvin Chin [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, October 28, 2001 9:08 PM
> To: [EMAIL PROTECTED]
> Subject: Help O
Hi list members,
I have a slight problem here. I am testing the data conversion from a
text file into a MySQL table.
I am able to use the 'LOAD DATA INFILE' command and insert the data into
the table, however, with 1000 warnings. I don't know where I can see the
warning messages?
Can you peop
I have a query I'm sending to the DB...however, instead of having the server
send me the data back, I want to put it into a CSV file for use in an Excel
Spreadsheet. This means either tab-delimited, or separated by commas. Can
someone help?
Thanks,
--
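A sketch of the usual answer, with table and path invented for illustration (note that the server itself writes the file, so the path must be writable by the server):

```sql
SELECT * FROM orders
INTO OUTFILE '/tmp/orders.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```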
Hi,
I've a table which is read very often.
Simplified table structure:
create table comments (
  id int unsigned not null auto_increment primary key,
  date date not null,
  comment varchar(255) not null