Re: SQL_NO_CACHE

2009-03-04 Thread Micah Stevens
Keep in mind that the file system caches too, so SQL_NO_CACHE might be
working while the file access is still being served from memory. You should
disable that as well if you really want consistent results.

On Linux you can drop the file system cache between each query; I have
no idea how to do it on Windows or other systems, although other Unix
variants are probably similar.
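
For completeness, a sketch of the server-side checks that take MySQL's own
query cache out of the picture (standard statements for the 5.x query cache;
dropping the OS file cache is a separate, OS-level step):

-- Is the query cache even enabled, and how big is it?
SHOW VARIABLES LIKE 'query_cache%';
-- Bypass it for this connection only:
SET SESSION query_cache_type = OFF;
-- Empty it entirely (needs the RELOAD privilege):
RESET QUERY CACHE;
-- See whether a statement was served from the cache:
SHOW STATUS LIKE 'Qcache_hits';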

-Micah

On 03/04/2009 07:27 AM, Morten wrote:

 Hi, I was hoping that using SQL_NO_CACHE would help me bypass the
 query cache, but judging from the below it doesn't. What can I do to
 avoid the query cache?

 Thanks.

 Morten


 mysql> select count(*) from users where email = 'hello';
 +----------+
 | count(*) |
 +----------+
 |        0 |
 +----------+
 1 row in set (7.22 sec)

 mysql> select count(*) from users where email = 'hello';
 +----------+
 | count(*) |
 +----------+
 |        0 |
 +----------+
 1 row in set (0.45 sec)

 mysql> select count(*) from users where email = 'hello';
 +----------+
 | count(*) |
 +----------+
 |        0 |
 +----------+
 1 row in set (0.45 sec)

 mysql> select SQL_NO_CACHE count(*) from users where email = 'hello';
 +----------+
 | count(*) |
 +----------+
 |        0 |
 +----------+
 1 row in set (0.43 sec)







Help with a query.

2009-01-27 Thread Micah Stevens

Hi,

I'm somewhat stumped by how to set up a single query that does the 
following. Currently I'm accomplishing this through multiple queries and 
some PHP 'glue' logic, but it should be possible in a single query I 
think, and it's bugging me that I can't figure it out. If anyone has any 
ideas, I'd appreciate it. Here's the situation:


CREATE TABLE `a` (
a_id int(11) AUTO_INCREMENT PRIMARY KEY,
b_id int(11),
a_date datetime
);

CREATE TABLE `b` (
b_id int(11) AUTO_INCREMENT PRIMARY KEY,
b_data varchar(128)
);

CREATE TABLE `c` (
c_id int(11) AUTO_INCREMENT PRIMARY KEY,
b_id int(11),
c_date datetime
);

Each item in `a` has a 1-to-1 relationship to `b`, and each item in `c` 
has a 1-to-1 relationship with `b`. Sometimes these correspond, i.e. 
there's a row in `b` that relates to both `a` and `c`, but not always.


What I'm doing is looking for a sum of data from `a` and `c` for a 
particular date range that shows its corresponding b.b_data row. Each 
row should contain:


count(a_id), b_id, b_data, count(c_id)

and if there's no corresponding data, the columns should be null. Like a 
three-way left join to table `b`, sort of: include all rows of `b`, and if 
there's no corresponding data for `a` or `c`, just fill the columns with nulls.


I can get queries to execute like this:

select count(a_id), b_id, b_data, count(c_id)
from a
right join b using (b_id)
left join c using (b_id)
group by b_id

but it gives different results than when I do two inner joins (one for a-b 
and one for b-c) and combine the data.


Any thoughts?

Thank you!






Re: Help with a query.

2009-01-27 Thread Micah Stevens

Peter Brawley wrote:

Micah,

each item in `a` has a 1 to 1 relationship to `b`,
and each item in `c` has a 1 to 1 relationship with `b`.
Sometimes these correspond, i.e. there's a row in `b`
that relates to both `a` and `c`, but not always.

So in a given b row, the b_id value might match an a.a_id, a c.a_id, 
or both? Whatever the purpose of this ambiguity, it seems to undermine 
your query objective.


PB

-

This is for a click through system, in this example, 'a' would record 
click throughs, and 'c' would record follow ups. B is the lead 
information. Sometimes there's no click through to a lead contact, and 
sometimes there's a click through with no follow up. Sometimes both.
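
For what it's worth, a sketch of a single query along these lines, using the
table layout from the original post (COUNT(DISTINCT ...) keeps one lead's
several click-throughs and several follow-ups from multiplying each other;
leads with no activity come back with counts of 0 rather than NULL):

SELECT b.b_id, b.b_data,
       COUNT(DISTINCT a.a_id) AS clickthroughs,
       COUNT(DISTINCT c.c_id) AS followups
FROM b
LEFT JOIN a ON a.b_id = b.b_id
LEFT JOIN c ON c.b_id = b.b_id
GROUP BY b.b_id, b.b_data;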


Does that help?

-Micah




Re: Changing port no of the server using command line method

2008-12-30 Thread Micah Stevens
On 12/30/2008 11:57 AM, Manish Sinha wrote:
 lists-mysql wrote:
 in a *nix environment, restarting the mysql server is done with a
 system-level command and requires *system* root privileges, not
 something that the average db-admin is likely to have. also,
 changing the port a service is listening on has potentially serious
 implications as you have to understand what (other) services are
 assigned to, and potentially listening on, which ports, again,
 generally beyond the knowledge level of the average db-admin.

 what are you trying to accomplish with this capability? except for a
 few very limited reasons (e.g., running multiple mysql instances)
 there really isn't a valid reason to change the port a mysql
 instance is listening on in the first place.
   
 I can understand the point. I have used MySQL for two years and can
 understand almost all of the options in the config file, and it's the
 only DB software I have put all my effort into. I have a good and
 sound understanding of MySQL.
 I can understand that management requires root privileges, so my app
 would run as root itself. I even understand that it's not required, but
 this is the only thing which I was unable to achieve via my app, since
 I don't want to read and change the my.cnf file itself. Any mistake
 could cost more than expected. Additionally, parsing that file to get
 the port number can be problematic since there are two instances of
 port = 3306
 one for the client and the other for the server. By mistake I could change
 the wrong one if not careful.
 I can understand the security issues, but I am actually trying to make
 an app which can administer MySQL, and would later add a GUI over it.

If you want to control the server process, you'll need to start it with
those options. The server binary is mysqld, and its command line
options are documented here:
http://dev.mysql.com/doc/refman/5.1/en/server-options.html

For your purposes, though, I would absolutely parse and use the config
file instead, because operating directly on the server daemon will
likely conflict with a lot of distribution-specific shell
scripts (/etc/init.d/mysql for example) - and any configuration done
that way will go away on the next restart of the server.

The MySQL config file is very standardized and would not be hard to deal
with.
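
As an aside, if the app only needs to read the current settings rather than
change them, it can ask a running server directly instead of parsing anything;
a small sketch:

-- Which TCP port is the server listening on?
SHOW VARIABLES LIKE 'port';
-- Other settings can be read the same way, e.g. the socket path:
SHOW VARIABLES LIKE 'socket';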

Please don't take this the wrong way, I hate to discourage anyone, but
if you're worried about dealing successfully with the config file, I'd
practice up on my text processing before jumping into server admin
items. I only say this because in the long run I really believe it will
be much easier for you, and a better strategy based on what little I
understand of your goals. I don't know what language you're using, but
there are lots of libs available for Perl and Python to do this easily.

I hope this helps.

-Micah




Re: Intersect question

2008-12-01 Thread Micah Stevens
On 12/01/2008 08:30 AM, Andrej Kastrin wrote:
 I have the table 'test' which includes two columns: 'study' and 'symbol':

 study symbol
 a2008 A
 a2008 B
 a2008 C
 a2008 D
 b2005 A
 b2005 B
 b2005 E


 The task is to perform an intersection on the 'symbol' column according to
 all distinct values in the 'study' column. During the experiments the
 intersection was done 'manually' using the query:

 SELECT a.symbol FROM test as a, test as b WHERE a.symbol=b.symbol and
 a.study='a2008' and b.study='b2005';

 So the result of the query above is (A, B).

 The question is how to implement this query more automatically,
 without directly referencing the study names, because I want to
 use it from a PHP script.

 Thank you in advance for any suggestions.

 Best, Andrej

Why not:

SELECT a.symbol FROM test as a, test as b WHERE a.symbol=b.symbol and
a.study != b.study group by symbol;
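
If there can be more than two distinct studies, a sketch of a more general
form that returns only the symbols present in every study (needs MySQL 4.1+
for the subquery):

SELECT symbol
FROM test
GROUP BY symbol
HAVING COUNT(DISTINCT study) = (SELECT COUNT(DISTINCT study) FROM test);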


-Micah




Re: Displaying information from table graphically

2008-11-22 Thread Micah Stevens
On 11/21/2008 07:55 AM, David Giragosian wrote:
 On 11/21/08, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
   
 I have a PHP application that accesses data from MySQL. There is a table
 called rooms, and a table called beds. There is another table called
 patients. Patients are placed into beds, and beds are in rooms.
 The PHP application currently displays all information in textual mode
 via regular HTML tags, but I would like to have that information displayed
 in graphical mode instead of textual mode.

 Is there a way to display this information from the database graphically?
 A graphic would represent a room, and it would contain the beds inside. You
 would be able to see visually which beds are occupied and which are free
 by looking at the graphics.

 The user of the system wants pictures instead of text displayed via HTML
 tables as a list of entries.

 Does anyone know of anything like this?
 Thanks,
 Dzenan
 


 PHP has the GD library with a fairly extensive set of functions for creating
 images http://us2.php.net/gd.

 You can grab data from MySQL and then use the GD functions to create images
 dynamically. It can be tedious, as you create the image pixel by pixel, but
 the results are very good.

 David

   
Or, for reduced CPU overhead, just make some images for beds and rooms
and use tables or positioned DIV tags to place them appropriately
based on the database information.
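
A sketch of the query side of that approach; the table and column names here
are assumptions, not the poster's actual schema:

SELECT r.room_id, b.bed_id,
       p.patient_id IS NOT NULL AS occupied
FROM rooms r
JOIN beds b ON b.room_id = r.room_id
LEFT JOIN patients p ON p.bed_id = b.bed_id
ORDER BY r.room_id, b.bed_id;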

-Micah


Re: Displaying information from table graphically

2008-11-22 Thread Micah Stevens
On 11/22/2008 04:30 PM, Jujitsu Lizard wrote:
 Summary:  (a) Both approaches are quite good, and (b) the CPU efficiency
 argument for stuffing prepared images into a table or similar may be weak.

 The Lizard

   
By using pre-drawn images and HTML img tags you distribute the
processing to the client side; doing this certainly reduces server load,
because you're only sending text and thereby eliminating virtually all of
the image-generation overhead. I think this almost by definition means a
significantly lower overhead.

Granted, in a one-shot call the difference will be minuscule, but in a
production web server environment with many client calls you want to
distribute this as much as possible to reduce server overhead, and I
think anyone here who has done GD processing in a production
environment will tell you that using it isn't free by any stretch of the
imagination.

-Micah




Re: Displaying information from table graphically

2008-11-22 Thread Micah Stevens
On 11/22/2008 07:14 PM, Micah Stevens wrote:
 On 11/22/2008 04:30 PM, Jujitsu Lizard wrote:
   
 Summary:  (a) Both approaches are quite good, and (b) the CPU efficiency
 argument for stuffing prepared images into a table or similar may be weak.

 The Lizard

   
 
 By using pre-drawn images and HTML img tags you distribute the
 processing to the client side; doing this certainly reduces server load,
 because you're only sending text and thereby eliminating virtually all of
 the image-generation overhead. I think this almost by definition means a
 significantly lower overhead.

 Granted, in a one-shot call the difference will be minuscule, but in a
 production web server environment with many client calls you want to
 distribute this as much as possible to reduce server overhead, and I
 think anyone here who has done GD processing in a production
 environment will tell you that using it isn't free by any stretch of the
 imagination.

 -Micah

   
I failed to mention that avoiding GD lib calls also
simplifies development greatly. A paint program and HTML beats learning
a new library any day. However, if this is a learning exercise, which it
sounds like it might be, the variables all change.

-Micah


Re: Overhead Issue

2008-11-17 Thread Micah Stevens
Deleted rows.

On 11/17/2008 04:56 PM, sangprabv wrote:
 Hi,
 I just want to know what things cause table/db overhead, because my
 tables always get an overhead problem and I must run an OPTIMIZE query
 every morning. Is there any other solution? TIA.


 Willy


   




Re: Overhead Issue

2008-11-17 Thread Micah Stevens
There is little performance hit due to this. It would only start to
affect performance when the overhead grows to the point
that it is a significant percentage of the total table size. Perhaps
someone else can chime in here with real numbers, but I'd say it'd have to
be 10-20% of your table size before it would be a problem, and more so
on smaller tables.

It's just stuff that the database engine has to work around. Remember
it's referring to data overhead, not CPU overhead.

If this grows quickly in your system, just automate a nightly or hourly
cleaning.
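
A sketch of checking how much reclaimable space (overhead) a table is
carrying, and reclaiming it ('mytable' is a placeholder):

SHOW TABLE STATUS LIKE 'mytable';   -- the Data_free column is the overhead
OPTIMIZE TABLE mytable;             -- rewrites the table and frees that space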

-Micah

On 11/17/2008 06:35 PM, sangprabv wrote:
 Thanks for the reply. Does this overhead reduce performance? And are
 there any tips to avoid this overhead? TIA.


 WM


   




Re: Overhead Issue

2008-11-17 Thread Micah Stevens
I don't think this is indicative of a design issue. Some tables need
data removed more often than others; however, Moon's Father brings up an
excellent point. If you CAN resolve this with a change in design, that
would be the best solution, of course.

-Micah

On 11/17/2008 06:50 PM, Moon's Father wrote:
 Maybe your tables were not properly designed.

 On Tue, Nov 18, 2008 at 10:35 AM, sangprabv [EMAIL PROTECTED] wrote:

   
 Thanks for the reply, does this overhead reduce performance? And is
 there any tips to avoid this overhead? TIA.


 WM


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]


 


   


Re: Row before and after?

2008-11-12 Thread Micah Stevens
Select from the UserId one less, and then ORDER BY UserId ASC LIMIT 3.

Assuming your UserIds are sequential, it's easy; given userID X:

SELECT * FROM Users WHERE UserId >= X-1 ORDER BY UserId ASC LIMIT 3;

If they're not sequential due to deletions, etc., it becomes a bigger
problem. You could do a subquery, but that would only be marginally
faster than two queries.
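
For what it's worth, a sketch of getting both neighbouring rows in one
statement with a UNION (X stands for the target UserId):

(SELECT * FROM Users WHERE UserId < X ORDER BY UserId DESC LIMIT 1)
UNION ALL
(SELECT * FROM Users WHERE UserId > X ORDER BY UserId ASC LIMIT 1);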

Sorry if I'm not more creative in the morning. :)

-Micah

On 11/12/2008 01:10 AM, Waynn Lue wrote:
 Whoops, just realized I made a mistake in the examples.  What I'm really
 looking for is these two queries:

 SELECT * FROM Users WHERE UserId < *userid*;
 SELECT * FROM Users WHERE UserId > *userid*;

 Waynn

 On Wed, Nov 12, 2008 at 12:14 AM, Waynn Lue [EMAIL PROTECTED] wrote:

   
 I'm trying to find the first row before and the first row after a specific
 row.  Essentially I want to do these two queries, and get each row.

 SELECT * FROM Users WHERE UserId <= userId ORDER BY UserId DESC LIMIT 1;
 SELECT * FROM Users WHERE UserId >= userId ORDER BY UserId LIMIT 1;

 Is there any way to combine this into one query?  OFFSET doesn't allow a
 negative number, which is essentially what I want.

 Thanks,
 Waynn

 

   


Re: when calling mysql_real_connect from the c api, I get a malloc() memory corruption error.

2008-10-29 Thread Micah Stevens
A working server relies on the MySQL binary, the system libraries, and 
the hardware itself. From your description you don't address library 
compatibility. Have you ensured that your set of libraries is the same?


-Micah

On 10/29/2008 11:15 AM, Kevin Stevens wrote:

Hello,
I am encountering a problem I just can't seem to figure out and I am out of 
ideas. I can compile and run fine on one linux box running Mysql 5.1.23-rc, but 
as soon as I scp the binary and its required libs to another machine which has 
identical hardware and only a slightly upgraded distro (but also running 
5.1.23-rc), I get a glibc malloc(): memory corruption: *** error, which traces 
back to the mysql_real_connect() call. I ran ldd -v on the binary on both 
machines and there are some differences, but they don't look important. I have 
run this binary on other machines before with no issues.

I can connect to the database on the troubled machine both locally from the 
client and through my program from a different machine, but my program craps 
out when I run it locally on this new box. What could be going wrong? The 
database connection is one of the first things the program does- before we do 
any significant allocation of memory, so I really do not believe that this is a 
problem with my program (it has also been continually tested with many 
different data sets).

I checked the bug database and this forum and could not find any relevant 
information, if you have any ideas, please let me know! Below is the output 
from my program, and the code I am using to connect- am I doing something wrong 
there? If there is any more information I can provide, please let me know.

Thank you,
-Kevin



Program output
==

Starting Engine...
Reading Control Data
Using configfile: /some.ini
Populating Data from Database...
DB Connection Handle is null, going to create a new connection!
We have created the handle... initializing.
We have initialized the handle, connecting...
Connecting with parameters: Server: localhost Login: root Pass: X 
DBDatabase: mismiats
*** glibc detected *** ./ksMismi2.out: malloc(): memory corruption: 
0x011d4b40 ***
=== Backtrace: =
/lib/libc.so.6[0x7f8f6919da14]
/lib/libc.so.6(__libc_malloc+0x90)[0x7f8f6919f360]
libmysqlclient.so.16(my_malloc+0x32)[0x7f8f6a6a5032]
libmysqlclient.so.16(vio_new+0x2f)[0x7f8f6a6cd15f]
libmysqlclient.so.16(mysql_real_connect+0xe43)[0x7f8f6a6c9fe3]
./ksMismi2.out[0x51671e]
./ksMismi2.out[0x51678b]
./ksMismi2.out[0xd5cd68]
./ksMismi2.out[0x57d32e]
./ksMismi2.out[0x57dc41]
./ksMismi2.out[0x49a251]
/lib/libc.so.6(__libc_start_main+0xf4)[0x7f8f691471c4]
./ksMismi2.out(__gxx_personality_v0+0x1c2)[0x4994ea]
=== Memory map: 
0040-01012000 r-xp  fe:00 2048027 /usr/local/engine/bin/ksMismi2.out
01112000-011a9000 rwxp 00c12000 fe:00 2048027 /usr/local/engine/bin/ksMismi2.out
011a9000-011f3000 rwxp 011a9000 00:00 0 [heap]
7f8f6000-7f8f60021000 rwxp 7f8f6000 00:00 0
7f8f60021000-7f8f6400 ---p 7f8f60021000 00:00 0
7f8f64c3-7f8f64c3a000 r-xp  fe:00 1171488 /lib/libnss_files-2.7.so
7

[ snipped... ]

7fff72b7d000-7fff72b92000 rwxp 7ffea000 00:00 0 [stack]
7fff72bfe000-7fff72c0 r-xp 7fff72bfe000 00:00 0 [vdso]
ff60-ff601000 r-xp  00:00 0 [vsyscall]




Code
==


MYSQL* dbConn::getDBConnectionHandle ()
{
    my_bool reconn = 1;

    if (DBconnectionHandle != NULL)
    {
        while ( !mysql_real_connect( DBconnectionHandle, DBServer.c_str(),
                DBLogin.c_str(), DBPassword.c_str(), DBDatabase.c_str(), DBPort, NULL,
                CLIENT_MULTI_STATEMENTS) ) // Client multi statements used for batched inserts of trades.
        {
            cout << "ERROR: Connecting to MYSQL. ERROR Reported from Server is: "
                 << mysql_error(DBconnectionHandle) << endl;
        }

        mysql_options( DBconnectionHandle, MYSQL_OPT_RECONNECT, (const char*)&reconn );
        // must do twice, bug in mysql lib

        return DBconnectionHandle;
    }
    else
        cout << "DB Connection Handle is null, going to create a new connection!" << endl;

    DBconnectionHandle = new MYSQL();
    cout << "We have created the handle... initializing." << endl;

    if ( mysql_init(DBconnectionHandle) == NULL )
    {
        cout << "ERROR: initializing connection handle: Error from Mysql: "
             << mysql_error(DBconnectionHandle) << endl;
        return NULL;
    }

    cout << "We have initialized the handle, connecting..." << endl;
    cout << "Connecting with parameters: Server: " << DBServer << " Login: " << DBLogin
         << " Pass: " << DBPassword << " DBDatabase: " << DBDatabase << endl;

    mysql_options( DBconnectionHandle, MYSQL_OPT_RECONNECT, (const char*)&reconn );

    while ( !mysql_real_connect( DBconnectionHandle, DBServer.c_str(),
            DBLogin.c_str(), DBPassword.c_str(), DBDatabase.c_str(), DBPort, NULL,
            CLIENT_MULTI_STATEMENTS) ) // Client multi statements used for batched inserts of trades.
    {
        cout << "ERROR: Connecting to MYSQL. ERROR Reported from Server is: "
             << mysql_error(DBconnectionHandle) << endl;
    }

    mysql_options( DBconnectionHandle, MYSQL_OPT_RECONNECT, (const char*)&reconn );
    // must do twice, bug

Re: Deployment of Database in a DEB package.

2008-10-28 Thread Micah Stevens
It's a package for Debian-based systems, which include Ubuntu and a few 
others. It's like an RPM for Fedora/Red Hat Linux.

-Micah

On 10/27/2008 06:39 PM, Moon's Father wrote:

I'm sorry, may I ask a question? What is a DEB?

On Fri, Oct 3, 2008 at 9:49 PM, US Data Export
[EMAIL PROTECTED]wrote:

  


-Original Message-
From: Ellison, David [mailto:[EMAIL PROTECTED]
Sent: Friday, October 03, 2008 2:35 AM
To: mysql@lists.mysql.com
Subject: RE: Deployment of Database in a DEB package.

That's true, you would need to consider that risk. Can a DEB package ask
for information before deployment? If so, you could ask for the
username/password and run the mysql command using the provided info,
therefore not risking the problem of having a username/password within
the files. Although I am making a big assumption on whether you can grab
input and use it in scripts, of course :)

  

[JS] Bash, ksh, and other shells have a read command that can be used to
take input from the console (or other sources).
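
For illustration, a sketch of the sort of statements a package's post-install
step might run once it has those credentials (database, user, and password
here are placeholders):

CREATE DATABASE IF NOT EXISTS myapp;
GRANT ALL PRIVILEGES ON myapp.* TO 'myapp'@'localhost' IDENTIFIED BY 'changeme';
FLUSH PRIVILEGES;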


Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com
www.giiexpress.com
www.etudes-marche.com







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]






  


Re: Relational Databasing on busy webserver (Benchmark of Enum?!)

2008-09-23 Thread Micah Stevens

On 09/23/2008 02:42 PM, Ben A.H. wrote:
I figured that was what you meant... I guess my table didn't work (see above 
message...don't ya' love plaintext :-O)...


Has anyone ever tried to benchmark the difference between utilizing ENUMs 
vs. traditional relational databasing? I would think ENUM is ideal for items 
I specified at the beginning of this thread, items I would think would be 
part of MANY (if not MOST) databases (state, country, gender, industry, 
occupation, referredFrom, ethnicity, position)... In my case, it would allow 
me to eliminate 15+ tables...


I'm just wondering why database ENUMS aren't used more often... (what's the 
catch)
  
Just thought I'd jump in with some terms here: the phrase 'traditional 
relational databasing' is kind of misleading. You're probably still 
relational in some sense of the term even when using a ton of 
ENUMs. What you're talking about is fully normalized form. Take a look here:


http://en.wikipedia.org/wiki/Database_normalization

To answer your question, normalization isn't done for speed. In fact, 
I'd hazard a guess that fully normalized databases are almost always 
slower than if the designer makes some shortcuts. What they provide is a 
fully structured way to organize your data.


If you're just storing data, and you know there's only a limited number 
of ways you're going to pull data out of the set, then I'd make as many 
shortcuts as I could to provide speed. If you are using the dataset for 
analysis, and you're not sure how the users are going to extract data, 
then you may want to go further down the normalization road.


As with anything, normalization can reach a point of diminishing returns, but 
it serves a very important purpose. By normalizing your data, and not 
using things like ENUMs, you're maintaining an accurate data structure 
which can then be used arbitrarily. But yeah, it's not necessarily fast, 
probably the opposite.
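
For illustration, a minimal sketch of the two approaches being weighed here
(all table and column names are made up):

-- ENUM approach: the allowed values live in the column definition
CREATE TABLE profile_enum (
  id     INT AUTO_INCREMENT PRIMARY KEY,
  gender ENUM('male','female','unspecified')
);

-- Normalized approach: the allowed values live in a lookup table
CREATE TABLE gender (
  gender_id TINYINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  label     VARCHAR(32) NOT NULL
);
CREATE TABLE profile_normalized (
  id        INT AUTO_INCREMENT PRIMARY KEY,
  gender_id TINYINT UNSIGNED   -- joined to gender when the label is needed
);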


Hope that helps.
-Micah




Re: Selecting around a circular reference?

2008-09-06 Thread Micah Stevens

Did you try this?

SELECT
   sum(fooditems.carb * mealitems.quantity) as sumcarbs,
   sum(fooditems.gi * ((fooditems.carb * mealitems.quantity) / 
sum(fooditems.carb * mealitems.quantity))),

   sum(fooditems.gl * mealitems.quantity),
   sum(fooditems.cal * mealitems.quantity)
   FROM meals
   INNER JOIN mealitems ON meals.meal_id = mealitems.meal_id
   INNER JOIN fooditems ON mealitems.fooditem_id = fooditems.fooditem_id
   WHERE meals.user_id = '$user_id' AND meals.date = '$meal_date';
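
If the server rejects an aggregate nested inside another aggregate, an
algebraically equivalent form divides by the day's total carbs once; a sketch
using the same tables and columns:

SELECT
   sum(fooditems.carb * mealitems.quantity) as sumcarbs,
   sum(fooditems.gi * fooditems.carb * mealitems.quantity)
       / sum(fooditems.carb * mealitems.quantity) as weighted_gi,
   sum(fooditems.gl * mealitems.quantity),
   sum(fooditems.cal * mealitems.quantity)
   FROM meals
   INNER JOIN mealitems ON meals.meal_id = mealitems.meal_id
   INNER JOIN fooditems ON mealitems.fooditem_id = fooditems.fooditem_id
   WHERE meals.user_id = '$user_id' AND meals.date = '$meal_date';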

-Micah

On 09/05/2008 03:41 PM, Brian Dunning wrote:
I'm trying to calculate glycemic index of all food items eaten in a 
day with a single SELECT. The problem is the calculation for glycemic 
index of each item requires a total of all items' carbs. It's like a 
circular reference. Here's what I'm trying:


SELECT
sum(fooditems.carb * mealitems.quantity) as sumcarbs,
sum(fooditems.gi * ((fooditems.carb * mealitems.quantity) / 
sumcarbs)),

sum(fooditems.gl * mealitems.quantity),
sum(fooditems.cal * mealitems.quantity)
FROM meals
INNER JOIN mealitems ON meals.meal_id = mealitems.meal_id
INNER JOIN fooditems ON mealitems.fooditem_id = fooditems.fooditem_id
WHERE meals.user_id = '$user_id' AND meals.date = '$meal_date';

It replies "Unknown column 'sumcarbs' in 'field list'". See what I'm 
trying to do? Is there a way to do this?







Re: Connection failed!!

2008-08-25 Thread Micah Stevens

There's probably a password set. Look up how to reset a password:

http://dev.mysql.com/doc/refman/5.0/en/resetting-permissions.html
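
A rough sketch of the generic procedure from that page for a 5.0 server:
start mysqld with --skip-grant-tables, connect, and run:

UPDATE mysql.user SET Password = PASSWORD('NewPassword') WHERE User = 'root';
FLUSH PRIVILEGES;

Then restart the server normally and log in with the new password.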



On 08/25/2008 03:25 AM, Matthew Stuart wrote:
I have just loaded MySQL 5 and MySQL Front onto a new computer, and I 
am now getting a MySQL error which is:


Connection failed:
1045 - Access denied for user 'root'@'localhost' (using password: NO)

My old PC has exactly the same settings, but that uses MySQL 4.

I have done a system restore to the point before I installed MySQL 5, 
and then installed MySQL 4, and that went on with no problem, and I am 
able to make a connection with MySQL Front! So, what's different in 
the installation process between v4 and v5 that stops me making a 
connection with v5? What have I done wrong?


In both instances, I have used 'localhost' as my hostname and 'root' 
as my username, with 'root' as the password for v4 and no password for 
v5 (because I didn't see anywhere that it asked for it - I think).


I am using MySQL Front because I'm not Neo and I don't see the world 
in zeros and ones ;)



Mat







Re: debian-sys-maint

2008-01-03 Thread Micah Stevens
This is one of the few decisions the debian package maintainers made 
that I disagree with, but the idea is that when you install mysql, there 
is this user created with a random password. This gives the package 
maintainers a way to script updates in SQL if necessary to run on the 
database during an upgrade.


The SysV startup script uses this account somehow (I forget now and I'm 
too lazy to open it up in a text editor) but that's what's generating 
the warning. Not MySQL. It's just a shell script warning.


Edit the startup script and remove the user. It won't affect anything 
but you may get some errors during an upgrade at some point if they 
decide to use it. It's not a big security issue unless someone gets root 
access to the server, but if that's the case, I don't think mysql 
security will rank among the greatest of your worries.
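
Alternatively, if you'd rather keep the account, a sketch of recreating it so
the scripts stop complaining (the password is a placeholder and has to match
the one stored in /etc/mysql/debian.cnf):

GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost'
    IDENTIFIED BY 'password-from-debian.cnf' WITH GRANT OPTION;
FLUSH PRIVILEGES;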


-Micah

On 01/03/2008 09:23 PM, Krishna Chandra Prajapati wrote:

Hi all,

MySQL on the Debian operating system has a debian-sys-maint user. What is the role
of this debian-sys-maint user?
After stopping MySQL, I deleted everything from the MySQL data directory,
executed mysql_install_db to recreate the mysql database in the data
directory, and started the MySQL server. It gives a warning that it can't find
the debian-sys-maint user.

I am trying to restart and stop the MySQL server, but it is not working.
What could be the issue?

Thanks,
  





Re: Data directory in 2 platforms

2007-12-27 Thread Micah Stevens
On a dual boot it should work okay. I've done a similar thing by taking 
the data folder from a Linux installation, copying it to a local Windows 
computer and using a local install (same version, of course) to read it. 
It worked fine. I would think the scenario is much the same as what 
you're suggesting.


Any issues with the file system in Linux would be the same in Windows as 
well (filename length, etc.) so if it's working in Windows, it should 
work fine in Linux too.


However, before committing, try it out with backed up data of course!

-Micah

On 12/24/2007 01:15 PM, Ahmad AlTwaijiry wrote:

Good morning everyone,

I want to run the MySQL server on Windows and Linux (a dual-boot 
desktop), but I want to put the data directory (the one that has all 
the databases' tables and information) on one shared VFAT filesystem, so 
that if I create a database while I'm running Linux and then reboot into 
Windows and start the MySQL server, I can see the database with all the 
tables from Windows.


My concerns are:

1. Is it possible to share the data directory between more than one OS?
2. Can MySQL run on Linux with the data directory on a VFAT 
filesystem? (I know it will run fine on Windows.)


Thank you for reading my email and have a nice day :).







Re: Determining Table Storage Engine Type on Crashed Table

2007-11-22 Thread Micah Stevens

Look at the data files. The extension of the file will tell you.
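
On MySQL 5.0+ the data dictionary can also be queried, though for a crashed
MyISAM table it may report the same NULLs as SHOW TABLE STATUS; the data file
extensions (.MYD/.MYI for MyISAM, .ibd for InnoDB with innodb_file_per_table)
remain the most reliable indicator:

SELECT TABLE_NAME, ENGINE
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_SCHEMA = 'blah' AND TABLE_NAME = 'table_name';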


On 11/21/2007 12:42 PM, Richard Edward Horner wrote:

Hey everybody,

Hopefully some of you are already enjoying time off. I am not...yet :)

Anyway, is there a way to determine what storage engine a table is
using if it's crashed?  When it's fine, I can just run:

mysql> show table status like 'table_name'\G
*************************** 1. row ***************************
           Name: table_name
         Engine: MyISAM
        Version: 10
     Row_format: Fixed
           Rows: 985984
 Avg_row_length: 13
    Data_length: 12817792
Max_data_length: 3659174697238527
   Index_length: 34238464
      Data_free: 0
 Auto_increment: 1182153
    Create_time: 2007-11-15 17:44:28
    Update_time: 2007-11-21 15:28:07
     Check_time: 2007-11-21 15:28:18
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment:
1 row in set (0.00 sec)

As you can see, the second column returned is the Engine. In this
case, MyISAM. Now, if I crash the table, it doesn't work:

mysql> show table status like 'table_name'\G
*************************** 1. row ***************************
           Name: table_name
         Engine: NULL
        Version: NULL
     Row_format: NULL
           Rows: NULL
 Avg_row_length: NULL
    Data_length: NULL
Max_data_length: NULL
   Index_length: NULL
      Data_free: NULL
 Auto_increment: NULL
    Create_time: NULL
    Update_time: NULL
     Check_time: NULL
      Collation: NULL
       Checksum: NULL
 Create_options: NULL
        Comment: Table './blah/table_name' is marked as crashed and should be repaired
1 row in set (0.00 sec)

Now, let's assume for a moment this were an InnoDB table. If I were to
try and run repair, it would say that the storage engine does not
support repair, so clearly it knows what the storage engine is. How do
I get it to tell me? Or I guess a broader, more helpful question would
be: what are all the ways to determine a table's storage engine
type?

Thanks,
  





Re: Memory Problems

2007-05-15 Thread Micah Stevens
I think you may be able to get around this by using multiple key 
buffers? (MySQL 4.1 or later)
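
A sketch of setting up a second key cache and assigning an index to it
(MySQL 4.1+); the size and table name are placeholders:

SET GLOBAL hot_cache.key_buffer_size = 512 * 1024 * 1024;
CACHE INDEX busy_table IN hot_cache;
LOAD INDEX INTO CACHE busy_table;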


-Micah

On 05/15/2007 01:24 AM, Christoph Klünter wrote:

Hi List,

We have a MySQL server with 8G of RAM, but MySQL doesn't use this RAM,
and we get the following error:

May 14 22:56:11 sql mysqld[5875]: 070514 22:56:10 [ERROR] /usr/sbin/mysqld: Got 
error 12 from storage engine
May 14 22:56:11 sql mysqld[5875]: 070514 22:56:10 [ERROR] /usr/sbin/mysqld: 
Sort aborted

I have set the sort_buffer_size to 1G but even this doesn't help.
Any hints? Should we try a 64-bit OS?

Regards
 Christoph


Kernel is a 2.6.18-3-686-bigmem from debian-etch.

mysql show status;
+++
| Variable_name  | Value  |
+++
| Aborted_clients| 103352 |
| Aborted_connects   | 3  |
| Binlog_cache_disk_use  | 0  |
| Binlog_cache_use   | 0  |
| Bytes_received | 2985193088 |
| Bytes_sent | 3725769917 |
| Com_admin_commands | 5377515|
| Com_alter_db   | 0  |
| Com_alter_table| 8  |
| Com_analyze| 0  |
| Com_backup_table   | 0  |
| Com_begin  | 0  |
| Com_change_db  | 5434526|
| Com_change_master  | 0  |
| Com_check  | 0  |
| Com_checksum   | 0  |
| Com_commit | 0  |
| Com_create_db  | 0  |
| Com_create_function| 0  |
| Com_create_index   | 0  |
| Com_create_table   | 0  |
| Com_dealloc_sql| 0  |
| Com_delete | 154176 |
| Com_delete_multi   | 0  |
| Com_do | 0  |
| Com_drop_db| 0  |
| Com_drop_function  | 0  |
| Com_drop_index | 0  |
| Com_drop_table | 0  |
| Com_drop_user  | 0  |
| Com_execute_sql| 0  |
| Com_flush  | 2  |
| Com_grant  | 0  |
| Com_ha_close   | 0  |
| Com_ha_open| 0  |
| Com_ha_read| 0  |
| Com_help   | 0  |
| Com_insert | 473672 |
| Com_insert_select  | 0  |
| Com_kill   | 0  |
| Com_load   | 0  |
| Com_load_master_data   | 0  |
| Com_load_master_table  | 0  |
| Com_lock_tables| 0  |
| Com_optimize   | 0  |
| Com_preload_keys   | 0  |
| Com_prepare_sql| 0  |
| Com_purge  | 2  |
| Com_purge_before_date  | 0  |
| Com_rename_table   | 0  |
| Com_repair | 0  |
| Com_replace| 0  |
| Com_replace_select | 0  |
| Com_reset  | 0  |
| Com_restore_table  | 0  |
| Com_revoke | 0  |
| Com_revoke_all | 0  |
| Com_rollback   | 0  |
| Com_savepoint  | 0  |
| Com_select | 14627137   |
| Com_set_option | 450|
| Com_show_binlog_events | 0  |
| Com_show_binlogs   | 13 |
| Com_show_charsets  | 112|
| Com_show_collations| 112|
| Com_show_column_types  | 0  |
| Com_show_create_db | 0  |
| Com_show_create_table  | 42 |
| Com_show_databases | 10 |
| Com_show_errors| 0  |
| Com_show_fields| 111|
| Com_show_grants| 46 |
| Com_show_innodb_status | 0  |
| Com_show_keys  | 56 |
| Com_show_logs  | 0  |
| Com_show_master_status | 0  |
| Com_show_new_master| 0  |
| Com_show_open_tables   | 0  |
| Com_show_privileges| 0  |
| Com_show_processlist   | 0  |
| Com_show_slave_hosts   | 0  |
| Com_show_slave_status  | 0  |
| Com_show_status| 4930   |
| Com_show_storage_engines   | 7  |
| Com_show_tables| 250|
| Com_show_variables | 4708   |
| Com_show_warnings  | 0  |
| Com_slave_start| 0  |
| Com_slave_stop | 0  |
| Com_truncate   | 1  |
| Com_unlock_tables  | 0  |
| Com_update | 1088621|
| Com_update_multi   | 0  |
| Connections| 61722  |
| Created_tmp_disk_tables| 6036   |
| 

Re: Max columns in a tabel in MyISAM storage engine

2007-04-17 Thread Micah Stevens

Fabian Köhler wrote:

Hello,

I have a table with answers to questions. Every answer is a column in the table, i.e.
i.e.

id|q1|q2|q3
1|answer1|answer2|answer5
2|answer3|answer4|asnwer6

Another option to save it would be something like this:

id|field|value
1|q1|answer1
1|q2|answer2
1|q3|answer5
2|q1|answer3
...

The last one is not really usable when working with large amounts of data: 
when you want to select, say, 200 questions with answers, it's 200 * (number 
of answers) queries to get them.

  
Really? Wow, my opinion is that you're trying to do in one table what 
you should do in two. Have a questions table, and an answers table. The 
answers table would have a column specifying which question they belong 
to. i.e.


QUESTIONS
int autoincrement questionID
question

ANSWERS
int autoincrement answerID
int questionID
answer

Then if you want all the answers to a question, you just grab them:

select * from answers where questionID = '100'

Or, you can do a join, and get the question information in the same query.
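
A sketch of that join, using the same table layout:

SELECT q.questionID, q.question, a.answerID, a.answer
FROM QUESTIONS q
LEFT JOIN ANSWERS a ON a.questionID = q.questionID
WHERE q.questionID = '100';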

-Micah

The problem with the first solution is that the MyISAM storage engine is limited 
to 2599 columns, I think. So what happens if I have more answers than 
columns available?

Does anybody have an idea? VIEWs or GROUP_CONCAT don't seem to be an option; they are 
all too slow. What's the right way to store and select such information?

Thank you very much.

regards,

Fabian

--
Fabian Köhler  http://www.fabiankoehler.de

  




Re: Max columns in a tabel in MyISAM storage engine

2007-04-17 Thread Micah Stevens

On 04/17/2007 04:18 AM, Fabian Köhler wrote:

Thanks for all the input. The problem i have with this idea:

  
Really? Wow, my opinion is that you're trying to do in one table what 
you should do in two. Have a questions table, and an answers table. The 
answers table would have a column specifying which question they belong 
to. i.e.


QUESTIONS
int autoincrement questionID
question

ANSWERS
int autoincrement answerID
int questionID
answer



is, that
1) I must define the column with the value of the answer as a TEXT to
cover all possible answers, although a SMALLINT might fit better. But
I can live with that.
  
Not a big deal. Any performance increase will be marginal, and the ease 
of having a standard column type in my opinion outweighs any performance 
decrease.

2) I think it's slow when I have lots of data. I'm currently not at home
to do performance tests, but let's assume I have 2000 questions, and I
collect by user input 2000 answers to every question from 10,000 different
users (a realistic scenario for our application) and store it in a table
like this:

  



ANSWERS
int autoincrement answerID
int userID
int questionID
text answer 


I then have 2000 * 10,000 = 20 million rows in ANSWERS. Now let's assume I
want to select all answers to all questions for 100 random users at
once. The only solution I see here for one query is a large join which
will take really, really long to get the data even with proper indexes,
or did I do something wrong?

  
How is this not a processor-intensive operation no matter what? If 
you're collecting this much data, you have to deal with that much data. 
The key is to arrange it so that the database knows something about the 
relationships between the data (questions and answers, in this case) so 
that you can generate efficient queries.

Is there a way to provide a view on these two tables (QUESTIONS and
ANSWERS) that simulates the original table I intended, and which is actually
fast even with large amounts of data?

Yes, with either a VIEW or a JOIN.
Are other DBs like Oracle better at doing joins on such large tables? 

  
I can't speak for MySQL 5.0 because it's been a while since I've done 
performance tests, but the old 3.x versions of MySQL would perform much 
better than oracle with simple joins.

So I really like the idea of having one large table with many columns,
because it's really fast and I can define each column properly, but I
currently see no option to go this way, and with that approach there will
always be a limit to the maximum number of answers in the system.

  
If you're doing that, why not just make a flat-file database of the 
answers? That would be much faster still than dealing with the added 
overhead of a SQL server. You're ignoring the advantages of having a SQL 
setup in this case anyhow.


-Micah



Re: renaming database

2007-04-12 Thread Micah Stevens

Yeah, in SQL:

RENAME DATABASE start_name TO new_name;
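
If RENAME DATABASE isn't available in the server version at hand, a sketch of
the per-table route, which also works for InnoDB (t1, t2 stand for the actual
table names):

CREATE DATABASE new_name;
RENAME TABLE start_name.t1 TO new_name.t1,
             start_name.t2 TO new_name.t2;
-- ...one entry per table, then:
DROP DATABASE start_name;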

-Micah

On 04/12/2007 01:34 AM, Octavian Rasnita wrote:

Hi,

How can I rename a database if it contains InnoDB tables?

I have tried renaming the directory where it is located, but it 
doesn't work this way.


Is there a method that works faster than dumping it with mysqldump 
and then re-creating it under another name?


Thanks.

Octavian







Re: question about Queries per second avg

2007-04-06 Thread Micah Stevens
It's just telling you how many queries per second on average the server 
is receiving. This says nothing about how long it takes to execute a 
particular query.
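
The figure is just total questions divided by server uptime; both counters are
visible directly:

SHOW GLOBAL STATUS LIKE 'Questions';
SHOW GLOBAL STATUS LIKE 'Uptime';
-- Queries per second avg = Questions / Uptime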


-Micah

On 04/06/2007 01:22 AM, C.R.Vegelin wrote:

Hi List,

Using printf("System status: %s\n", mysqli_stat($link));
in a PHP script, it says: Queries per second avg: 0.051.
This means that a query takes about 20 seconds?
But the query result is given immediately.
How should I interpret "Queries per second avg"?
I'm using version 5.0.15 NT.

Regards, Cor

  





Re: Joins versus Grouping/Indexing: Normalization Excessive?

2007-04-02 Thread Micah Stevens
I think you're approaching this from the wrong angle. You'll want to put 
the data at the highest level at which it changes.


i.e. If every song on an album is always the same year, put it at the 
album level; however, if it changes from song to song on a particular 
album, then you want it at the song level.


Year wouldn't ever apply to artist, I don't think, unless they're truly a 
one hit wonder. :)
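
For illustration, a sketch of what that looks like in table form (names are
made up, not the poster's schema):

CREATE TABLE albums (
  album_id  INT AUTO_INCREMENT PRIMARY KEY,
  artist_id INT NOT NULL,
  title     VARCHAR(255) NOT NULL,
  year      SMALLINT               -- lives here if every track shares it
) ENGINE=InnoDB;

CREATE TABLE songs (
  song_id  INT AUTO_INCREMENT PRIMARY KEY,
  album_id INT NOT NULL,
  title    VARCHAR(255) NOT NULL,
  year     SMALLINT NULL           -- only needed here if it varies per track
) ENGINE=InnoDB;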


-Micah

On 04/02/2007 09:14 PM, Daniel Cousineau wrote:

So I'm currently designing a database for a web site and intranet for my
campus's student radio. Since I'm not getting paid for this and I'm doing
this in my free time, I kinda want to take my time and have the system
set up as perfectly as any one college student can.

I'm currently debating on how I want to store all the songs in the 
system.

Namely, I'm not quite sure which way will eke the most performance out of
MySQL.

My plan so far is quite normalized, a songs table with 1:1 relationships
with an Albums, Artists, and Genres table.

The big benefit I see from this is that when I'm building the intranet
application I'll want to pull just all of the artists or all of the
albums, etc. However, I feel like I'm encountering issues with where to
store the year field of an mp3 (do I want it on the album, the song, or
both), along with issues like printing everything out at once.

The only other way I can think of that's relatively efficient is to 
have the

singular songs table and have indexes on albums, artists, and genres.

My question, more out of curiosity than necessity, is which of these 
would

be more efficient (given that I'll be using the InnoDB storage engine)?

Other relevant facts include it'll be using the latest, stable release of
MySQL 5 and I'll be developing in PHP5 (through CakePHP's database
abstraction layer).






Re: CVS-Like System For Database Changes

2007-04-01 Thread Micah Stevens
Using ALTER statements would make it tough to get a complete view. I 
would stick with your original idea. This would enable diffs to work 
nicely, and the latest revision would contain everything you need to 
know to create the database.


-Micah

On 04/01/2007 07:11 AM, Miles Thompson wrote:

Michael,

We're about to head down the same road, using Subversion, and my 
thought was to
initially capture a series of CREATE TABLE statements and store them 
all in one file.


Then as schema was modified on the development server, update those 
statements using SVN.


Your idea looks a lot better; may I presume to outline how I think you 
use it?
I'm assuming you capture, for each table, an initial CREATE TABLE 
and save it in a file. Then as the schema changes you update the file 
with the ALTER TABLE statements, committing the changes.


If you have to recreate the database, you execute the file up to the 
last change point.


I suppose you could do the same thing to maintain the data stored in 
lookup tables.


We're using Joomla! and have extended it quite a bit, and are now 
running three databases - dev, beta and since last week, live.


Later this week I'll be moving myself and one other developer to an 
SVN environment; we will see how it goes.



Cheers - Miles Thompson


At 07:04 PM 3/30/2007, Michael Dykman wrote:


We keep all of the schema (one file per table) in SVN (Subversion)
with a directory to represent each database.  As the schema evolves,
we have had no trouble tracking changes through resource history and
are able to extract diffs on every committed change.  It works like a
charm and would probably work equally as well with CVS.

- michael


On 3/30/07, Tim Gustafson [EMAIL PROTECTED] wrote:

Hello!

I'm just getting into using CVS to track changes to my source code for PHP
for PHP
projects that I'm working on, and it's exactly what my organization 
needed.


However, there does not appear to be a way to track changes to MySQL
databases in the same way.  Basically, as the structure of tables is
changed to meet the requirements of new features, I'd like a way to be able
to record those changes (both structural table changes and also default
table data such as a table of states or zip codes or whatever) in a CVS-type
system (preferably integrated with CVS directly) so that when a customer
uses CVS to get the newest version of the code for their project, they can
also get (and automatically apply) all changes to their database for the new
version.

Does such a system exist?  How do other people cope with these types of
updates?

Thanks for any guidance!

Tim Gustafson
(831) 425-4522 x 100
(831) 621-6299 Fax


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
- michael dykman
- [EMAIL PROTECTED]








Re: some questions about recovering/migrating databases

2007-04-01 Thread Micah Stevens

On 04/01/2007 09:06 AM, Jonathan Horne wrote:

i have a production system running FreeBSD 6.2-p3/MySQL 5.0.33, with 2 
databases.  i also have a development box, which is pretty much a mirror of my 
production system, that i would like to import my databases into.  daily, the 
production system dumps the databases to .sql files, as well as is also backed 
up by a netbackup server to tape (entire filesystem is backed up.  right now 
the dev system has had no databases created in it.

is it possible to restore from tape, the /var/db/mysql directory from the 
production into the dev system (ie, and would it start and load those 
databases?)  or, is really the only viable way to migrate or restore a 
database, is to create the blank database, and then reload the dump?

  
In short, yes, both methods are possible. Although you have to realize 
that not all the data is in the data files all the time, nor is it 
necessarily in a consistent state, so to maintain a backup using these 
files, you will need to shut down the database server to ensure that all 
the data/changes have been written to these files from memory and cache. 
This can pose a problem in a production system for obvious reasons.


For this reason I backup using a SQL dump, you just need to lock the 
tables and go. It compresses nicely too.


-Micah




Re: CVS-Like System For Database Changes

2007-04-01 Thread Micah Stevens
Sounds like perhaps an unnecessary complication; why would this be 
better than the root SQL CREATE statements?


-Micah

On 04/01/2007 12:41 PM, Anoop kumar V wrote:

DDLUTILS is what you need:

check this out:
http://db.apache.org/ddlutils/

and better still (if you are using Ant as a build tool):
http://db.apache.org/ddlutils/ant/

Then you can store these ant scripts in your VCS (version control 
system).

To create or destroy the schema with data just run an ant target and you
would be done.

Anoop


On 4/1/07, Micah Stevens [EMAIL PROTECTED] wrote:


Using ALTER statements would make it tough to get a complete view. I
would stick with your original idea. This would enable diffs to work
nicely, and the latest revision would contain everything you need to
know to create the database.

-Micah

On 04/01/2007 07:11 AM, Miles Thompson wrote:
 Michael,

 We're about to head down the same road, using Subversion, and my
 thought was to
 initially capture a series of CREATE TABLE statements and store them
 all in one file.

 Then as schema was modified on the development server, update those
 statements using SVN.

 Your idea looks a lot better, may I presume to outline how I think you
 use it?
 I'm assuming you capture, for each table, an initial CREATE TABLE,
 and save it in a file. Then as the schema changes you update the file
 with the ALTER TABLE statements, commiting the changes.

 If you have to recreate the database, you execute the file up to the
 last change point.

 I suppose you could do the same thing to maintain the data stored in
 lookup tables.

 We're using Joomla! and have extended it quite a bit, and are now
 running three databases - dev, beta and since last week, live.

 Later this week I'll be moving myself and one other developer to an
 SVN environment; we will see how it goes.


 Cheers - Miles Thompson


 At 07:04 PM 3/30/2007, Michael Dykman wrote:

 We keep all of the schema (one file per table) in SVN (subversion)
 with a directory to represent each database.  As the schema evolves,
 we have had no trouble tracking changes through resource history and
 are able to extract diffs on every commited change.  It works like a
 charm and would proably work equally as well with CVS.

 - michael


 On 3/30/07, Tim Gustafson [EMAIL PROTECTED] wrote:
 Hello!

 I'm just getting in to using CVS to track changes to my source code
 for PHP
 projects that I'm working on, and it's exactly what my organization
 needed.

 However, there does not appear to be a way to track changes to mySQL
 databases in the same way.  Basically, as the structure of tables 
are

 changed to meet the requirements of new features, I'd like a way to
 be able
 to record those changes (both structural table changes and also
 default
 table data such as table of states or zip codes or whatever) in a
 CVS-type
 system (preferably integrated with CVS directly) so that when a
 customer
 uses CVS to get the newest version of the code for their project,
 they can
 also get (and automatically apply) all changes to their database for
 the new
 version.

 Does such a system exist?  How do other people cope with these types
of
 updates?

 Thanks for any guidance!

 Tim Gustafson
 (831) 425-4522 x 100
 (831) 621-6299 Fax


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



 --
 - michael dykman
 - [EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]










Re: CVS-Like System For Database Changes

2007-04-01 Thread Micah Stevens

On 04/01/2007 03:28 PM, Anoop kumar V wrote:
SQL create statements need to be run using a compatible client: 
sqlplus for Oracle, the mysql client for MySQL, etc. Here you just have a 
target as part of your routine build that also takes care of building / 
renewing your database
with (or without) data.

This wouldn't change anyhow, you'd still need a client during the build 
process. You're just automating the control of the client, which IMHO is 
only a good thing in certain circumstances. I agree it can be useful, 
but not in all cases.
Plus a layer of abstraction such as a ant for everything development 
related
allows you to integrate into system integration tools like cruise 
control /
continuum etc.. So you automate most of the stuff: building your 
database,
testing against code etc... The investment is marginal and only during 
the
setup of these tools, but the gains are phenomenal. (just like the 
benefits

realized with setting up cvs and all)
http://www.martinfowler.com/articles/continuousIntegration.html
http://www.zorched.net/2006/08/19/relentless-build-automation/ (scroll 
down

to the database part)


I'll take a look at these articles, thank you.



Not to digress - but I would strongly advise the author to consider svn
instead of cvs (svn: Subversion is the new CVS, built fresh from the bottom
up, keeping in mind the deficiencies of CVS).
http://subversion.tigris.org/


Agreed, my experience with Subversion has been a pleasurable one.

-Micah




Re: unable to recover a mediawiki database

2007-03-31 Thread Micah Stevens
Did you shut down the database when you backed up the data folder, or 
was it running? This sounds like file corruption to me, which can occur from 
grabbing those data files while they're being used by the database.


If the table type is InnoDB, I'm not much use, but if they're MyISAM, 
the command line rebuilding tools are where I'd start.
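
For the MyISAM tables, a sketch of the in-server equivalents of those tools
(the table name is a placeholder; REPAIR TABLE does not apply to InnoDB
tables such as the .ibd file named in the error):

CHECK TABLE some_myisam_table;
REPAIR TABLE some_myisam_table;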


-Micah

On 03/31/2007 06:09 AM, Francois Colonna wrote:

Hello

I am using MySql to store two MediaWiki databases named dftdb and 
framesdb

I am using MySql 4.1.20
I am using MediaWiki 1.9.3
I am using Scientific Linux 4.4

Each week I back up the directory /var/lib/mysql.
On Wednesday I lost the /var partition, the MySQL installation,
Apache and PHP5; MediaWiki was the only one to stay alive.


I reinstalled the same versions of mysql, apache, php5 that were
running before the crash.

I copied back the saved /var/lib/mysql directory.

A third, old wiki running under MediaWiki 1.3 was correctly recovered,
but the two new wikis running under MediaWiki 1.9.3 give the 
following message:



 MediaWiki internal error.

Original exception: exception 'DBQueryError' with message 'A database 
error has occurred
Query: SELECT value,exptime FROM `dft_objectcache` WHERE 
keyname='dftdb-dft_:messages-hash'

Function: MediaWikiBagOStuff::_doquery
Error: 1016 Can't open file: 'dft_objectcache.ibd' (errno: 1) (localhost)
' in /home/dft/public_html/mediawiki/includes/Database.php:708


Under phpMyAdmin these databases have no Type defined for the 33 tables
except 3 which have MyISAM type.
The Collation column says "in use".

Could somebody tell me what happened?
How could a running database become unreadable after reinstalling 
MediaWiki?

Is it possible that this is connected with the InnoDB type, which is
the default used by phpMyAdmin at database creation?

Thanks

Francois Colonna






Re: Sorting Problem

2007-03-26 Thread Micah Stevens
The query using JOIN syntax (you can read about this in the 
documentation) describes the interelationship between the three tables 
you described, in this way you can select information  based on a WHERE 
clause as it relates to the category table, while still ordering by the 
business table.


Read up on joins, that may help.

-Micah

Sid Price wrote:

I am sorry but I don't understand the select query you wrote, could you
perhaps explain how it works or point me to a reference that might help me
understand it?

Many thanks for responding,
Sid.

Sid Price Software Design
http://www.softtools.com
 
-Original Message-
From: Micah Stevens [mailto:[EMAIL PROTECTED] 
Sent: Sunday, March 25, 2007 9:23 PM

To: [EMAIL PROTECTED]; mysql@lists.mysql.com
Subject: Re: Sorting Problem

This doesn't work?

SELECT businesses.name from businesses
left join links using (businessID)
left join categories using (categoryID)
where categories.name = 'something'
order by businesses.name ASC



On 03/25/2007 12:40 PM, Sid Price wrote:
  

Hello,

I have a MySQL database design that provides an online business directory.
It consists of three tables; one with the names and addresses of the
businesses (names), one with the categories of businesses (categories), and
one that has an entry (a category ID and a business ID) for each
business/category pairing (entries); a business may be in multiple
categories.

The problem I am having is that after having queried the entries table for
all the entries for a given category, I query the names table for each entry
to display the business name and address. I can not figure a way to sort the
displayed data by company name. The entries table holds the business ID
and category ID, so sorting the entries that match the category doesn't
help.

Do I need to create a temporary table to hold the business names/addresses
in a category so that I can then sort it?

Thanks,
Sid.

Sid Price Software Design
http://www.softtools.com



Re: Getting SQL errors porting databases between MySQL v4 and v5

2007-03-26 Thread Micah Stevens
`group` is a reserved word, so MySQL thinks you're attempting a GROUP 
BY clause. Put backticks around `group`; you should always quote your 
table and column names.


DROP TABLE IF EXISTS `admission_quotes`;
CREATE TABLE `admission_quotes` (
 `id` int(4) NOT NULL auto_increment,
 `quote` text,
 `author` text,
 `category` text,
 `class` text,
 `active` text,
 `group` varchar(5) default NULL,
 PRIMARY KEY  (id)
) TYPE=MyISAM;


-Micah
On 03/26/2007 10:31 AM, Rob Tanner wrote:

Hi,

I am porting over 6 databases from a MySQL v4 installation to a MySQL 
v5 installation and getting an SQL error in the process.


I am using the following command to dump the data in the v4 installation:

mysqldump -u root --password=secret --add-drop-table --databases db1 
db2 db3 db4 db5 db6 > db.sql


and using the following command to upload the databases on to the v5 
installation:


mysql -u root -p < db.sql

But then I get this error:

ERROR 1064 (42000) at line 140784: You have an error in your SQL
syntax; check the manual that corresponds to your MySQL server
version for the right syntax to use near 'group varchar(5) default
NULL,
  PRIMARY KEY  (id)
) TYPE=MyISAM' at line 8



The whole create table sequence from the db.sql file is:

DROP TABLE IF EXISTS admission_quotes;
CREATE TABLE admission_quotes (
  id int(4) NOT NULL auto_increment,
  quote text,
  author text,
  category text,
  class text,
  active text,
  group varchar(5) default NULL,
  PRIMARY KEY  (id)
) TYPE=MyISAM;


Any idea what the issue is?  Is there something special I need to do 
when porting the databases between MySQL v4 and v5?


Thanks,
Rob


--
Rob Tanner
UNIX Services Manager
Linfield College, McMinnville OR



Re: Sorting Problem

2007-03-25 Thread Micah Stevens

This doesn't work?

SELECT businesses.name from businesses
left join links using (businessID)
left join categories using (categoryID)
where categories.name = 'something'
order by businesses.name ASC



On 03/25/2007 12:40 PM, Sid Price wrote:

Hello,

 


I have a MySQL database design that provides an online business directory.
It consists of three tables; one with the names and addresses of the
businesses (names), one with the categories of businesses (categories), and
one that has an entry (a category ID and a business ID) for each
business/category pairing (entries); a business may be in multiple
categories.

 


The problem I am having is that after having queried the entries table for all
the entries for a given category, I query the names table for each entry to
display the business name and address. I can not figure a way to sort the
displayed data by company name. The entries table holds the business ID
and category ID, so sorting the entries that match the category doesn't help.

 


Do I need to create a temporary table to hold the business names/addresses
in a category so that I can then sort it?

 


Thanks,

Sid.

 


Sid Price Software Design

http://www.softtools.com

 

 



  





Re: max_rows query + SegFaulting at inopportune times

2007-03-22 Thread Micah Stevens
This table size is based on your filesystem limits. This is a limit of 
the OS, not MySQL.


-Micah

On 03/22/2007 01:02 PM, JP Hindin wrote:

Addendum;

On Thu, 22 Mar 2007, JP Hindin wrote:
  

Zero improvement. I used the following CREATE:
MAX_ROWS=10;



At first I thought I had spotted the obvious in the above - the MAX_ROWS I
used is smaller than the Max_data_length that resulted, presumably MySQL
being smarter than I am. So I changed the MAX_ROWS to use larger numbers,
ala:
AVG_ROW_LENGTH=224,
MAX_ROWS=2000;

But after creation the 'SHOW STATUS' gives the following:

 Create_options: max_rows=4294967295 avg_row_length=224

My guess is that MySQL has decided 4294967295 is the maximum table size
and neither ALTERs nor CREATE options are able to change this imposed limit. This
would explain why my ALTERs didn't appear to work, seg fault of the client
aside.

So I suppose the question now is - if MAX_ROWS doesn't increase the table
size, what will? Where is the limit that MySQL is imposing coming from?

Again, many thanks for anyone who can enlighten me as to what MySQL is
thinking.

JP



  


Re: max_rows query + SegFaulting at inopportune times

2007-03-22 Thread Micah Stevens
Oh, I didn't see the first comment. My mistake. It's likely a 32-bit 
integer size limit of some sort then. 32 bits = 4 GB.
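
A sketch of the knobs involved on the MySQL side; the MyISAM data pointer size
is what actually caps Max_data_length, and the values below are only
illustrative:

ALTER TABLE big_table MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 224;
SHOW TABLE STATUS LIKE 'big_table';   -- check Max_data_length afterwards
-- Server-wide default pointer size (bytes) for newly created MyISAM tables:
SET GLOBAL myisam_data_pointer_size = 6;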


-Micah

On 03/22/2007 02:08 PM, JP Hindin wrote:

Micah;

In the first email I mentioned that I had excluded filesystem size limits
by manually producing a 14GB tar file. If only it were that simple :)

On Thu, 22 Mar 2007, Micah Stevens wrote:
  

This table size is based on your filesystem limits. This is a limit of
the OS, not MySQL.

-Micah

On 03/22/2007 01:02 PM, JP Hindin wrote:


Addendum;

On Thu, 22 Mar 2007, JP Hindin wrote:

  

Zero improvement. I used the following CREATE:
MAX_ROWS=10;



At first I thought I had spotted the obvious in the above - the MAX_ROWS I
used is smaller than the Max_data_length that resulted, presumably MySQL
being smarter than I am. So I changed the MAX_ROWS to use larger numbers,
ala:
AVG_ROW_LENGTH=224,
MAX_ROWS=2000;

But after creation the 'SHOW STATUS' gives the following:

 Create_options: max_rows=4294967295 avg_row_length=224

My guess is that MySQL has decided 4294967295 is the maximum table size
and ALTERs nor CREATE options are able to change this imposed limit. This
would explain why my ALTERs didn't appear to work, seg fault of the client
aside.

So I suppose the question now is - if MAX_ROWS doesn't increase the table
size, what will? Where is the limit that MySQL is imposing coming from?

Again, many thanks for anyone who can enlighten me as to what MySQL is
thinking.

JP