Getting the last item in a group by query?

2008-03-04 Thread Esbach, Brandon

There is likely a blindingly obvious solution to this: I need to do a
GROUP BY in my query and get the latest row, based on a date field in
the same table.
Is this even possible, and are there any tips on how to do it?

Example of data and query:

---
Table:
=(pseudo table based on original, trimmed)=
'id', 'bigint(20)', '', 'PRI', '', 'auto_increment'
'date', 'datetime', '', '', '0000-00-00 00:00:00', ''
'serial_number', 'varchar(25)', '', '', '', ''
'pass', 'varchar(6)', '', '', 'false', ''
=
SELECT
  t.pass, t.id
FROM
  theTable t
GROUP BY
  t.serial_number
---
I have tried putting a sort into the query, like so:

SELECT
  t.pass, t.id
FROM
  theTable t
GROUP BY
  t.serial_number
ORDER BY
  t.date DESC

which naturally only sorts the resulting rows by date.


RE: Getting the last item in a group by query?

2008-03-04 Thread Esbach, Brandon
Hmm, I didn't notice that replies are sent to personal email addresses :o!
I'll look down that avenue once I've completed the MySQL version upgrade
(MySQL 4 on my test bench; the MySQL 5 upgrade is in progress on the
production server).

Thanks again!

-Original Message-
From: Daniel Brown [mailto:[EMAIL PROTECTED] 
Sent: 04 March 2008 16:56
To: Esbach, Brandon; MySQL User Group
Subject: Re: Getting the last item in a group by query?

On Tue, Mar 4, 2008 at 11:24 AM, Esbach, Brandon
[EMAIL PROTECTED] wrote:
 Thanks for the reply,

 Sorry, should have been more specific on that :).
 I need to access the last record by date for each serial_number in
 the table (not just the latest record)

Okay, this is untested, so I don't know if it will work
Out-Of-The-Box[tm] or not, but it should at least lead you in the right
direction.  Plus, it's back on-list now, so that others can read the
results in the archives when they search on the web.  ;-)

SELECT t.pass, t.id
FROM theTable t
WHERE t.date = (SELECT MAX(date)
                FROM theTable
                WHERE serial_number = t.serial_number);
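The greatest-row-per-group pattern discussed in this thread can be sketched end-to-end as follows. This is an illustrative demo only: it uses SQLite (via Python's sqlite3 module) so it is self-contained, and the table name, column names, and sample serial numbers follow the original post; MySQL 4.1 and later accept the same correlated subquery.

```python
import sqlite3

# In-memory table mirroring the thread's schema (trimmed).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE theTable (
        id            INTEGER PRIMARY KEY AUTOINCREMENT,
        date          TEXT NOT NULL,
        serial_number TEXT NOT NULL,
        pass          TEXT NOT NULL DEFAULT 'false'
    )
""")
rows = [
    ("2008-03-01 09:00:00", "SN-1", "false"),
    ("2008-03-03 09:00:00", "SN-1", "true"),   # latest row for SN-1
    ("2008-03-02 09:00:00", "SN-2", "true"),
]
conn.executemany(
    "INSERT INTO theTable (date, serial_number, pass) VALUES (?, ?, ?)", rows
)

# One row per serial_number: the row whose date is the maximum for its group.
latest = conn.execute("""
    SELECT t.serial_number, t.pass, t.id
    FROM theTable t
    WHERE t.date = (SELECT MAX(date)
                    FROM theTable
                    WHERE serial_number = t.serial_number)
    ORDER BY t.serial_number
""").fetchall()
print(latest)  # [('SN-1', 'true', 2), ('SN-2', 'true', 3)]
```

Note that a plain GROUP BY with ORDER BY (as in the original question) picks an arbitrary row per group and only sorts the final result; the correlated MAX(date) subquery is what selects the latest row within each group.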

--
/Dan

Daniel P. Brown
Senior Unix Geek
<?php while(1) { $me = $mind--; sleep(86400); } ?>

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Backup table structure, not data

2008-02-18 Thread Esbach, Brandon
Is there any way to back up a complete database structure
(tables/fields/indexes/etc.) without the data?  Or even to get a
creation script per table?

At present the only way I can think of is to restore a backup to another
server and just delete the records (a legacy database with data hitting
over 12GB, so it might take some time) - but there's got to be an easier
way to do it...


RE: Backup table structure, not data

2008-02-18 Thread Esbach, Brandon
I ended up using mysqldump; it worked well enough for what I was doing.
I have to admit, I'm really surprised there's no way to do this directly
in the admin GUI that comes with MySQL, though :).

Thanks all for the replies!


-Original Message-
From: Richard Heyes [mailto:[EMAIL PROTECTED] 
Sent: 18 February 2008 11:24
To: Esbach, Brandon
Cc: MySQL User Group
Subject: Re: Backup table structure, not data

 Is there any way to back up a complete database structure
 (tables/fields/indexes/etc.) without the data?  Or even to get a
 creation script per table?

 At present the only way I can think of is to restore a backup to
 another server and just delete the records (a legacy database with
 data hitting over 12GB, so it might take some time) - but there's got
 to be an easier way to do it...

mysqldump has a --no-data option. Try man mysqldump (assuming your
MySQL client machine is Unix-based).
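For reference, a minimal invocation of the --no-data approach looks like this. The database name "mydb" and the user name are placeholders, not values from the thread:

```shell
# Dump schema only (CREATE TABLE statements, indexes), no row data.
mysqldump --no-data -u username -p mydb > mydb-schema.sql

# Recreate the structure in an empty database elsewhere:
mysql -u username -p newdb < mydb-schema.sql
```

Because no rows are written, this stays fast even for a 12GB database; the dump size is proportional to the schema, not the data.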

--
Richard Heyes
http://www.websupportsolutions.co.uk

Knowledge Base and Helpdesk software hosted for you - no installation,
no maintenance, new features automatic and free




RE: Two/more separately unique columns in a table?

2007-09-04 Thread Esbach, Brandon
Baron,

Just feedback - worked perfectly.
Thanks again! 




Two/more separately unique columns in a table?

2007-08-31 Thread Esbach, Brandon
Hi all,

I'm guessing this is a fairly common question, but I've drawn a blank
so far with every workaround I've tried.
Here's the rough scenario:
We have data streaming in constantly from various offsite locations.
This comprises several details: a session, a unit of that session, and
a measurement of that unit.
Data is captured and transferred on a timed process, so data is often
repeated in the transfer packets.  This repetition is unavoidable, as
the capture software dumps this data for each measurement and each
unit of the session it's working on.

Due to the volume, a bulk insert is done using a multi-row VALUES()
list in an INSERT statement.

Unfortunately, there are often repeats of session, unit, or
measurement (as there could be a new measurement or unit for the
capture that needs to be associated with the session).
The problem I've been experiencing is fairly straightforward (I hope):
I have two, sometimes three columns in any given record that need to
always be unique.  This comprises an ID (the key column) and one
(depending on the table, sometimes two) GUIDs which should be unique
at all times for the entire table.

I've tried setting the additional columns to be part of the primary
key (which in turn sets them up as unique when viewed under "Schema
Indices" in the MySQL Administrator tool); however, this does not give
an error (or fail silently with INSERT IGNORE) when I insert a
duplicate - MySQL seems quite happy to add the duplicate record.

At the moment, I'm running a process at startup which simply gathers
all GUIDs from the database and compares them as it runs through the
data (then adds new ones as it goes)... This is hardly reliable, and
it also means starting the service would take several hours just to
gather the existing GUIDs at current data levels... almost frightening
to think what will end up happening as the data expands.

I'm hoping that I'm just missing something really daft and that there
is a much easier way to ensure several columns are always unique in my
table, while still benefiting from the bulk insert?


Details on installation used:
MySQL version:   v 4.1.14 (scheduled for upgrade in 2008)
OS: Windows Server 2003 (std edition)
Memory: 2GB




RE: Two/more separately unique columns in a table?

2007-08-31 Thread Esbach, Brandon
Hi, and thanks Baron;
I should have been a bit clearer on the bulk insert - I am using a bulk
insert statement, as you assumed.
I'll put this onto the db server and check; I think that's a more
future-proof method.  Will this affect any of my linked tables (linked
via the row's primary key (id))?

-Original Message-
From: Baron Schwartz [mailto:[EMAIL PROTECTED] 
Sent: 31 August 2007 14:28
To: Esbach, Brandon
Cc: MySQL User Group
Subject: Re: Two/more seperately unique columns in a table?

Hi,

Esbach, Brandon wrote:
 Hi all,

 I'm guessing this is a fairly common question, but I've drawn a blank
 so far with every workaround I've tried.
 Here's the rough scenario:
 We have data streaming in constantly from various offsite locations.
 This comprises several details: a session, a unit of that session,
 and a measurement of that unit.
 Data is captured and transferred on a timed process, so data is often
 repeated in the transfer packets.  This repetition is unavoidable, as
 the capture software dumps this data for each measurement and each
 unit of the session it's working on.

 Due to the volume, a bulk insert is done using a multi-row VALUES()
 list in an INSERT statement.

 Unfortunately, there are often repeats of session, unit, or
 measurement (as there could be a new measurement or unit for the
 capture that needs to be associated with the session).
 The problem I've been experiencing is fairly straightforward (I
 hope): I have two, sometimes three columns in any given record that
 need to always be unique.  This comprises an ID (the key column) and
 one (depending on the table, sometimes two) GUIDs which should be
 unique at all times for the entire table.

 I've tried setting the additional columns to be part of the primary
 key (which in turn sets them up as unique when viewed under "Schema
 Indices" in the MySQL Administrator tool); however, this does not
 give an error (or fail silently with INSERT IGNORE) when I insert a
 duplicate - MySQL seems quite happy to add the duplicate record.

 At the moment, I'm running a process at startup which simply gathers
 all GUIDs from the database and compares them as it runs through the
 data (then adds new ones as it goes)... This is hardly reliable, and
 it also means starting the service would take several hours just to
 gather the existing GUIDs at current data levels... almost
 frightening to think what will end up happening as the data expands.

It sounds like you need a separate primary key and a unique index:

create table t (
  id int not null,
  guid char(32) not null,
  unique key (guid),
  primary key (id)
);

Then you can do REPLACE or IGNORE with LOAD DATA INFILE.  I can't
tell if you are actually using LOAD DATA INFILE or if your bulk load
is a big INSERT statement.  If you're using an INSERT with multiple
VALUES() sections, you can also use ON DUPLICATE KEY UPDATE.

I agree the current strategy won't hold up well over time.

Baron
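The separate-primary-key-plus-unique-index layout Baron suggests can be sketched as follows. This is a self-contained illustration using SQLite (via Python's sqlite3 module), where INSERT OR IGNORE plays the role of MySQL's INSERT IGNORE; the table name and GUID values are placeholders. MySQL additionally offers INSERT ... ON DUPLICATE KEY UPDATE when the duplicate row should update the existing one instead of being skipped.

```python
import sqlite3

# Separate primary key (id) plus a unique index on guid: the database
# itself now rejects duplicate GUIDs, so no application-side GUID cache
# is needed.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE t (
        id   INTEGER NOT NULL PRIMARY KEY,
        guid CHAR(32) NOT NULL UNIQUE
    )
""")

batch = [
    (1, "aaaa"),
    (2, "bbbb"),
    (3, "aaaa"),   # duplicate guid: silently skipped by OR IGNORE
]
conn.executemany("INSERT OR IGNORE INTO t (id, guid) VALUES (?, ?)", batch)

kept = conn.execute("SELECT id, guid FROM t ORDER BY id").fetchall()
print(kept)  # [(1, 'aaaa'), (2, 'bbbb')]
```

Because the uniqueness check happens inside the bulk statement, this keeps the multi-row insert's performance while making the duplicate handling reliable regardless of data volume.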
