On May 18, 2015 at 10:37 PM, Steve Quezadas wrote:
> I want to make sure my caching system is working properly and I want
> to make sure my mysql server isn't being held up by repetitive queries
> (i.e. like the side "products table" that appears on every web page).
> I'm pretty sure I cached the site pretty well, but want to make sure
> that I didn't miss anything.
>
> Is there some sort of tool
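One quick sanity check uses the server's own counters plus a short stint of the general log (a sketch; it assumes a 5.x server with the query cache compiled in):

```sql
-- Compare cache hits against executed SELECTs for a rough hit rate:
SHOW GLOBAL STATUS LIKE 'Qcache_hits';
SHOW GLOBAL STATUS LIKE 'Com_select';

-- Or log every statement for a few minutes and count duplicates offline
-- (the general log grows fast; switch it off again afterwards):
SET GLOBAL general_log = 'ON';
-- ... let normal traffic run briefly ...
SET GLOBAL general_log = 'OFF';
```

If the same "products table" SELECT shows up thousands of times in the log, it is not being served from the application cache.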
Hello Surya,
Part of the problem may be that you are so focused on the details that
you might have lost sight of the purpose.
On 7/12/2014 8:24 AM, Surya Savarika wrote:
> Hi,
> I have two query series that I wonder whether they can be compacted
> into a single query:
> FIRST QUERY SERIES
> cursor.execute("""select d.ID, d.Name, b.SupersetID from
> books_data as d join books as b on d.ID=b.BooksDataID2
> where b.BooksDataID!=b.BooksDataID2 and b.Religio
Hi list,
I have some problems with INSERT INTO and UPDATE queries on a big table.
Let me put the code and explain it ...
I have copied the create code of the table. This table has more than
1500 rows.
Create Table: CREATE TABLE `radacct` (
`RadAcctId` bigint(21) NOT NULL AUTO_INCREMENT
I was about to comment that it looks like queries generated by an ORM or
connector. It looks like from your version string you have a MySQL
Enterprise edition; may I suggest creating a ticket with support?
Regarding your most recent reply:
> All the "SHOW FULL COLUMN" queries that we do on the respective tables
> are very small tables. They hardly cross 50 rows. Hence that is the
> reason whenever these queries are made I can see high CPU usage in
> %user_time. If it were very large tables then the CPU would be spending
> a lot

On 6/3/2014 4:47 PM, Johan De Meersman wrote:
> - Original Message -
> From: "Johan De Meersman"
> Subject: Re: SHOW FULL COLUMNS QUERIES hogging my CPU
>
> In any case, this is nothing that can be fixed on the database level.
I may or may not have to swallow that :-p
I've been hammering a munin plugin th

- Original Message -
> From: "Jatin Davey"
> Subject: Re: SHOW FULL COLUMNS QUERIES hogging my CPU
>
> Certain part of our code uses DataNucleus while other parts of the code
A "data persistence product"... there's your problem.
Persisting objec
The advice to 'avoid LIKE in general' is a little strong. LIKE is
very useful and does not always cause inefficient queries, although
the possibility is there.
However, there is one form which must be avoided at all costs: the one
where the glob-text matcher is the first character in the pattern.
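The point can be illustrated side by side (table and column names are invented for the example):

```sql
-- An index on `name` can be used: the fixed prefix 'wid' bounds the scan.
SELECT id FROM products WHERE name LIKE 'wid%';

-- The index cannot be used: a leading wildcard forces the server to
-- examine every row, which is the form to avoid.
SELECT id FROM products WHERE name LIKE '%get';
```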
On 6/2/2014 7:18 PM, Reindl Harald wrote:
On 02.06.2014 15:35, Jatin Davey wrote:
> Hi All
> I am no expert with mysql and databases. Hence seeking out some help on
> this forum.
>
> Basically I got a query dump of my application during its operation. I
> had collected the queries for about 4 hours. Ran some scripts on the
> number of queries being sent to the databases.
>
> The query file was a whopping 4 GB in size. Upon analyzing the queries
> I found that there were
> -Original Message-
> From: Vikas Shukla [mailto:myfriendvi...@gmail.com]
> Sent: Thursday, May 30, 2013 7:19 PM
> To: Robinson, Eric; mysql@lists.mysql.com
> Subject: RE: Are There Slow Queries that Don't Show in the
> Slow Query Logs?
>
> Hi,
>
>
seconds to execute.
Sent from my Windows Phone
From: Robinson, Eric
Sent: 31-05-2013 03:48
To: mysql@lists.mysql.com
Subject: Are There Slow Queries that Don't Show in the Slow Query Logs?
As everyone knows, with MyISAM, queries and inserts can lock tables
and force other queries to wait in a queue.
Richard, there is more to a system than number of queries.
Please post these in a new thread on http://forums.mysql.com/list.php?24 :
SHOW GLOBAL STATUS;
SHOW VARIABLES;
RAM size
I will do some analysis and provide my opinion.
> -Original Message-
> From: Manuel Aro
2013/4/4
> 2013/04/04 22:40 +0200, Manuel Arostegui
>> You can start with show innodb status;
>
> It is now
> show engine innodb status
Yep, sorry, not used to it just yet :-)
--
Manuel Aróstegui
Systems Team
tuenti.com
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql
2013/4/4 Richard Reina
> I am looking to spec out hardware for a new database server. I figured
> a good starting point would be to find out how much usage my current
> server is getting. It's just a local machine that runs mysql and is
> queried by a few users here in the office. Is there a way that mysql
> can tell me info about it
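For a rough load figure, the server's own counters are enough (a sketch; `Questions` counts all statements received since startup):

```sql
SHOW GLOBAL STATUS LIKE 'Questions';  -- statements received since startup
SHOW GLOBAL STATUS LIKE 'Uptime';     -- seconds since startup
-- Questions / Uptime gives an average queries-per-second estimate,
-- a reasonable starting point for sizing new hardware.
```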
STATUS
EXPLAIN SELECT (with substitutions filled in)
> -Original Message-
> From: Andrés Tello [mailto:mr.crip...@gmail.com]
> Sent: Tuesday, October 09, 2012 7:04 AM
> To: Adrián Espinosa Moreno
> Cc: mysql@lists.mysql.com
> Subject: Re: Slow queries / inserts InnoDB
>
> To obtain some data. During the process I query another table around
> 10 times per ISN. Here is the problem. If I have a few files to
> process (around 3000-4000 lines in total, a small array) these steps
> work fine, good speed. But if I have big files or a lot of files (more
> than 10000 lines in total, a big array), these steps are incredibly
> slow. Queries and inserts are too slow. Meaning, one-two inserts per
> second, while the other cas
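One common cause of "one-two inserts per second" with InnoDB is committing every row individually; batching the rows of a file into one transaction is the first thing to try (a sketch; the table name is invented):

```sql
START TRANSACTION;
INSERT INTO isn_data (isn, value) VALUES (1001, 'a');
INSERT INTO isn_data (isn, value) VALUES (1002, 'b');
-- ... the rest of the file's rows ...
COMMIT;  -- one log flush for the whole batch instead of one per row
```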
2012/06/15 18:14 +0900, Tsubasa Tanaka
> try to use `LOAD DATA INFILE' to import from CSV file.
> http://dev.mysql.com/doc/refman/5.5/en/load-data.html
"Try" is the operative word: MySQL's character format is _like_ CSV, but not
the same. The treatment of NULL is doubtless the biggest difference.
Database when lot of insert / update queries to execute
>
> hi,
> I am biased on mysql, and hence I am asking this on mysql forum first.
> I am designing a solution which will need me to import from CSV; I am
> using my JAVA code to parse. The CSV file has 500K rows, and I need to
> do it thrice an hour, for 10 hours a day.
> The Queries will mainly be update but
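For a 500K-row CSV three times an hour, LOAD DATA INFILE is usually much faster than row-by-row statements issued from application code (a sketch; the file path, table, and column names are illustrative):

```sql
LOAD DATA INFILE '/tmp/batch.csv'
INTO TABLE staging
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES                   -- skip the header row
(col_a, @raw_b)
SET col_b = NULLIF(@raw_b, '');  -- empty CSV fields become NULL
```

As noted elsewhere in the thread, MySQL's text format is only CSV-like: literal \N means NULL, so empty fields need explicit handling as above.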
> I've got some semi-general questions on the topics in the title. What I'm
> looking for is more in the line of theory than query specifics. I am but a
> poor peasant boy.
>
> What I have is an application that makes heavy use of views. If I
> understand views correctly (which I might not), views are representations
> of queries themselves. The guy who wrote the app chose to do updates and
> joins against the views instead of against the underlying tables themselves.
I've tuned to meet the gross memory requirements and mysqltuner.pl is saying
that 45% of the joins are withou
I need two fields from two different tables. I could either run two
queries, or a single INNER JOIN query:
$r1=mysql_query("SELECT fruit FROM fruits WHERE userid = 1");
$r2=mysql_query("SELECT beer FROM beers WHERE userid = 1");
--or--
$r=mysql_query("SELECT
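The single-statement version of those two lookups might look like this (a sketch; it assumes both tables share the userid column):

```sql
SELECT f.fruit, b.beer
FROM fruits AS f
JOIN beers AS b ON b.userid = f.userid
WHERE f.userid = 1;
```

Note the behaviour differs at the edges: a plain JOIN returns no row at all if either table has no entry for userid 1, whereas the two separate queries (or a LEFT JOIN) would still return the half that exists.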
yer2='13213' ) group by variation limit 3)
UNION
(SELECT count(gamename) as gname ,variation from mp_gamerecord
where (gmtdate > date_sub(current_timestamp(),interval 90 day))
and (player1='13213' or player2='13213' or player3='13213' or player4=
I was wondering if anyone could point out potential problems with the
following query, or if there was a better alternative.
From a list of users I want to return all who don't have all the specified
user_profile options, or those who do not have at least one preference set to
1. The following quer
At 05:39 PM 2/13/2011, Andre Polykanine wrote:
> Hi all,
> Hope this question is appropriate here :-).
> I've got 4 queries:
> $q1=mysql_query("SELECT * FROM `CandidateQuestions` WHERE
> `Category`='1' ORDER BY RAND() LIMIT 1");
> $q2=mysql_query("SELECT * FROM `CandidateQuestions` WHERE
> `Categor
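The four statements can be collapsed into one round trip with UNION ALL (a sketch; it assumes the four categories are simply '1' through '4'):

```sql
(SELECT * FROM `CandidateQuestions` WHERE `Category`='1' ORDER BY RAND() LIMIT 1)
UNION ALL
(SELECT * FROM `CandidateQuestions` WHERE `Category`='2' ORDER BY RAND() LIMIT 1)
UNION ALL
(SELECT * FROM `CandidateQuestions` WHERE `Category`='3' ORDER BY RAND() LIMIT 1)
UNION ALL
(SELECT * FROM `CandidateQuestions` WHERE `Category`='4' ORDER BY RAND() LIMIT 1);
```

The parentheses matter: they scope each ORDER BY ... LIMIT to its own branch rather than to the whole union.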
If you are selecting records within a certain time range that is a subset of
the entire set of data, then indexes which use the timestamp column will be
fine.
More generally: create appropriate indexes to optimize queries.
Although typically, you should design the database to be "co
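The advice above amounts to something like this (the schema is invented for the example):

```sql
-- Index the timestamp column once:
CREATE INDEX idx_events_ts ON events (ts);

-- Range predicates over a small slice of the data can now use the
-- index instead of scanning the whole table:
SELECT * FROM events
WHERE ts >= NOW() - INTERVAL 7 DAY;
```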
>> But won't that take just as long as any other queries? Or will it be
>> sped up because all the matching records would be adjacent to each other
>> -- like all at the end?
You can order the result data set by timestamp in descending order, so the
latest will come up first,
> Also, if you're parsing files into tab delimited format, you don't need to
> write
[mailto:h...@halblog.com]
Sent: Monday, November 08, 2010 10:18 AM
To: mysql@lists.mysql.com
Subject: Running Queries When INSERTing Data?
I'm redesigning some software that's been in use since 2002. I'll be working
with databases that will start small and grow along the way.
In
Their individual servers will still get the big tab-delimited
file that will still be INSERTed into their DB line by line. But I'd like to
be able to select from the new data as it comes in, once it's been given a new
number in the Idx field.
Is there any way to run a row of data th
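If "running a row through" something means reacting to each row as it is inserted, an AFTER INSERT trigger is one option (a sketch; only the Idx column comes from the thread, the table names are invented):

```sql
CREATE TRIGGER note_new_row
AFTER INSERT ON incoming
FOR EACH ROW
  -- record each freshly numbered row so other code can pick it up
  INSERT INTO fresh_rows (idx, seen_at) VALUES (NEW.Idx, NOW());
```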
On Thu, Oct 14, 2010 at 9:19 AM, monloi perez wrote:
> Does this happen if your table is InnoDB?
That depends on the type of lock. If no lock type is specified, InnoDB will
prefer row locks, while MyISAM will do table locks.
That may help, unless all your queries are trying to access the same rows
anyway :-)
Does this happen if your table is InnoDB?
Thanks all,
Mon
From: Claudio Nanni
To: monloi perez
Cc: mysql mailing list
Sent: Thu, October 14, 2010 3:16:38 PM
Subject: Re: How to kill locked queries
Hi Mon,
Killing locked queries is not the first step in database tuning.
Queries locked for a long time usually depend on slow updates that lock
other updates or selects; this happens on MyISAM (or table-level locking
engines). If you are really sure you want and can without problems kill the

The root cause is another query that has tables locked that your "locked"
queries want. Behind that may be, for example, an inefficient but
often-executed query, high I/O concurrency that has a cumulative slowing
effect, or maybe simply a long-running update that might be better schedu
All,
Is there a mysql configuration to kill queries that have been locked for
quite some time? If there's none, what is an alternative approach to kill
these locked queries, and what is the root cause of it?
Thanks,
Mon
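MySQL 5.x has no setting that kills a query automatically after it has waited on a lock for some time; the usual manual approach is to find the offender and kill it by thread id (the id below is illustrative):

```sql
SHOW FULL PROCESSLIST;  -- look for long "Locked" entries and the query holding the lock
KILL 12345;             -- kill by the Id column from the processlist
```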
Raj Shekhar writes:
> One option here might be to use "mysql proxy" as a man-in-the-middle and
> filter out unwanted queries. You can find an example on how to do this
> with mysql proxy on the mysql forge wiki
> <http://forge.mysql.com/tools/tool.php?id=108> (more stuff
> http://forge.mysql.com/tools/search.php?t=tag&k=mysq
This seems more or less the same as what I'm doing now with php.
The same question applies there - what would you look for in your
filter?
--
rather than having
to get back much more and then still have to compute the results
they want.
So far I don't see that my query allowing <, >,
etc. is worse in any way than any of the other suggestions, and
I see ways in which it's better than all of them.
So far
> manipulate. Why not provide daily
> -Original Message-
> From: Don Cohen [mailto:don-mysq...@isis.cs3-inc.com]
> Sent: Wednesday, June 16, 2010 2:48 PM
> To: Daevid Vincent
> Cc: mysql@lists.mysql.com
> Subject: RE: opening a server to generalized queries but not "too" far
>
> Daev
tions, filters.
> If someone is technically literate enough to format SQL statements, then
> just give them a read-only account to the mysql (or view) directly. Let
> them use their own GUI tool like SQLYog or whatever -- it will be far more
> robust than anything you can write yourself.
In this case there may be a lot of users but the queries are likely to
be written by a small number.
> -Original Message-
> From: Don Cohen [mailto:don-mysq...@isis.cs3-inc.com]
>
> The http request I have in mind will be something like
> https://server.foo.com?user=john&password=wxyz&;...
> and the resulting query something like
> select ... from table where user=john and ...
> (I w
MySQL doesn't have row level permissions, but this is what VIEWS are for. If
you only want access to specific rows, create a view with that subset of
data. You can create a function (privilege bound) to create the view to make
this more dynamic.
If you want direct access to the database, then you
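The view-based restriction described above can be sketched like this (the database, table, and account names are made up; only the col1=value1 condition comes from the thread):

```sql
-- Expose only the permitted rows through a view:
CREATE VIEW mydb.user_rows AS
  SELECT * FROM mydb.data WHERE col1 = 'value1';

-- The account is granted access to the view, never to the base table:
GRANT SELECT ON mydb.user_rows TO 'webuser'@'localhost';
```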
Adam Alkins writes:
> Sounds like you just want to GRANT access to specific tables (and with
> limited commands), which is exactly what MySQL's privilege system does.
How about this part?
> > Finally, suppose I want to limit access to the table to the rows
> > where col1=value1. If I just add
I'm also interested in answers for other RDBMS's,
and I imagine that details of implementation may matter, but my
immediate primary interest is mysql used from php.
I want to allow web users to make a very wide variety of queries, but
limited to queries (no updates, redefinitions, etc), and limited to a
fixed set of tables - let's suppose one table with no joins, and
perhaps a few
Hello Stephen,
Did you try this?
mysql> show global variables like '%log_output%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | FILE  |
+---------------+-------+
If only the log_output is FILE, then the slow queries will get logg
*snip*
[mysqld]
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
*snip*
restarted mysqld - no log.
Created the file in /var/log/mysql/
*snip*
-rwxr--r-- 1 mysql mysql 0 May 7 10:33 mysql-slow.log
*snip*
Still not writing to the file.
I've read
http://dev.mysql.com/doc

At 12:04 PM 5/7/2010, Stephen Sunderlin wrote:
> Can't get slow queries to log. Does this not work in MyISAM?
Sure it does. Have you tried:
slow_query_time = 1
Mike
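On 5.1 and later the slow log can also be switched on at runtime, which avoids the restart-and-hope cycle above (the file path is illustrative):

```sql
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
SET GLOBAL long_query_time = 1;

-- log_output must include FILE for anything to reach that file:
SHOW GLOBAL VARIABLES LIKE 'log_output';
```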
What queries, precisely, I can't tell you, but you can have a good idea
about how your cache performs using the stuff in "show global variables;"
and the online manuals about what it all means :)
Look at 'show global variables like %qcache%', for a start.
On Fri, May 7
Can somebody help me with this?
Thanks!
On Thu, May 6, 2010 at 10:39 AM, Darvin Denmian wrote:
> Hello,
>
> I've activated the query cache in MySQL with the variable
> "query_cache_limit" set to 1 MB.
> My question is:
>
> How do I know which queries weren't cached because they exceeded the
> value of "query_cache_limit"?
>
> **Sorry for my Brazilian English :(
> Thanks!
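As far as I know there is no status variable that directly counts results rejected for exceeding query_cache_limit, but these counters give a partial picture (a sketch):

```sql
-- Statements not cached for any reason (uncacheable, too large, etc.):
SHOW GLOBAL STATUS LIKE 'Qcache_not_cached';
-- Entries evicted because the cache ran out of memory:
SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';
```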
Hi Ramesh,
To my knowledge we can only enable the slow query log globally.
Regards,
Aravinth
On Mon, Apr 12, 2010 at 4:01 PM, RaMeSh wrote:
> Hi All
>
> How can I get MySQL to only 'log-slow-queries' on specific databases
> instead of globally?
>
> --
> Ramesh
ups that object is linked to. There is
one exception to that rule, and that is, if an object isn't linked to
any groups then it doesn't matter what groups the User is in. Currently
I use two queries to implement these rules. If the Count on the first
query is 0, then access is g
pes try
> --single-transaction since it avoids read locks (according to the man
> pages).
That's great, however this type of result was not being exhibited
some months ago. I know the database has grown. It has also happened
that some big queries done against it also cause the same issue. I
thin

On Mon, March 22, 2010 11:08, Andres Salazar wrote:
> Hello,
>
> Every time I run a mysqldump (mysql-server-5.0.77) all the other
> legitimate queries that are occurring at that time pretty much sleep
> and build up in the processlist until I either stop the dump or wait
> for it to finish. The moment I do either one I can have about 8-15
> queries waiting; they all
2010/3/19 Olav Mørkrid
> Dear MySQL forum.
>
> I have performance problems when using "left join x" combined with
> "where x.y is null", particularly when combining three tables this
> way.
>
> Please contact me by e-mail if you are familiar with these issues and
> know how to eliminate slow queries.
> I wou
With a left join, particularly when you're using *is (not) null*, you can't
use index selecting on y
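The pattern under discussion is the classic anti-join; written out (table and column names are invented), testing the joined table's join column for NULL lets an index on that column serve the join itself:

```sql
SELECT a.*
FROM a
LEFT JOIN x ON x.a_id = a.id
WHERE x.a_id IS NULL;  -- keeps only rows of a with no match in x
```

The IS NULL filter is applied after the join, so with three chained LEFT JOINs each intermediate result must still be materialised, which is where such queries tend to get slow.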
The slow query log will also have SQL statements which are not using
indexes (doing a full table scan).
Maybe those queries with "ZERO SECOND" run on small tables without using
indexes.
regards
anandkl
On Tue, Feb 23, 2010 at 2:02 PM, Machiel Richards wrote:
> Hi All
is enabled.
I have fixed this now but need to wait for a gap to reboot
again to have it set properly. (Have to live with the filename 1 for the
time being.)
I did however find something interesting while looking at the queries
being logged.
> From: machi...@rdc.co.za
> To: mysql@lists.mysql.com
> Subject: slow queries not being logged
> Date: Tue, 23 Feb 2010 09:59:13 +0200
>
> Good day all
>
> I hope you can assist me with this one...
million
(from 160 million queries).
We wanted to look at these queries to see if they can be
optimised to reduce the amount, and went through the whole database restart
routine to enable the slow query log again (they are running version 5.0 so
had to restart
Andy,
On Tue, Feb 9, 2010 at 10:27 AM, andy knasinski wrote:
> I've used the general and slow query log in the past, but I am trying to
> track down some queries from a compiled app that never seem to be hitting
> the DB server.
>
> My guess is that the SQL syntax is bad an

I'm not positive if the general log captures all invalid queries, but
it does capture at least some.
I was asked the same question a few months back and checked to make
sure that manually issued invalid queries are logged (IIRC).
Could it be that the queries are never even making it t

Unfortunately, I'm using a commercial application and trying to debug
why some data does and does not get updated properly.
On Feb 9, 2010, at 2:57 PM, mos wrote:
> I do something like that in my compiled application. All SQL queries
> are sent to a single procedure and executed

I've used the general and slow query log in the past, but I am trying
to track down some queries from a compiled app that never seem to be
hitting the DB server.
My guess is that the SQL syntax is bad and never gets executed, but I
don't see any related queries in the general
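Since the general log records statements as received, before execution, it will show SQL that later fails to parse; switching it on briefly answers whether the app's statements reach the server at all (5.1+ runtime switches; the path is illustrative):

```sql
SET GLOBAL general_log_file = '/tmp/all-statements.log';
SET GLOBAL general_log = 'ON';
-- ... reproduce the missing update from the app, then:
SET GLOBAL general_log = 'OFF';
```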
MySQL University: Optimizing Queries with EXPLAIN
http://forge.mysql.com/wiki/Optimizing_Queries_with_Explain
This Thursday (February 4th, 14:00 UTC), Morgan Tocker will talk about
Optimizing Queries with Explain. Morgan was a technical instructor at
MySQL and works for Percona today.
For MySQL
On 17 Nov 09, at 10:41, Peter Brawley wrote:
>> I often need a pattern where one record refers to the one "before"
>> it, based on the order of some field.
> Some ideas under "Sequences" at
> http://www.artfulsoftware.com/infotree/queries.php
Thanks, Peter! What a marvellous resource!
depends on the payment, interest rate, but also the previous
record's principal. Someone makes a payment on a loan, which needs to
be entered along with the declining balance, but that depends on the
balance of the previous record.
Quite often, I see this pattern in time series data. Data is logged
and time-stamped, and many queries depend on the difference in
time-stamps between two consecutive records. For example, milk
production records: with milk goats, if milking is early or late, the
amount of milk is lower o
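One way to write the "previous record" pattern is a self-join on the greatest earlier timestamp (a sketch; the milking table and columns are invented for the example):

```sql
SELECT cur.ts,
       cur.amount,
       TIMESTAMPDIFF(MINUTE, prev.ts, cur.ts) AS minutes_since_prev
FROM milkings AS cur
JOIN milkings AS prev
  ON prev.ts = (SELECT MAX(p.ts) FROM milkings AS p
                WHERE p.ts < cur.ts);  -- the row immediately before cur
```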
Can't see anything relevant in the manual.
Strange(?)
Syd

Sorry, can't remember what version you said you were using; if you have a
version prior to 5.1.29, to log all queries enter the following in the
[mysqld] section of your my.cnf:
log = /path/to/logfile/filename.log
Remembering that the path you specify must be writeable by the server.
If yo

OK, thanks to some help from this list I now have a blank my.cnf file in /etc
and I want to set up logging of all SQL queries.
So I have tried:
SET GLOBAL general_log = 'ON';
and/or putting (only) /var/log/mysql/mysql.log
in my.cnf and doing a restart via /etc/init.d
(have a pid fi
Y `dealer`.`FIRMID`
>> )
On Mon, Nov 9, 2009 at 10:20 PM, Robin Brady wrote:
>> I am very new to MySQL and trying to use Navicat Report Builder to format
>> a renewal invoice to send to our registrants. The renewal
The SUM function is fairly basic and can only SUM actual fields
in the database. If I can format a query to compute the sum and create a
data view in the report builder I can put the total for each firm on the
report.
I have 2 separate queries that will compute the total renewal fees for
branches and
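The per-firm total could be computed server-side and exposed to the report builder as a view (a sketch; the fee table and column are assumptions, only FIRMID comes from the thread):

```sql
CREATE VIEW firm_totals AS
  SELECT FIRMID, SUM(renewal_fee) AS total_fee
  FROM renewals
  GROUP BY FIRMID;
```

The report then reads firm_totals like any ordinary table, with one total row per firm.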