On Wed, 2014-06-18 at 18:08 -0600, Rob Sargent wrote:
> On 06/18/2014 05:47 PM, Jason Long wrote:
>
>
> > I have a large table of access logs to an application.
> >
> > I want to find all rows whose startdate and enddate overlap those of
> > any other row.
I have a large table of access logs to an application.
I want to find all rows whose startdate and enddate overlap those of any
other row.
The query below seems to work, but does not finish unless I specify a
single id.
select distinct a1.id
from t_access a1,
t_access a2
where tstzr
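A minimal sketch of that overlap test, assuming startdate/enddate columns on
t_access (the snippet above is cut off at "tstzr", presumably tstzrange); the
&& operator tests range overlap:

select distinct a1.id
from t_access a1
join t_access a2
  on a1.id <> a2.id
 and tstzrange(a1.startdate, a1.enddate) && tstzrange(a2.startdate, a2.enddate);

-- Without an index this self-join is O(n^2), which would explain why it only
-- finishes for a single id. A GiST expression index lets the planner use an
-- index scan for the overlap test:
create index t_access_range_idx
  on t_access using gist (tstzrange(startdate, enddate));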
I would like for corresponding records in t_a to be deleted when I
delete a record from t_b. This deletes from t_b when I delete from t_a,
but not the other way around. I am unable to create a foreign key
constraint on t_a because this table holds records from several other
tables. I added a simp
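The message is cut off here, but when a foreign key is impossible a delete
trigger is the usual substitute. A minimal sketch, with the linking column
name assumed:

create or replace function t_b_delete_children() returns trigger as $$
begin
  delete from t_a where ref_id = old.id;  -- linking column assumed
  return old;
end;
$$ language plpgsql;

create trigger t_b_cascade_delete
  before delete on t_b
  for each row execute procedure t_b_delete_children();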
On Thu, 2013-06-20 at 16:22 -0700, David Johnston wrote:
> Jason Long-2 wrote
> >> Jason Long-2 wrote
> >
> >
> > There is a unique constraint on the real price table. I hadn't thought
> > of how I will enforce the constraint across two tables
Thank you. I will give it a try. I have never used WITH before.
Thank you for the tips.
On Thu, 2013-06-20 at 16:05 -0700, David Johnston wrote:
> Jason Long-2 wrote
> > Can someone suggest the easiest way to compare the results from two
> > queries to make sure the
On Thu, 2013-06-20 at 15:37 -0700, David Johnston wrote:
> Jason Long-2 wrote
> > David,
> >
> > Thank you very much for your response.
> > Below is a script that will reproduce the problem with comments
> > included.
> >
> >
> >
Can someone suggest the easiest way to compare the results from two
queries to make sure they are identical?
I am rewriting a large number of views and I want to make sure that
nothing changes in the results.
Something like
select compare_results('select * from v_old', 'select * from v_new')
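There is no built-in compare_results(), but a pair of EXCEPT queries covers
the same ground, assuming both views return the same column list:

select * from v_old except select * from v_new;  -- rows lost by the rewrite
select * from v_new except select * from v_old;  -- rows gained by the rewrite

-- Both queries returning zero rows means the result sets match. Note that
-- EXCEPT collapses duplicates; use EXCEPT ALL if duplicate counts matter.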
/*
where (pwoa.id is not null and pbt.id is not null) or
(pwoa.id is null and pbt.id is null)
*/
order by it.id;
/***/
On Thu, 2013-06-20 at 12:29 -0700, David Johnston wrote:
> Jason Long-2 wrote
> > I am having some proble
I am having some problems moving a column to another table and fixing
some views that rely on it. I want to move the area_id column from
t_offerprice_pipe to t_offerprice and then left join the results.
When I have only one table I get the correct results. area_id is
currently in the t_offerpric
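A sketch of the column move itself; the column type and the join column
between the two tables are assumed, since the message is cut off:

alter table t_offerprice add column area_id bigint;      -- type assumed

update t_offerprice op
   set area_id = opp.area_id
  from t_offerprice_pipe opp
 where opp.offerprice_id = op.id;                        -- join column assumed

alter table t_offerprice_pipe drop column area_id;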
In order to do some complex calculations I have joined several views.
Each view could join quite a few tables.
The user is allowed to filter the results with several multi-select
input fields and this is used in the query as where a.id in
(:listOfIds).
This works fine if the user does not filter
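One way to keep the unfiltered case cheap is to make each list filter
optional, for example with an array parameter instead of an expanded IN list
(a sketch; the view name is assumed):

select a.*
from v_search a                                        -- view name assumed
where (:listOfIds is null or a.id = any (:listOfIds));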
I upgraded to v9.1.2 a couple of days ago.
Some of my queries are taking noticeably longer. Some of the slower ones
would max out at about 45 seconds before. Now they are maxing out at
almost 2 minutes.
The only change I made to postgresql.conf was geqo_effort = 10 and this was
long before this
productions will fail
due to lack of specific parts.
On Mon, Nov 21, 2011 at 12:23 PM, Henry Drexler wrote:
> google 'weeks of supply'
>
>
> On Mon, Nov 21, 2011 at 1:18 PM, Jason Long <
> mailing.li...@octgsoftware.com> wrote:
>
>> I have a custom
I have a custom inventory system that runs on PG 9.1. I realize this is
not a Postgres-specific question, but I respect the skills of the members of
this list and was hoping for some general advice.
The system is not based on any ERP and was built from scratch.
My customer requested some supply f
On Thu, 2011-09-29 at 22:54 -0600, Ben Chobot wrote:
> On Sep 29, 2011, at 4:57 PM, Jason Long wrote:
>
>
>
> > I thought I had read somewhere that Postges could ignore a join if
> > it
> > was not necessary because there were no columns from the table or
> >
I started an application around 5 years ago using Hibernate and writing
my queries in HQL.
The primary search screen has many options to filter and joins many
tables and views.
As the application grew the SQL Hibernate is generating is out of hand
and needs optimization.
As with other parts of t
On Wed, 2011-09-28 at 08:52 +0200, Guillaume Lelarge wrote:
> On Wed, 2011-09-28 at 09:04 +0800, Craig Ringer wrote:
> > On 09/28/2011 04:51 AM, Jason Long wrote:
> > > I have an application with a couple hundred views and a couple hundred
> > > tables.
> > >
I have an application with a couple hundred views and a couple hundred
tables.
Is there some way I can find out which views have been accessed in the
last 6 months or so? Or some way to log this?
I know there are views and tables that are no longer in use by my
application and I am looking for
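PostgreSQL's statistics views cover tables (pg_stat_user_tables) but not
views, so statement logging is the usual workaround. A sketch, database name
assumed; the setting applies to new connections:

alter database mydb set log_statement = 'all';

-- then, after a representative period, search the server log for each
-- candidate view name, e.g.:  grep -c v_some_view postgresql-*.log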
On Fri, 2011-04-08 at 14:45 -0400, Tom Lane wrote:
> Jason Long writes:
> > I am using 9.0.3 and the only setting I have changed is
> > geqo_effort = 10
>
> > One of the joins is a view join.
>
> Ah. The explain shows there are actually nine base tables in th
The main search screen of my application has pagination.
I am basically running 3 queries with the same where clause.
1. Totals for the entire results (not just the number of rows on the
   first page): <300 ms
2. Subset of the total records for one page: 1-2 sec
3. Count of the tot
rds for that page in an optimized fashion? I have no problem using
a window query or a Postgres-specific feature as my app only runs on
Postgres.
--
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.octgsoftware.com
HJBug Founder
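Queries 2 and 3 above can usually be merged: count(*) over () is evaluated
after the WHERE filter but before LIMIT, so one pass returns both the page
and the total. A sketch with assumed table and filter:

select i.*,
       count(*) over () as total_matches  -- total rows matching the filter
from   t_inventory i                      -- table and filter assumed
where  i.status = 'OPEN'
order  by i.id
limit  25 offset 0;                       -- one page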
I recently upgraded to JBoss AS 6.0.0.Final which includes a newer
version of Hibernate.
Previously the Postgres dialect was using a comma, but now it is using
cross join.
In order to do the migration I had to override the cross join operator
to a comma in Hibernate so it would generate the same
On Mon, 2010-11-08 at 16:23 -0700, Scott Marlowe wrote:
> On Mon, Nov 8, 2010 at 3:42 PM, Jason Long wrote:
> > On Mon, 2010-11-08 at 14:58 -0700, Scott Marlowe wrote:
> >> On Mon, Nov 8, 2010 at 11:50 AM, Jason Long wrote:
> >> > I currently have Postgres 9.0
On Mon, 2010-11-08 at 14:58 -0700, Scott Marlowe wrote:
> On Mon, Nov 8, 2010 at 11:50 AM, Jason Long wrote:
> > I currently have Postgres 9.0 installed after an upgrade. My database is
> > relatively small, but complex. The dump is about 90MB.
> >
> > Every night when
On Mon, 2010-11-08 at 13:28 -0800, John R Pierce wrote:
> On 11/08/10 10:50 AM, Jason Long wrote:
> > I currently have Postgres 9.0 installed after an upgrade. My database is
> > relatively small, but complex. The dump is about 90MB.
> >
> > Every night when there is no
the
default.
--
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.octgsoftware.com
HJBug Founder and President
http://www.hjbug.com
I currently have Postgres 9.0 installed after an upgrade. My database is
relatively small, but complex. The dump is about 90MB.
Every night when there is no activity I do a full vacuum, a reindex, and
then dump a nightly backup.
Is this optimal with regards to performance? autovacuum is set to t
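With autovacuum on, a nightly VACUUM FULL is generally unnecessary (and on
pre-9.0 releases it tends to bloat indexes); a lighter routine is usually
enough. A sketch:

vacuum analyze;  -- marks dead space reusable and refreshes planner statistics
-- reindex database mydb;  -- rarely needed on a schedule; database name assumed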
I use Centos for production and Fedora for development and I am very
happy with both. Especially Centos as I have never had an update break
anything.
On Fri, 2010-11-05 at 09:50 -0400, David Siebert wrote:
> I would say that if you pick any of the big four you will be fine.
> CentOS
> Ubuntu Serv
I have several databases running under one Postgres 9.0 installation in
production. Is there a way to specify which ones will be replicated
on another server, or must all of them be included?
Is there any way to have Postgres preserve the case of tables and column
names in queries without having to use quotes for columns with mixed case?
Gregory Stark wrote:
"Daniel Verite" writes:
Gregory Stark wrote:
Is it the hierarchical query ability you're looking for or pivot?
The former we are actually getting in 8.4.
AFAIK even in systems with pivot you still have to
declare a fixed list of columns in advance anyw
Daniel Verite wrote:
Gregory Stark wrote:
Is it the hierarchical query ability you're looking for or pivot?
The former we are actually getting in 8.4.
AFAIK even in systems with pivot you still have to
declare a fixed list of columns in advance anyways.
Do you see a system where it works di
Gregory Stark wrote:
Jason Long writes:
Richard Huxton wrote:
1. Case-folding on column-names.
Quoting is a PITA sometimes when you're transferring from a different
DBMS. Be nice to have a "true_case_insensitive=on" flag.
I was just wishing for thi
Richard Huxton wrote:
Gregory Stark wrote:
I'm putting together a talk on "PostgreSQL Pet Peeves" for discussion at
FOSDEM 2009 this year.
Hmm - three "niggles" things leap to mind.
1. Case-folding on column-names.
Quoting is a PITA sometimes when you're transferring from a different
Fujii Masao wrote:
Hi,
On Wed, Jan 28, 2009 at 4:28 AM, Gabi Julien wrote:
Yes, the logs are shipped every minute but the recovery is 3 or 4 times
longer.
Are you disabling full_page_writes? It may slow down recovery several times.
Thanks I will take a look at it. Also, I came a
Adrian Klaver wrote:
On Thursday 22 January 2009 8:16:46 am Alvaro Herrera wrote:
Grzegorz Jaśkiewicz wrote:
test2=# insert into dupa(a) select 'current_timestamp' from
generate_series(1,100);
ERROR: date/time value "current" is no longer supported
LINE 1: insert into dupa(a) select
David Wilson wrote:
On Fri, Jan 16, 2009 at 3:27 PM, Jason Long
wrote:
I just tried it by sending text only instead of text and html. We will see
if it goes through this time.
Other than that do you see anything weird about my email?
Still nothing. Do you have webspace you could
Scott Marlowe wrote:
On Thu, Jan 15, 2009 at 11:04 PM, Jeff Davis wrote:
On Thu, 2009-01-15 at 19:37 -0600, Jason Long wrote:
I have not looked into the detail of the explain, and I do see visually
that very different plans are being chosen.
It would help to share these
?
On Fri, Jan 16, 2009 at 1:19 PM, Jason Long
wrote:
Scott Marlowe wrote:
On Thu, Jan 15, 2009 at 11:04 PM, Jeff Davis wrote:
On Thu, 2009-01-15 at 19:37 -0600, Jason Long wrote:
I have not looked into the detail of the explain, and I do see visually
that very different plans are being
Jeff Davis wrote:
On Fri, 2009-01-16 at 08:43 -0600, Jason Long wrote:
The numbers in the table names are due to hibernate generating the
query.
Well, that's what auto-generated schemas and queries do, I guess.
The schema is not auto generated. It evolved as I creat
I am having a serious problem with my application and I hope someone can
help me out.
This could not happen at a worse time, as a consulting firm is at my
client's to recommend a new financial system and the inventory
system (which I developed) keeps locking up.
I have a dynamically built query t
Steven Lembark wrote:
I would like to use PSQLFS(http://www.edlsystems.com/psqlfs/)
to store 100 GB of images in PostgreSQL.
Once they are in there I can deal with them. My main purpose is to use
rsync to get the files into the database.
Is there a better way to load 20,000 plus files reliably
Scott Marlowe wrote:
On Thu, Jan 15, 2009 at 1:28 PM, Jason Long
wrote:
A faster server.
Well the server is plenty fast. It has 2 quad core 1600MHz FSB 3.0 GHz Xeon
5472 CPUs and a very light workload.
A few things.
That doesn't make a fast server. The disk i/o subsystem ma
Steve Atkins wrote:
On Jan 15, 2009, at 12:32 PM, Jason Long wrote:
I don't mean to be a pain either and I mean no disrespect to anyone
on this list in the following comments.
However, this is about the most anal list ever.
I see so many emails on here about people complaining regardin
nt to point out my statement from the code. I
also did not realize it appeared that large to you.
My res is 2560X1600 so it didn't look that large.
I apologize.
*Just out of curiosity, why are you so opposed to HTML in an email?*
Raymond O'Donnell wrote:
On 15/01/2009 20:06, Jason
Alan Hodgson wrote:
On Thursday 15 January 2009, Jason Long
wrote:
*I am attempting to vacuum and reindex my database. It keeps timing
out. See commands and last part of output below. The vacuum or reindex
only takes a short time to complete normally because the database is
less than 50
*I am attempting to vacuum and reindex my database. It keeps timing
out. See commands and last part of output below. The vacuum or reindex
only takes a short time to complete normally because the database is
less than 50 MB. I have the query timeout set to 2 minutes, but I do
not know if th
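statement_timeout cancels VACUUM and REINDEX like any other statement, so a
2-minute timeout would abort them exactly as described. Disabling it for the
maintenance session only is the usual fix:

set statement_timeout = 0;  -- this session only
vacuum analyze;
reindex database mydb;      -- database name assumed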
Reid Thompson wrote:
On Tue, 2009-01-13 at 18:22 -0600, Jason Long wrote:
Never used Python or Perl. I use primarily Java. I was thinking of
doing something like
INSERT INTO pictures (filename,data) VALUES
('filename','/path/to/my/image/img0009.jpg');
But, this synta
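A plain VALUES list would need the file bytes escaped into the SQL text. On
newer servers (9.1+), pg_read_binary_file() can read the file server-side
instead, though it is superuser-only and on older releases the path must lie
under the data directory; a sketch:

insert into pictures (filename, data)
values ('img0009.jpg',
        pg_read_binary_file('images/img0009.jpg'));  -- path relative to the
                                                     -- data directory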
Sam Mason wrote:
On Wed, Jan 14, 2009 at 12:56:42AM +, Sam Mason wrote:
If you Java you'd probably be better off using it
Hum, it's getting late. That should be "If you *know* Java"! Bed time
for me I think!
Sam
Thanks for the advice. I will probably go with Java. In th
Sam Mason wrote:
On Tue, Jan 13, 2009 at 06:22:34PM -0600, Jason Long wrote:
Sam Mason wrote:
You'd need to generate the SQL somehow; if you know python it's probably
a pretty easy 20 or 30 lines of code to get this working.
*Never used Python or Perl. I use prim
Sam Mason wrote:
On Tue, Jan 13, 2009 at 03:28:18PM -0600, Jason Long wrote:
Steve Atkins wrote:
On Jan 13, 2009, at 10:34 AM, Jason Long wrote:
I would like to use PSQLFS(http://www.edlsystems.com/psqlfs/)
to store 100 GB of images in PostgreSQL.
Is there a better way to load
Steve Atkins wrote:
On Jan 13, 2009, at 10:34 AM, Jason Long wrote:
I would like to use PSQLFS(http://www.edlsystems.com/psqlfs/)
to store 100 GB of images in PostgreSQL.
Once they are in there I can deal with them. My main purpose is to
use rsync to get the files into the database.
Is
, my photos.
Alan Hodgson wrote:
On Tuesday 13 January 2009, Jason Long
wrote:
I would like to use PSQLFS(http://www.edlsystems.com/psqlfs/)
to store 100 GB of images in PostgreSQL.
Once they are in there I can deal with them. My main purpose is to use
rsync to get the files into the
I would like to use PSQLFS(http://www.edlsystems.com/psqlfs/)
to store 100 GB of images in PostgreSQL.
Once they are in there I can deal with them. My main purpose is to use
rsync to get the files into the database.
Is there a better way to load 20,000 plus files reliably into Postgres?
I have an inventory system based on PostgreSQL 8.3.5, JBoss, and Hibernate.
I have a query builder that lets users filter data in a fairly complex way.
For some reason the search gets out of control and consumes all CPU.
I set my statement timeout to 2 minutes and this keeps the system from
Richard Broersma wrote:
On Wed, Dec 10, 2008 at 3:58 PM, Jason Long
<[EMAIL PROTECTED]> wrote:
I need to add some complex constraints at the DB.
These will involve several tables.
What is the best approach for this?
Well ANSI-SQL provides the CREATE ASSERTION for this p
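PostgreSQL does not implement CREATE ASSERTION, so a deferred constraint
trigger is the usual substitute for a multi-table rule. A sketch with an
assumed invariant and table:

create or replace function check_inventory_balance() returns trigger as $$
begin
  if exists (select 1 from t_inventory
             group by part_id
             having sum(qty) < 0) then  -- invariant, table, columns assumed
    raise exception 'inventory balance violated';
  end if;
  return null;
end;
$$ language plpgsql;

create constraint trigger inventory_balance
  after insert or update or delete on t_inventory
  deferrable initially deferred
  for each row execute procedure check_inventory_balance();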
originally shipped.
My code has done this occasionally and users can override the inventory.
Basically I would rather the application throw an error than let this
number become unbalanced.
--
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineeri
Robert Treat wrote:
On Tuesday 02 December 2008 07:11:02 A. Kretschmer wrote:
On Tue, 02.12.2008 at 16:45:16 +0500, IPS wrote the following:
I have certain jobs to be executed automatically at a given interval of
time in the PostgreSQL database. Is there any utility/feature availabl
Scott Marlowe wrote:
On Mon, Dec 1, 2008 at 4:10 PM, Jason Long
<[EMAIL PROTECTED]> wrote:
Greg Smith wrote:
I wonder if I'm the only one who just saved a copy of that post for
reference in case it gets forcibly removed...
Recently I was thinking about whether I had enough
Greg Smith wrote:
I wonder if I'm the only one who just saved a copy of that post for
reference in case it gets forcibly removed...
Recently I was thinking about whether I had enough material to warrant
a 2008 update to "Why PostgreSQL instead of MySQL"; who would have
guessed that Monty woul
Tom Lane wrote:
Jason Long <[EMAIL PROTECTED]> writes:
I got this error
/usr/sbin/sendmail: Permission denied
So I guess I need to allow the use of sendmail.
How is postgres running the command different from my doing it as the
postgres user or cron running as the postgre
Tom Lane wrote:
Jason Long <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
If that is what's happening, you'll find "avc denied" messages in the
system log that correlate to the archive failures.
*I did not see anything like this in my logs.*
Tom Lane wrote:
I wrote:
That's just bizarre. The permissions on the script itself seem to be
fine, so the only theory that comes to mind is the server doesn't have
search (x) permission on one of the containing directory levels ...
Oh, wait, I bet I've got it: you're using a SELinux-
e any advice on what permission I need to set in order
for this command to run.
--
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.octgsoftware.com
HJBug Founder and President
http://www.hjbug.com
Greg Smith wrote:
On Mon, 3 Nov 2008, Jason Long wrote:
For some reason Postgres is pegging my CPU and I can barely log on to
reboot the machine.
Take a look at pg_stat_activity when this happens to see what's going
on. Also, try running "top -c" to see what is going on (t
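A sketch of that first look; procpid and current_query are the column
spellings of this era (renamed pid/query in 9.2):

select procpid, usename, query_start, current_query
from pg_stat_activity
where current_query <> '<IDLE>'
order by query_start;

-- a runaway backend can then be cancelled without rebooting:
select pg_cancel_backend(procpid)
from pg_stat_activity
where current_query <> '<IDLE>'
  and query_start < now() - interval '2 minutes';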
Kris Jurka wrote:
On Mon, 3 Nov 2008, Jason Long wrote:
*Would someone please comment on the status of setQueryTimeout in the
JDBC driver? Is there any workaround if this is still not implemented?*
setQueryTimeout is not implemented, the workaround is to manually
issue SET
In order to keep my application from freezing up when a query pegs my
CPU I set statement_timeout=12, but I read in the manual
"Setting statement_timeout in postgresql.conf is not recommended because
it affects all sessions."
I am using JDBC exclusively for the application and I read here
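Instead of the global postgresql.conf setting, the timeout can be scoped to
the application's role, or issued per connection right after connect (role
name assumed):

alter role app_user set statement_timeout = '2min';
-- or, per JDBC connection:
set statement_timeout = '2min';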
Scott Marlowe wrote:
On Mon, Nov 3, 2008 at 12:25 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Mon, Nov 3, 2008 at 11:30 AM, Jason Long
<[EMAIL PROTECTED]> wrote:
I am running PostgreSQL 8.3.4 on Centos 5.2 with a single Xeon 5472, 1600
MHz, 12 MB cache, 3.0 GHz quad co
. While
there are relatively few live users, the data is extremely important and
the users will not wait for me to see what is wrong. They demand
immediate resolution and the best I can do is reboot.
--
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical
Greg Smith wrote:
On Thu, 30 Oct 2008, Tom Lane wrote:
The real reason not to put that functionality into core (or even
contrib) is that it's a stopgap kluge. What the people who want this
functionality *really* want is continuous (streaming) log-shipping, not
WAL-segment-at-a-time shipping.
Kyle Cordes wrote:
Greg Smith wrote:
there's no chance it can accidentally look like a valid segment. But
when an existing segment is recycled, it gets a new header and that's
it--the rest of the 16MB is still left behind from whatever was in
that segment before. That means that even if you
Greg Smith wrote:
On Thu, 30 Oct 2008, Joshua D. Drake wrote:
This reminds me yet again that pg_clearxlogtail should probably get
added
to the next commitfest for inclusion into 8.4; it's really essential
for a
WAN-based PITR setup and it would be nice to include it with the
distribution.
W
?
3. If I reduce the size how will this work if I try to save a document
that is larger than the WAL size?
Any other suggestions would be most welcome.
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.octgsoftware.com
HJBug
Richard Broersma wrote:
On Fri, Oct 17, 2008 at 12:32 AM, searchelite <[EMAIL PROTECTED]> wrote:
I have an SQL script consisting of insert statement data. I want to insert
every row of data every one minute. How can I do that using a batch file in
Windows?
Take a look at your windows sched
I am not fond of this approach either. I never find myself replying
directly to the poster.
I actually greatly prefer forums which email me a copy of every post
with a nice link to the original thread. 95% of the time I do not even
need to use the link. The latest posting is enough.
This
Robert Treat wrote:
On Wednesday 24 September 2008 12:34:17 Jason Long wrote:
Richard Huxton wrote:
Jason Long wrote:
I need to set up master vs slave replication.
My use case is quite simple. I need to back up a small but fairly
complex(30 MB data, 175 tables) DB remotely
Richard Huxton wrote:
Jason Long wrote:
I need to set up master vs slave replication.
My use case is quite simple. I need to back up a small but fairly
complex(30 MB data, 175 tables) DB remotely over T1 and be able to
switch to that if the main server fails. The switch can even be a
I need to set up master vs slave replication.
My use case is quite simple. I need to back up a small but fairly
complex(30 MB data, 175 tables) DB remotely over T1 and be able to
switch to that if the main server fails. The switch can even be a
script run manually.
Can someone either comme
that can get this done for me quickly and
economically.
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.octgsoftware.com
HJBug Founder and President
http://www.hjbug.com
Thanks for the advice. I will keep playing with it. Can someone here
comment on EnterpriseDB or another company's paid support? I may
consider this to quickly improve my performance.
Scott Marlowe wrote:
Have you run analyze on the tables? bumped up default stats and re-run analyze?
Best way
I have a query that takes 2 sec if I run it from a freshly restored
dump. If I run a full vacuum on the database it then takes 30 seconds.
Would someone please comment as to why I would see a 15x slow down by
only vacuuming the DB?
I am using 8.3.1
I have a query that takes 2.5 sec if I run it from a freshly restored
dump. If I run a full vacuum on the database it then takes 30 seconds.
Would someone please comment as to why I would see over a 10x slow down
by only vacuuming the DB?
I am using 8.3.1
->  Seq Scan on t_state popipe1_1_  (cost=0.00..330.83 rows=5727 width=15)
    (actual time=0.087..8.880 rows=5732 loops=3174)
      Filter: (NOT spec)
maybe someone can help me out.
Joshua D. Drake wrote:
On Wed, 2008-06-04 at 17:02 -0500, Jason Long wrote:
I have a query that takes 2 sec if I run it from a freshly restored
dump. If I run a full vacuum on the database it then takes 30 seconds.
If you run it a second
I have a query that takes 2 sec if I run it from a freshly restored
dump. If I run a full vacuum on the database it then takes 30 seconds.
Would someone please comment as to why I would see a 15x slow down by
only vacuuming the DB? Reindexing does not help, and a full vacuum was
run just prior
I have a client that wants a disaster recovery plan put into place. What is
the easiest way to do a hands-free install of PostgreSQL on a Windows box?
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.supernovasoftware.com
Thanks, that is basically what I was looking for. I will investigate further.
I appreciate your response.
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.supernovasoftware.com
HJBUG Founder and President
http://www.hjbug.com
I was hoping for something a bit more automatic with less maintenance from
me. Thank you for your reply.
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.supernovasoftware.com
HJBUG Founder and President
http://www.hjbug.com
improving performance in this situation?
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.supernovasoftware.com
HJBUG Founder and President
http://www.hjbug.com
Process p = runtime.exec(cmd);
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.supernovasoftware.com
_keys'')::int,1)
as g(s)';
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.supernovasoftware.com
?
Any assistance would be greatly appreciated.
Thank you for your time,
Jason Long
CEO and Chief Software Engineer
BS Physics, MS Chemical Engineering
http://www.supernovasoftware.com