___
> > mtt-users mailing list
> > mtt-users@lists.open-mpi.org
> > https://lists.open-mpi.org/mailman/listinfo/mtt-users
--
Josh Hursey
IBM Spectrum MPI Developer
> I'll attend the OMPI developer's meeting next week.
> > > I hope we can talk about it.
> > >
> > > Takahiro Kawashima,
> > > MPI development team,
> > > Fujitsu
--
Josh Hursey
IBM Spectrum MPI Developer
move requires -no- changes to any of your MTT client setups.
Let me know if you have any issues.
-- Josh
On Fri, Oct 21, 2016 at 9:53 PM, Josh Hursey wrote:
> I have taken down the MTT Reporter at mtt.open-mpi.org while we finish up
> the migration. I'll send out another email when everything is up and
> running again.
I have taken down the MTT Reporter at mtt.open-mpi.org while we finish up
the migration. I'll send out another email when everything is up and
running again.
On Fri, Oct 21, 2016 at 10:17 AM, Josh Hursey wrote:
> Reminder that the MTT will go offline starting at *Noon US Eastern* (11 am US Central)
On Wed, Oct 19, 2016 at 10:14 AM, Josh Hursey wrote:
> Based on current estimates we need to extend the window of downtime for
> MTT to 24 hours.
>
> *Start time*: *Fri., Oct. 21, 2016 at Noon US Eastern* (11 am US Central)
> *End time*: *Sat., Oct. 22, 2016 at Noon US Eastern* (estimated)
Let me know if you have any questions or concerns.
On Tue, Oct 18, 2016 at 10:59 AM, Josh Hursey wrote:
> We are moving this downtime to *Friday, Oct. 21 from 2-5 pm US Eastern*.
>
> We hit a snag with the AWS configuration that we are working through.
>
> On Sun, Oct 16, 2016 at 9:53 AM, Josh Hursey
We are moving this downtime to *Friday, Oct. 21 from 2-5 pm US Eastern*.
We hit a snag with the AWS configuration that we are working through.
On Sun, Oct 16, 2016 at 9:53 AM, Josh Hursey wrote:
> I will announce this on the Open MPI developer's teleconf on Tuesday,
> befo
le to access MTT using
the mtt.open-mpi.org URL. No changes are needed in your MTT client setup,
and all permalinks are expected to still work after the move.
Let me know if you have any questions or concerns about the move.
--
Josh Hursey
IBM Spectrum MPI Developer
--
Josh Hursey
IBM Spectrum MPI Developer
___
mtt-users mailing list
mtt-users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/mtt-users
>
> Here was my .ini stage:
>
> [Reporter:IUdatabase]
> plugin = IUDatabase
>
> realm = OMPI
> username = intel
> pwfile = /home/common/mttpwd.txt
> platform = bend-rsh
> hostname = rhc00[1-2]
> url = https://mtt.open-mpi.o
Related to this conversation, I am proposing that we as a community try to
cultivate some public tests. See the following link for the start of the
discussion.
https://www.open-mpi.org/community/lists/devel/2016/05/18997.php
We won't be able to open up the ompi-tests repo to the public. But we mi
I think this is fine. If we do start to organize ourselves for a formal
release then we might want to move to pull requests to keep the branch
stable for a bit, but for now this is ok with me.
The Python client looks like it will be a nice addition. Hopefully, I will
have the REST submission inter
I think that would be good. I won't have any cycles to help until after the
first of the year. We started working towards a release way back when, but
I think we got stuck with the license to package up the graphing library
for the MTT Reporter. We could just remove that feature from the release
si
experience any problems with the new server.
-- Josh
On Fri, Nov 2, 2012 at 9:26 AM, Josh Hursey wrote:
> Reminder that we will be shutting down the MTT submission and reporter
> services this weekend to migrate it to another machine. The MTT
> services will go offline at COB today, and b
Reminder that we will be shutting down the MTT submission and reporter
services this weekend to migrate it to another machine. The MTT
services will go offline at COB today, and be brought back by Monday
morning.
On Wed, Oct 31, 2012 at 7:54 AM, Jeff Squyres wrote:
> *** IF YOU RUN MTT, YOU NEED
This is probably me. I haven't had a chance to do anything about it
yet. Hopefully tomorrow.
I'm running the release branch (I believe), does this option exist for
the release branch yet?
-- Josh
On Oct 30, 2008, at 11:36 AM, Ethan Mallove wrote:
On Wed, Oct/29/2008 09:15:37AM, Ethan Mallove wrote:
review this patch ?
Regards,
Pasha
Josh Hursey wrote:
Sorry for the delay on this. I probably will not have a chance to
look at it until later this week or early next. Thank you for the
work on the patch.
Cheers,
Josh
On May 12, 2008, at 8:08 AM, Pavel Shamis (Pasha) wrote:
Hi Josh,
I ported it to
the database.inc. Please review.
Thanks,
Pasha
Josh Hursey wrote:
Pasha,
I'm looking at the patch a bit closer and even though at a high
level the do_pg_connect, do_pg_query, simple_select, and select
functions do the same thing the versions in submit/index.php have
Pasha,
I'm looking at the patch a bit closer and even though at a high level
the do_pg_connect, do_pg_query, simple_select, and select functions do
the same thing the versions in submit/index.php have some additional
error handling mechanisms that the ones in database.inc do not have.
Spe
Pasha,
Looking at the patch I'm a little bit concerned. The
"get_table_fields()" is, as you mentioned, no longer used, so it should be
removed. However, the other functions are critical to the submission
script particularly 'do_pg_connect' which opens the connection to the
backend database.
Pasha,
All of the scripts can be run whenever. They should not be saving
state between runs, so there should not be any bad effects on the
database by starting them up late in the game.
The 'periodic-maintenance.pl' script is a PostgreSQL cleaning/
vacuuming script that helps the database
Has anyone tried to use the BLACS tests in ompi-tests with MTT?
IU is considering adding it to our testing matrix and wanted to hear
of any experiences.
Cheers,
Josh
I don't remember a "past 24 hour" summary taking "24
seconds". Are we seeing a slow down due to an accumulation
of results? I thought the week-long table partitions would
prevent this type of effect?
-Ethan
On Wed, Jan/30/2008 11:00:46AM, Josh Hursey wrote:
This maintenance is complete. The reporter should be operating as
normal.
There are a few other maintenance items, but I am pushing them to the
weekend since it will result in a bit of a slowdown again.
Thanks for your patience.
Cheers,
Josh
On Jan 29, 2008, at 9:47 AM, Josh Hursey wrote
, Josh Hursey wrote:
For the next 24 - 48 hours this is to be expected. Sorry :(
I started some maintenance work last night, and it is taking a bit
longer than I expected (due to integrity constraint checking most
likely). The maintenance scripts are pushing fairly hard on the
database, so I would
For the next 24 - 48 hours this is to be expected. Sorry :(
I started some maintenance work last night, and it is taking a bit
longer than I expected (due to integrity constraint checking most
likely). The maintenance scripts are pushing fairly hard on the
database, so I would expect some s
http://svn.open-mpi.org/trac/mtt/ticket/305
-- Josh
On Sep 6, 2007, at 10:04 AM, Josh Hursey wrote:
Weird, this looks like a mirror issue again. Below is some more debug
output from MTT on BigRed:
<>
*** Reporter initializing
Evaluating: MTTDa
/submit/index.php
Any thoughts on why this might be happening? It looks like the mirror
check is messed up again.
-- Josh
On Sep 5, 2007, at 11:31 PM, Josh Hursey wrote:
yeah I'll try to take a look at it tomorrow. I suspect that something
is going wrong with the relay, but I can'
yeah I'll try to take a look at it tomorrow. I suspect that something
is going wrong with the relay, but I can't really think of what it
might be at the moment.
-- Josh
On Sep 5, 2007, at 9:11 PM, Jeff Squyres wrote:
Josh / Ethan --
Not getting a serial means that the client is not gettin
Short Version:
--
I just finished the fix, and the submit script is back up and running.
This was a bug that arose in testing, but somehow did not get
propagated to the production database.
Long Version:
--
The new database uses partition tables to archive test results
ed to get rid of old cookies?
Tim
On Monday 27 August 2007 02:30:17 pm Jeff Squyres wrote:
Is this an effect of "preferences" cookies not propagating properly?
On Aug 27, 2007, at 2:26 PM, Josh Hursey wrote:
Weird. I just tried this and it worked fine for me. Showing 25 skampi
runs for IU all trials.
Weird. I just tried this and it worked fine for me. Showing 25 skampi
runs for IU all trials. Can you try it again?
-- Josh
On Aug 27, 2007, at 2:11 PM, Tim Prins wrote:
All,
First, I have to say the new faster reporter is very nice.
However, I am running into some difficulty with trial ru
ll be replaced with a much mo'better version -- [much]
faster than it was before. Details below.
Begin forwarded message:
From: Josh Hursey
Date: August 24, 2007 1:37:18 PM EDT
To: General user list for the MPI Testing Tool
Subject: [MTT users] MTT Database and Reporter Upgrade **Actio
Short Version:
--
The MTT development group is rolling out a newly optimized web frontend
and backend database. As a result, we will be taking down the MTT site
at IU Monday, August 27 from 8 am to Noon US eastern time.
During this time you will not be able to submit data to the MTT
That's awesome. Good work :)
-- Josh
On Mar 1, 2007, at 11:59 AM, Ethan Mallove wrote:
Folks,
If some of you hadn't already noticed, reports (see
http://www.open-mpi.org/mtt/) on Test Runs have been taking
upwards of 5-7 minutes to load as of late. This was due
in part to some database de
I'm a bright... http://www.the-brights.net/
___
mtt-users mailing list
mtt-us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/mtt-users
----
Josh Hursey
jjhur...@open-mpi.org
http://www.open-mpi.org/
--
-Ethan
Josh Hursey
jjhur...@open-mpi.org
http://www.open-mpi.org/
Is there a 'post_build' flag in the [MPI Install] section? I'd like
to be able to execute a script or 'make distclean' after a version of
a branch has been built and installed.
The problem is that we are getting close to our quota on some of the
machines that we are using (every night we ge
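A hedged sketch of what such a hook could look like if it existed. The `post_build` key here is exactly the feature being asked about, not a confirmed MTT client parameter, and the section name is only illustrative:

```ini
# Hypothetical sketch: run a cleanup command after each install.
# "post_build" is the requested feature, NOT a verified MTT option.
[MPI Install: ompi-nightly]
post_build = make distclean
```

If no such hook exists, a wrapper script around the mtt client invocation could run the cleanup between builds instead.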
On Nov 6, 2006, at 7:45 AM, Jeff Squyres wrote:
On Nov 3, 2006, at 2:25 PM, Josh Hursey wrote:
I have an INI File that looks something like what is enclose at the
end of this message.
So I have multiple MPI Details sections. It seems like only the first
one is running. Do I have to list them out somewhere?
I have an INI File that looks something like what is enclose at the
end of this message.
So I have multiple MPI Details sections. It seems like only the first
one is running. Do I have to list them out somewhere?
As a side question:
Instead of using multiple MPI Details sections, if I use a
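For the first question, a minimal sketch of how two MPI Details sections might be named, following the `[Section: name]` pattern used elsewhere in these emails (e.g. `[Reporter: IU database]`). The `exec` key and its values are assumptions, not verified MTT syntax:

```ini
# Hypothetical sketch: two separately named MPI Details sections.
# Key names and values are illustrative assumptions only.
[MPI Details: tcp]
exec = mpirun -np 2 --mca btl tcp,self

[MPI Details: shared-memory]
exec = mpirun -np 2 --mca btl sm,self
```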
IU/Thor Short Story:
-
The IU/thor tests are borked because of the scheduler. Ignore these
results for now.
IU/Thor Longer Story:
-
SLURM is set up to kill any job that's 'idle' for more than N min,
where N is kinda small. We are compiling, but SLURM is
Cisco Systems
Josh Hursey
jjhur...@open-mpi.org
http://www.open-mpi.org/
On Oct 27, 2006, at 7:39 AM, Jeff Squyres wrote:
On Oct 25, 2006, at 10:37 AM, Josh Hursey wrote:
The discussion started with the bug characteristics of v1.2 versus
the trunk.
Gotcha.
It seemed from the call that IU was the only institution that can
assess this via MTT as no one else spoke
On Oct 25, 2006, at 1:30 PM, Ethan Mallove wrote:
On Wed, Oct/25/2006 10:37:31AM, Josh Hursey wrote:
The discussion started with the bug characteristics of v1.2 versus
the trunk.
It seemed from the call that IU was the only institution that can
assess this via MTT as no one else spoke up
The discussion started with the bug characteristics of v1.2 versus
the trunk.
It seemed from the call that IU was the only institution that can
assess this via MTT as no one else spoke up. Since people were
interested in seeing things that were breaking I suggested that I
start forwarding t
ook at the logs there is no "success=*" entry for any of the
test builds for any of the branches.
Any leads on what is causing this?
On Oct 2, 2006, at 8:04 PM, Jeff Squyres wrote:
K. Am still investigating.
On 10/2/06 7:57 PM, "Josh Hursey" wrote:
Yea I believe so.
k on
the "[I]" to see the stdout/stderr).
--
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems
Josh Hursey
jjhur...@open-mpi.org
http://www.open-mpi.org/
I'll have to wait until the machine is back unfortunately. They have
restricted logins until they are done with the hw testing. :(
On Oct 2, 2006, at 12:50 PM, Jeff Squyres wrote:
No worries -- could you send the --verbose output?
On 10/2/06 12:49 PM, "Josh Hursey" wrote:
ger question becomes "why didn't it succeed?".
Do you have --verbose or --debug output for the build, perchance?
If it
failed to build, there should be some output about it.
On 10/1/06 7:32 PM, "Josh Hursey" wrote:
So I dug into this about as much as I can, and foun
thoughts on why this might happen,
Josh
On Sep 29, 2006, at 3:40 PM, Josh Hursey wrote:
Has anyone been using MTT to test the v1.1 nightly?
I have been trying to run the [trivial,ibm] tests against
[trunk,v1.2,v1.1]. MTT will build all the sources and all the tests
with all the sources. It wi
against the trunk and v1.2.
I looked at the logs produced and there didn't seem to be any errors
with the ibm+v1.1 test build. Would there be any other reason this
would happen?
Josh Hursey
jjhur...@open-mpi.org
http://www.open-mpi.org/
On Sep 28, 2006, at 5:21 PM, Ethan Mallove wrote:
On Thu, Sep/28/2006 03:46:38PM, Josh Hursey wrote:
So I have a section that looks like:
[Reporter: IU database]
module = MTTDatabase
I prefer this:
Will that try to contact the DB via the url and dump the file? Is
there a way to keep
iles contain a
ready-to-be-eval'd Perl variable, which poster.pl can use.)
-Ethan
On Thu, Sep/28/2006 02:50:43PM, Josh Hursey wrote:
Finally getting a chance to try this out.
I am trying to use the script as prescribed on the webpage and am
getting some errors apparently from the 'eval
have gone to the database). E.g.,
$ ./poster.pl -f 'mttdatabase_debug*.txt'
(Where mttdatabase_debug would be what you supply to the
mttdatabase_debug_filename ini param in the "IU Database"
section.)
I think this would fill in your missing * step below.
Does that soun
Ethan Mallove wrote:
On Tue, Sep/26/2006 02:01:41PM, Josh Hursey wrote:
I'm setting up MTT on BigRed at IU, and due to some visibility
requirements of the compute nodes I segment the MTT operations.
Currently I have a perl script that does all the svn and wget
interactions from the login node, then com
I'm setting up MTT on BigRed at IU, and due to some visibility
requirements of the compute nodes I segment the MTT operations.
Currently I have a perl script that does all the svn and wget
interactions from the login node, then compiles and runs on the
compute nodes. This all seems to work
Is it possible to specify a timeout for a specific test in a test
suite in addition to the test suite as a whole?
For example some of the Intel tests take about 6 minutes to complete
normally, but the rest of the tests usually finish in a minute or so.
I'd like to keep the 1 min tests from
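A hedged sketch of the kind of per-suite timeout override being asked for. Neither the section naming nor the `timeout` key is verified against the MTT client, so treat every name here as an assumption:

```ini
# Hypothetical: a short default timeout, with a longer one for the
# handful of slow Intel tests. All key names are assumptions.
[Test run: intel-fast]
timeout = 1:00

[Test run: intel-slow]
timeout = 6:00
```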
Things are working well at IU. We have nightly and weekly runs going
smoothly on our Odin cluster.
Have not started using it on BigRed (our LoadLeveler scheduled
environment), but hope to do that soonish.
Cheers,
Josh
On Sep 19, 2006, at 10:18 AM, Ethan Mallove wrote:
Folks,
Just checking
I left the old "after_each_exec" tag in there when I added the new
version. The duplication of the key seemed to cause bad things to
happen.
On Sep 14, 2006, at 6:28 PM, Ethan Mallove wrote:
On Thu, Sep/14/2006 05:49:11PM, Josh Hursey wrote:
After iterating a bit with Jeff. It
, 2006, at 5:36 PM, Josh Hursey wrote:
Here you go:
[mpiteam@odin ~/mtt]$ ./client/mtt --mpi-get --mpi-install --scratch /
u/mpiteam/mtt-runs/Testing-09-14-2006-17-14-18 --file /u/mpiteam/
local/etc/ompi-iu-odin-core.ini --verbose --print-time --debug | tee
~/mtt.out
Debug is 1, Verbose is 1
, Ethan Mallove wrote:
On Thu, Sep/14/2006 05:20:23PM, Josh Hursey wrote:
Maybe I jumped the gun a bit, but I just updated and tried to run mtt
and get the following error message when I run:
Reading ini file: /u/mpiteam/local/etc/ompi-iu-odin-core.ini
*** WARNING: Could not read INI file:
/u
Josh Hursey
jjhur...@open-mpi.org
http://www.open-mpi.org/
On Sep 8, 2006, at 4:25 PM, Jeff Squyres wrote:
On 9/8/06 2:02 PM, "Josh Hursey" wrote:
Is there a way to separate the Compile Phase from the Testing
Phase? I
thought there was but it's not obvious to me how to do that.
Yes. You can specify various switches on the mtt co
Is there a way to separate the Compile Phase from the Testing Phase? I
thought there was but it's not obvious to me how to do that.
Say I want to build 2 branches (trunk, v1.2) on an allocation of 1 node.
Once that is complete then I want to run the tests on an allocation of N
nodes.
How do I tel
On Sep 5, 2006, at 8:43 AM, Jeff Squyres wrote:
IU / HLRS -- How's it going? I see a bunch of submits from you
guys in the
database, so I assume things are running smoothly. But let me know
one way
or another.
IU is setup and running on our Odin cluster.
Nightly we run 8 nodes (16 proc
?
How could that fix your hanging? The code I'm talking about in
MTT is the
part that drops those files. We don't actually *use* those files
in MTT
anywhere -- they're solely for humans to use after the fact...
If you're suddenly running properly, I'm su
, 2006, at 2:30 PM, Jeff Squyres wrote:
Bah!
This is the result of perl expanding $? to 0 -- it seems that I
need to
escape $? so that it's not output as 0.
Sorry about that!
So is this just for the sourcing files, or for your overall (hanging)
problems?
On 8/30/06 2:28 PM, "J
ks for all your help, I'm sure I'll have more questions in the
near future.
Cheers,
Josh
On Aug 30, 2006, at 12:31 PM, Jeff Squyres wrote:
On 8/30/06 12:10 PM, "Josh Hursey" wrote:
MTT directly sets environment variables in its own environment (via
$ENV{whatever} = "
put.
As for setting the values on *remote* nodes, we do it solely via the
--prefix option. I wonder if --prefix is broken under SLURM...?
That might
be something to check -- you might be inadvertently mixing
installations of
OMPI...?
Yep I'll check it out.
Cheers,
Josh
On 8/30/06 10
these when it runs
those tests (solely via --prefix)?
Cheers,
josh
On Aug 30, 2006, at 10:25 AM, Josh Hursey wrote:
I already tried that. However I'm trying it in a couple different
ways and getting some mixed results. Let me formulate the error cases
and get back to you.
Cheers,
Josh
? Just run
the command
manually in batch mode and see if it works. If that works, then the
problem
is with MTT. Otherwise, we have a problem with notification.
Or are you saying that you have already done this?
Ralph
On 8/30/06 8:03 AM, "Josh Hursey" wrote:
yet another point (sorry
not getting
properly notified of the completion of the job. :(
I'll try to investigate a bit further today. Any thoughts on what
might be causing this?
Cheers,
Josh
On Aug 30, 2006, at 9:54 AM, Josh Hursey wrote:
forgot this bit in my mail. With the mpirun just hanging out there I
att
=0x509600) at ../../../opal/threads/condition.h:81
#6 0x00402f52 in orterun (argc=9, argv=0x7fb0b8) at
orterun.c:444
#7 0x004028a3 in main (argc=9, argv=0x7fb0b8) at main.c:13
Seems that mpirun is waiting for things to complete :/
On Aug 30, 2006, at 9:53 AM, Josh Hursey
On Aug 30, 2006, at 7:19 AM, Jeff Squyres wrote:
On 8/29/06 8:57 PM, "Josh Hursey" wrote:
Does this apply to *all* tests, or only some of the tests (like
allgather)?
All of the tests: Trivial and ibm. They all timeout :(
Blah. The trivial tests are simply "hello world"
On Aug 29, 2006, at 6:57 PM, Jeff Squyres wrote:
On 8/29/06 1:55 PM, "Josh Hursey" wrote:
So I'm having trouble getting tests to complete without timing out in
MTT. It seems that the tests timeout and hang in MTT, but complete
normally outside of MTT.
Does this apply to
Hey all,
So I'm having trouble getting tests to complete without timing out in
MTT. It seems that the tests timeout and hang in MTT, but complete
normally outside of MTT.
Here are some details:
Build:
Open MPI Trunk (1.3a1r11481)
Tests:
Trivial
ibm
BTL:
tcp
self
Nodes/processes