Good luck with whatever you are doing next Saverio, you've been a great
asset to the community and will be missed!
On Thu, 6 Sep 2018 at 23:43, Saverio Proto wrote:
> Hello,
>
> I will be leaving this mailing list in a few days.
>
> I am going to a new job and I will not be involved with
e to create some type of
> matrix?
>
> On Wed, Jun 13, 2018 at 8:18 AM, Blair Bethwaite <
> blair.bethwa...@gmail.com> wrote:
>
>> Hi Jay,
>>
>> Ha, I'm sure there's some wisdom hidden behind the trolling here?
>>
>> Believe me, I have tried to push the
Lol! Ok, forgive me, I wasn't sure if I had regular or existential Jay on
the line :-).
On Thu., 14 Jun. 2018, 00:24 Jay Pipes, wrote:
> On 06/13/2018 10:18 AM, Blair Bethwaite wrote:
> > Hi Jay,
> >
> > Ha, I'm sure there's some wisdom hidden behind the trolling here?
, 00:03 Jay Pipes, wrote:
> On 06/13/2018 09:58 AM, Blair Bethwaite wrote:
> > Hi all,
> >
> > Wondering if anyone can share experience with architecting Nova KVM
> > boxes for large capacity high-performance storage? We have some
> > particular use-cases t
Hi all,
Wondering if anyone can share experience with architecting Nova KVM boxes
for large capacity high-performance storage? We have some particular
use-cases that want both high-IOPs and large capacity local storage.
In the past we have used bcache with an SSD-based RAID0 write-through
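A minimal sketch of that kind of setup, for anyone wanting to experiment
(device names are placeholders, not our actual config):

  make-bcache -C /dev/md0      # cache set on the SSD RAID0
  make-bcache -B /dev/sdb      # backing device on the large/slow disk
  # attach backing to cache (cset UUID from 'bcache-super-show /dev/md0')
  echo <cset-uuid> > /sys/block/bcache0/bcache/attach
  # write-through mode: a cache failure cannot lose acknowledged writes
  echo writethrough > /sys/block/bcache0/bcache/cache_mode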
Hi Jon,
Following up to the question you asked during the HPC on OpenStack
panel at the summit yesterday...
You might have already seen Daniel Berrange's blog on this topic:
Hi all,
Reminder there's a Scientific SIG meeting coming up in about 6.5
hours. All comers welcome.
(https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_March_20th_2018)
IRC Meeting March 20th 2018
2018-03-20 2100 UTC in channel #openstack-meeting
# Forum brainstorming
the source of a firmware write, or if it's just
something that NVIDIA's own drivers check by reading the firmware ROM.
On Tue., 20 Mar. 2018, 17:47 Blair Bethwaite, <blair.bethwa...@monash.edu>
wrote:
> Hi all,
>
> This has turned into a bit of a screed I'm afraid...
>
>
Anyway, hopefully all this is useful in some way. Perhaps if we get enough
customers pressuring NVIDIA SAs to disclose the PCIe security info, it
might get us somewhere on the road to securing passthrough.
Cheers,
--
Blair Bethwaite
Senior HPC Consultant
Monash eResearch Centre
Monash University
Please do not default to deleting it, otherwise someone will eventually be
back here asking why an irate user has just lost data. The better scenario
is that the rebuild will fail (early - before impact to the running
instance) with a quota error.
Cheers,
On Thu., 15 Mar. 2018, 00:46 Matt
Hi all,
Has anyone else tried this combination?
We've set up some new computes with dual Xeon Gold 6150s (18c/36t), so
72 logical cores with hyperthreading. We're trying to launch a Windows
Server 2012 R2 guest (hyperv enlightenments enabled via image
properties os_type=windows, and virtio 141
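For reference, the image property mentioned above is set like so (image
name is a placeholder; on KVM Nova derives the Hyper-V enlightenments
from os_type):

  openstack image set --property os_type=windows ws2012r2-image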
This is starting to veer into magic territory for my level of
understanding so beware... but I believe there are (or could be
depending on your exact hardware) PCI config space considerations.
IIUC each SRIOV VF will have its own PCI BAR. Depending on the window
size required (which may be
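If you want to check this on your own hardware, lspci will show each
device's BARs ("Region" lines) and the bridge/switch topology - the bus
address below is a placeholder:

  lspci -tv                                    # PCIe topology incl. switches
  lspci -vvv -s 0000:81:00.0 | grep -i region  # BAR sizes for a single VF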
+1!
It may also be worth testing a step where Nova & Neutron remain at N-1.
On 20 December 2017 at 04:58, Matt Riedemann wrote:
> During discussion in the TC channel today [1], we got talking about how
> there is a perception that you must upgrade all of the services
Hi all - please note this conversation has been split variously across
-dev and -operators.
One small observation from the discussion so far is that it seems as
though there are two issues being discussed under the one banner:
1) maintain old releases for longer
2) do stable releases less
I missed this session but the discussion strikes a chord as this is
something I've been saying on my user survey every 6 months.
On 11 November 2017 at 09:51, John Dickinson wrote:
> What I heard from ops in the room is that they want (to start) one release a
> year whose branch
Hi again all,
There's still room for one or two more lightning talks in this session
tomorrow. And as has become tradition there will be a prize for the
best talk thanks to Arkady Kanevsky from Dell!
Please sign up and share your stories - we don't bite.
On 18 October 2017 at 08:27, Blair
Hi all,
Today's meeting is cancelled as the usual chairs are en-route to Sydney!
Apologies for the short notice - timezones meant I only just confirmed this.
--
Cheers,
~Blairo
shout out and/or add to
https://etherpad.openstack.org/p/SYD-forum-Ceph-OpenStack-BoF.
Also, hope to see some of the core team there!
Cheers,
On 7 July 2017 at 13:47, Blair Bethwaite <blair.bethwa...@gmail.com> wrote:
> Hi all,
>
> Are there any "official" plans to h
Hi all,
We have an IRC meeting today at 1100 UTC in channel #openstack-meeting.
A light agenda today, mainly looking for input into the SC17 OpenStack
in HPC BOF
(http://sc17.supercomputing.org/presentation/?id=bof208&sess=sess389).
--
Cheers,
~Blairo
Similarly, if you have the capability in your compute gear you could do
SR-IOV and push the problem entirely into the instance (but then you miss
out on Neutron secgroups and have to rely entirely on in-instance
firewalls).
Cheers,
On 25 October 2017 at 01:41, Jeremy Stanley
Hi Saverio,
On 13 October 2017 at 09:05, Saverio Proto wrote:
> I found this link in my browser history:
> https://bugs.launchpad.net/ubuntu/+source/kvm/+bug/1583819
Thanks. Yes, have seen that one too.
> Is it the same messages that you are seeing in Xenial ?
There are a
Hi all,
Once again the Scientific SIG (nee WG) has a dedicated lightning talk
session happening at the Sydney Summit. If you have any interesting
OpenStack + Science and/or HPC stories then please throw your hat in
the ring at:
https://etherpad.openstack.org/p/sydney-scientific-sig-lightning-talks
Hi all,
Has anyone seen guest crashes/freezes associated with KVM unhandled rdmsr
messages in dmesg on the hypervisor?
We have seen these messages before but never with a strong correlation to
guest problems. However over the past couple of weeks this is happening
almost daily with consistent
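One knob worth knowing about here - a workaround rather than a fix, it
makes KVM return zero for unhandled MSR reads instead of injecting a
fault into the guest:

  echo 1 > /sys/module/kvm/parameters/ignore_msrs
  # persist across reboots:
  echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm-ignore-msrs.conf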
Hi all,
We have an IRC meeting today at 1100 UTC in channel #openstack-meeting
We are short a couple of chairs today but would like to start planning
out our picks from the Summit schedule and confirming interest in
presentation slots for our Scientific Lightning Talk session. Plus I
have a
Also CC-ing os-ops as someone else may have encountered this before
and have further/better advice...
On 27 September 2017 at 18:40, Blair Bethwaite
<blair.bethwa...@gmail.com> wrote:
> On 27 September 2017 at 18:14, Stephen Finucane <sfinu...@redhat.com> wrote:
>> What yo
Hi all,
If you happen to have been following along with recent discussions
about introducing OpenStack SIGs then this won't come as a surprise.
PS: the openstack-sig mailing list has been minted - get on it!
The meta-SIG is now looking for existing WGs who wish to convert to
SIGs, see
Hi Stig,
It occurs to me we have not yet had any discussion on the recent WG->SIG
proposal, which includes the [scientific] posse, so adding that to the
agenda too.
Cheers,
On 5 September 2017 at 19:29, Stig Telfer wrote:
> Hello all -
>
> We have a Scientific WG
7 at 07:55, Blair Bethwaite <blair.bethwa...@gmail.com> wrote:
> Hi all,
>
> I've been (very slowly) working on some docs detailing how to setup an
> OpenStack Nova Libvirt+QEMU-KVM deployment to provide GPU-accelerated
> instances. In Boston I hope to chat to some of
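The nova.conf core of a setup like that is roughly as below (Mitaka-era
option names; the NVIDIA vendor ID is real but the product ID and alias
are examples only):

  # compute nodes
  pci_passthrough_whitelist = { "vendor_id": "10de", "product_id": "13f2" }
  # controllers (scheduler/api)
  pci_alias = { "vendor_id": "10de", "product_id": "13f2", "name": "gpu" }

plus a flavor tagged with:

  openstack flavor set --property "pci_passthrough:alias"="gpu:1" g1.small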
Hi Alex,
I just managed to take a half hour to look at this and have a few
questions/comments towards making a plan for how to proceed with
moving the Ops Guide content to the wiki...
1) Need to define wiki location and structure. Curiously at the moment
there is already meta content at
; Stig
>
>
> On 21 Jun 2017, at 09:13, Blair Bethwaite <blair.bethwa...@monash.edu>
> wrote:
>
> Thanks Pierre. That's also my preference.
>
> Just to be clear, today's 0900 UTC meeting (45 mins from now) is going
> ahead at the usual time.
>
> On 21 Jun. 2
There is a not insignificant degree of irony in the fact that this
conversation has splintered so that anyone only reading openstack-operators
and/or user-committee is missing 90% of the picture. Maybe I just need a
new ML management strategy.
I'd like to add a +1 to Sean's suggestion about
Hi Alex,
On 2 June 2017 at 23:13, Alexandra Settle wrote:
> O I like your thinking – I’m a pandoc fan, so, I’d be interested in
> moving this along using any tools to make it easier.
I can't realistically offer much time on this but I would be happy to
help (ad-hoc)
Thanks Pierre. That's also my preference.
Just to be clear, today's 0900 UTC meeting (45 mins from now) is going
ahead at the usual time.
On 21 Jun. 2017 5:21 pm, "Pierre Riteau" <prit...@uchicago.edu> wrote:
Hi Blair,
I strongly prefer 1100 UTC.
Pierre
> On 21 Jun 20
Hi all,
The Scientific-WG's 0900 UTC meeting time (it's the non-US friendly time)
is increasingly difficult for me to make. A couple of meetings back we
discussed changing it and had general agreement. The purpose here is to get
a straw poll of preferences for -2 or +2 to the current time, i.e.,
Hi Alex,
Likewise for option 3. If I recall correctly from the summit session
that was also the main preference in the room?
On 2 June 2017 at 11:15, George Mihaiescu wrote:
> +1 for option 3
>
>
>
> On Jun 1, 2017, at 11:06, Alexandra Settle wrote:
Thanks Jay,
I wonder whether there is an easy-ish way to collect stats about the
sorts of errors deployers see in that catchall, so that when this
comes back around in a release or two there might be some less
anecdotal data available...?
Cheers,
On 24 May 2017 at 06:43, Jay Pipes
On 23 May 2017 at 05:33, Dan Smith wrote:
> Sure, the diaper exception is rescheduled currently. That should
> basically be things like misconfiguration type things. Rescheduling
> papers over those issues, which I don't like, but in the room it surely
> seemed like operators
On 2 May 2017 at 05:50, Jay Pipes wrote:
> Masahito Muroi is currently marked as the moderator, but I will indeed be
> there and happy to assist Masahito in moderating, no problem.
The more the merrier :-).
There is a rather unfortunate clash here with the Scientific-WG BoF
h a temporal aspect to them (i.e.
> allocations in the future).
>
> A separate system (hopefully Blazar) is needed to manage the time-based
> associations to inventories of resources over a period in the future.
>
> Best,
> -jay
>
>>> I'm not sure how the above i
Thanks Rochelle. I encourage everyone to dump thoughts into the
etherpad (https://etherpad.openstack.org/p/BOS-forum-special-hardware
- feel free to garden it as you go!) so we can have some chance of
organising a coherent session. In particular it would be useful to
know what is going to be most
On 29 April 2017 at 01:46, Mike Dorman wrote:
> I don’t disagree with you that the client side choose-a-server-at-random is
> not a great load balancer. (But isn’t this roughly the same thing that
> oslo-messaging does when we give it a list of RMQ servers?) For us it’s
On 28 April 2017 at 21:17, Sean Dague <s...@dague.net> wrote:
> On 04/28/2017 12:50 AM, Blair Bethwaite wrote:
>> We at Nectar are in the same boat as Mike. Our use-case is a little
>> bit more about geo-distributed operations though - our Cells are in
>> different St
We at Nectar are in the same boat as Mike. Our use-case is a little
bit more about geo-distributed operations though - our Cells are in
different States around the country, so the local glance-apis are
particularly important for caching popular images close to the
nova-computes. We consider these
Hi all,
A quick FYI that this Forum session exists:
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18803/special-hardware
(etherpad: https://etherpad.openstack.org/p/BOS-forum-special-hardware).
It would be great to see a good representation from both the Nova
requests for the same
> aggregate).
>
> Is this feasible?
>
> Tim
>
> On 04.04.17, 19:21, "Jay Pipes" <jaypi...@gmail.com> wrote:
>
> On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
> > Hi Jay,
> >
> > On 4 April 2017 at 00:20, Jay
Hi Jay,
On 5 April 2017 at 03:21, Jay Pipes <jaypi...@gmail.com> wrote:
> On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
>> That's something of an oversimplification. A reservation system
>> outside of Nova could manipulate Nova host-aggregates to "cordon off"
Hi Jay,
On 4 April 2017 at 00:20, Jay Pipes wrote:
> However, implementing the above in any useful fashion requires that Blazar
> be placed *above* Nova and essentially that the cloud operator turns off
> access to Nova's POST /servers API call for regular users. Because if
of
resources (volumes, floating IPs, etc.). Software licenses can be
another type.
==
(https://etherpad.openstack.org/p/BOS-UC-brainstorming-scientific-wg)
Cheers,
--
Blair Bethwaite
Senior HPC Consultant
Monash eResearch Centre
Monash University
Room G26, 15 Innovation Walk, Clayton Campus
Clayton
bug:
> http://tracker.ceph.com/issues/19056
>
> is anyone else hitting this ?
>
> Saverio
>
> 2017-03-27 22:11 GMT+02:00 John Dickinson <m...@not.mn>:
> >
> >
> > On 27 Mar 2017, at 4:39, Blair Bethwaite wrote:
> >
> >> Hi all,
> >>
Hi Melvin,
Just to confirm - the Forum will run Monday through Thursday, and
presumably the session scheduling will be flexible to meet the needs of the
leads/facilitators?
Cheers,
b1airo
On 21 Mar. 2017 6:56 am, "Melvin Hillsman" wrote:
> Hey everyone!
>
> We have made
Hi all -
We have a Scientific WG IRC meeting coming up in a few hours (Wednesday at
0900 UTC) in channel #openstack-meeting. All welcome.
The agenda has one simple goal:
Follow-up on and finalise Boston Forum proposals and assign leaders to
submit.
Cheers,
--
Blair Bethwaite
Senior HPC
Hi all,
Does anyone have any recommendations for good tools to perform
file-system/tree backups and restores to/from a (Ceph RGW-based)
object store (Swift or S3 APIs)? Happy to hear about both FOSS and
commercial options please.
I'm interested in:
1) tools known to work or not work at all for a
Could just avoid Glance snapshots and indeed Nova ephemeral storage
altogether by exclusively booting from volume with your ITAR volume type or
AZ. I don't know what other ITAR regulations there might be, but if it's
just what JM mentioned earlier then doing so would let you have ITAR and
non-ITAR
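E.g., roughly like the below - the volume type, AZ, and resource names
are all hypothetical:

  openstack volume create --image win2012r2 --size 100 --type itar \
      --availability-zone itar-az itar-boot-vol
  openstack server create --volume itar-boot-vol --availability-zone itar-az \
      --flavor m1.large itar-vm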
On 22 March 2017 at 13:33, Jonathan Mills wrote:
>
> To what extent is it possible to “lock” a tenant to an availability zone,
> to guarantee that nova scheduler doesn’t land an ITAR VM (and possibly the
> wrong glance/cinder) into a non-ITAR space (and vice versa)…
>
Yes,
Dims, it might be overkill to introduce multi-Keystone + federation (I just
quickly skimmed the PDF so apologies if I have the wrong end of it)?
Jon, you could just have multiple cinder-volume services and backends. We
do this in the Nectar cloud - each site has cinder AZs matching nova AZs.
By
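The per-site piece is just the cinder.conf on each site's cinder-volume
host, something like (names illustrative):

  [DEFAULT]
  storage_availability_zone = site-a    # matches that site's nova AZ
  enabled_backends = site-a-rbd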
Hi Chris,
On 17 Mar. 2017 15:24, "Chris Friesen" <chris.frie...@windriver.com> wrote:
On 03/16/2017 07:06 PM, Blair Bethwaite wrote:
Statement: breaks bin packing / have to match flavor dimensions to hardware
> dimensions.
> Comment: neither of these ring true to me giv
There have been previous proposals (and if memory serves, even some
blueprints) for API extensions to allow this but they have apparently
stagnated. On the face of it I think OpenStack should support this (more
choice = win!) - doesn't mean that every cloud needs to use the feature. Is
it worth
1st_2017
> [2] http://eavesdrop.openstack.org/#Scientific_Working_Group
>
>
--
Blair Bethwaite
Sen
Hi all -
We have a meeting coming up in about 12 hours:
2017-02-15 0900 UTC in channel #openstack-meeting
This is substantively a repeat of last week's agenda for alternate timezones
- Boston Declaration update from Martial
- Hypervisor tuning update from Blair
- Blair's experiences with RoCE
Hi Tim,
We did wonder in last week's meeting whether quota management and nested
project support (particularly which flows are most important) would be a
good session for the Boston Forum...? Would you be willing to lead such a
discussion?
Cheers,
On 19 January 2017 at 19:59, Tim Bell
On 5 January 2017 at 19:47, Rui Chen wrote:
> Ah, Adam, got your point, I found two related Nova blueprints that were
> similar to your idea,
> but there has been no activity on them since 2014; I haven't dived deep
> into these comments,
> you might get some
Hi Adam,
On 5 January 2017 at 08:48, Adam Lawson wrote:
> Just a friendly bump. To clarify, the ideas being tossed around are to host
> QCOW images on each Compute node so the provisioning is faster (i.e. less
> dependency on network connectivity to a shared back-end). I need
Hi Conrad,
On 20 December 2016 at 09:24, Kimball, Conrad wrote:
> · Dedicated instances: an OpenStack tenant can deploy VM instances
> that are guaranteed to not share a compute host with any other tenant (for
> example, as the tenant I want physical
Hi all -
In the fashion of JITS (just in time scheduling), we OpenStack+HPC folk are
planning to converge at P.F. Chang's (https://goo.gl/maps/YN8v26CBctp
- West 300 South) at 8.30pm following on from the technical programme
reception.
Hope to see you there!
Cheers,
Blair
Hi folks,
There's a superuser blog live now detailing OpenStack-related
goings-on at SC this week:
http://superuser.openstack.org/articles/openstack-supercomputing-2016/
Cheers,
--
Blair Bethwaite
Senior HPC Consultant
Monash eResearch Centre
Monash University
Room G26, 15 Innovation Walk
Devil's advocate - what is "full enough"? Surely another channel is
essentially free and having flexibility in available timing is of utmost
importance?
On 8 Nov 2016 5:37 PM, "Tony Breeds" wrote:
> On Mon, Nov 07, 2016 at 05:52:43PM +0100, lebre.adr...@free.fr wrote:
>
Lol! I don't mind - Microsoft do support and produce some pretty good
research, I just wish they'd fix licensing!
On 27 October 2016 at 16:11, Jonathan D. Proulx <j...@csail.mit.edu> wrote:
> On Thu, Oct 27, 2016 at 04:08:26PM +0200, Blair Bethwaite wrote:
> :On 27 October 2016 at 16:
Hi George,
On 27 October 2016 at 16:15, George Mihaiescu wrote:
> Did you try playing with Nova's policy file and limit the scope for
> "compute_extension:console_output": "" ?
No, interesting idea though... I suspect it's actually the
get_*_console policies we'd need to
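For the archives: on a v2.1 deployment that would mean something like the
below in nova's policy.json (the exact rule name varies by release, so
check your installed defaults):

  "os_compute_api:os-remote-consoles": "rule:admin_api"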
On 27 October 2016 at 16:02, Jonathan D. Proulx wrote:
> don't put a getty on the TTY :)
Do you know how to do that with Windows? ...you can see the desire for
sandboxing now :-).
--
Cheers,
~Blairo
that allows reset of the
guest, is not desirable.
On 13 October 2016 at 04:37, Blair Bethwaite <blair.bethwa...@gmail.com>
wrote:
> Hi all,
>
> Does anyone know whether there is a way to disable the novnc console on a
> per instance basis?
>
> Cheers,
> Bla
or your institution have been implementing some
> bright ideas that take OpenStack into new territory for research computing
> use cases, lets hear it!
>
> Please follow up to me and Blair (Scientific WG co-chairs) if you’re
> interested in speaking and would like to bag a slot.
>
Hi Adam,
I agree somewhat, capacity management and growth at scale is something
of a pain. Ceph gives you a hugely powerful and flexible way to manage
data-placement through crush but there is very little quality info
about, or examples of, non-naive crushmap configurations.
I think I understand
Hi all,
Does anyone know whether there is a way to disable the novnc console on a
per instance basis?
Cheers,
Blair
> SOC = Path traverses a socket-level link (e.g. QPI)
> PHB = Path traverses a PCIe host bridge
> PXB = Path traverses multiple PCIe internal switches
> PIX = Path traverses a PCIe internal switch
>
>
> Cheers,
> Andrew
>
>
> Andrew J. Younge
> School of Informa
s session or two. Also I think it could be discussed in the
> Nova section. As a stretch, we could cover on lightning talks. There is also
> the Friday work sessions. So I think plenty of options. We also have at
> least three placeholder sessions.
>
>
> On Oct 4, 2016 11:28 PM,
Hi all,
I've just had a look at this with a view to adding 10 minutes
somewhere on what to do with the hypervisor tuning guide, but I see
the free-form notes on the etherpad have been marked as "Old", so
figure it's better to discuss here first... Could maybe fit under
ops-nova or ops-hardware?
Nice! But I'm curious, why the need to migrate?
On 5 October 2016 at 13:29, Xav Paice wrote:
> On Wed, 2016-10-05 at 13:28 +1300, Xav Paice wrote:
>> On Tue, 2016-10-04 at 17:48 -0600, Curtis wrote:
>> > Maybe you have someone on staff who loves writing lua (for haproxy)? :)
sample fails when
checking their ability to communicate with each other. Is there some
magic config I might be missing, did you need to make any PCI-ACS
changes?
Best regards,
Blair
On 16 March 2016 at 07:57, Blair Bethwaite <blair.bethwa...@gmail.com> wrote:
>
> Hi Andrew,
>
> On 1
> going with this that some of the science clouds share some of the
> attributes above ?
>
> Matt
>
> On 22 September 2016 at 00:40, Blair Bethwaite <blair.bethwa...@gmail.com>
> wrote:
>
>> Hi Matt,
>>
>> At considerable risk of heading down a rabbit ho
Hi Matt,
At considerable risk of heading down a rabbit hole... how are you defining
"public" cloud for these purposes?
Cheers,
Blair
On 21 September 2016 at 18:14, Matt Jarvis
wrote:
> Given there are quite a few public cloud operators in Europe now, is there
>
Following on from Edmund's issues... People talking about doing this
typically seem to cite cgroups as the way to avoid CPU and memory
related contention - has anyone been successful in e.g. setting up
cgroups on a nova qemu+kvm hypervisor to limit how much of the machine
nova uses?
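On a systemd host one hedged shortcut is to cap machine.slice (where
libvirt already parents its guests) rather than hand-rolling cgroups -
values below are examples only:

  systemctl set-property machine.slice CPUQuota=6000% MemoryLimit=220G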
On 1
Hi Stig,
When you say IB are you specifically talking about link-layer, or more the
RDMA capability and IB semantics supported by the drivers and APIs (so both
native IB and RoCE)?
Cheers,
On 17 Aug 2016 2:28 AM, "Stig Telfer" wrote:
> Hi All -
>
> I’m looking for
We discussed Blazar fairly extensively in a couple of recent
scientific-wg meetings. I'm having trouble searching out the right irc
log to support this but IIRC the problem with Blazar as is for the
typical virtualised cloud (non-Ironic) use-case is that it uses an
old/deprecated Nova API
On 4 August 2016 at 12:48, Sam Morrison wrote:
>
>> On 4 Aug 2016, at 3:12 AM, Kris G. Lindgren wrote:
>>
>> We do something similar. We give everyone in the company an account on the
>> internal cloud. By default they have a user- project. We have
On 1 August 2016 at 13:30, Marcus Furlong wrote:
> Looks like there is a bug open which suggests that it should be using
> RPC calls, rather than commands executed over ssh:
>
> https://bugs.launchpad.net/nova/+bug/1459782
I agree, no operator in their right mind wants to
Sounds like a recipe for confusion?
On 1 August 2016 at 10:23, Steven Dake (stdake) wrote:
>
>
> On 7/31/16, 7:13 AM, "Jay Pipes" wrote:
>
>>On 07/29/2016 11:35 PM, Steven Dake (stdake) wrote:
>>> Hey folks,
>>>
>>> In Kolla we have a significant bug in
ng whether anyone else has been down this path yet?
Cheers,
On 20 July 2016 at 12:57, Blair Bethwaite <blair.bethwa...@gmail.com> wrote:
> Thanks for the confirmation Joe!
>
> On 20 July 2016 at 12:19, Joe Topjian <j...@topjian.net> wrote:
>> Hi Blair,
>>
>>
le currently being replicated). Since 2.7, Swift takes care of that:
> https://github.com/openstack/swift/blob/master/CHANGELOG#L226
>
>
>
> Le Mercredi 20 Juillet 2016 10:17 CEST, Blair Bethwaite
> <blair.bethwa...@gmail.com> a écrit:
>
>> Hi all,
>>
>> As
Hi all,
As per the subject, wondering where these files come from, e.g.:
root@stor010:/srv/node/sdc1/objects# ls -la
./109794/359/6b389b24749b7046344ffd2a42aab359
total 1195784
drwxr-xr-x 2 swift swift 4096 Jun 8 04:11 .
drwxr-xr-x 3 swift swift 53 May 22 05:05 ..
-rw--- 1
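If anyone wants to poke at the same thing, swift ships a tool for
interrogating these on-disk objects (the *.data filename below is a
placeholder for whatever is inside that hash dir):

  swift-object-info /srv/node/sdc1/objects/109794/359/6b389b24749b7046344ffd2a42aab359/<timestamp>.data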
w and have had
> several users report success.
>
> Thanks,
> Joe
>
> On Tue, Jul 19, 2016 at 5:06 PM, Blair Bethwaite <blair.bethwa...@gmail.com>
> wrote:
>>
>> Hilariously (or not!) we finally hit the same issue last week once
>> folks actually started trying
Thu, Jul 07, 2016 at 11:13:29AM +1000, Blair Bethwaite wrote:
> :Jon,
> :
> :Awesome, thanks for sharing. We've just run into an issue with SRIOV
> :VF passthrough that sounds like it might be the same problem (device
> :disappearing after a reboot), but haven't yet investigated de
On 30 June 2016 at 05:17, Gustavo Randich wrote:
>
> - other?
FWIW, the other approach that might be suitable (depending on your
project/tenant isolation requirements) is simply using a flat provider
network (or networks, i.e., VLAN per project) within your existing
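Roughly like so, with newer openstack CLI syntax - physnet label and
address range are placeholders:

  openstack network create --share --provider-network-type flat \
      --provider-physical-network physnet1 flat-net
  openstack subnet create --network flat-net --subnet-range 192.0.2.0/24 \
      flat-subnet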
Hi all,
Scientific-WG regular meeting is on soon, draft agenda below and at
https://wiki.openstack.org/wiki/Scientific_working_group.
2016-07-12 2100 UTC in channel #openstack-meeting
# Review of Activity Areas and opportunities for progress
## Bare metal
### Networking
Hi all,
Just pondering summit talk submissions and wondering if anyone else
out there is interested in participating in an HPFS panel session...?
Assuming we have at least one person already who can cover direct
mounting of Lustre into OpenStack guests then it'd be nice to find
folks who have
Jon,
Awesome, thanks for sharing. We've just run into an issue with SRIOV
VF passthrough that sounds like it might be the same problem (device
disappearing after a reboot), but haven't yet investigated deeply -
this will help with somewhere to start!
By the way, the nouveau mention was because
Hi Jon,
Do you have the nouveau driver/module loaded in the host by any
chance? If so, blacklist, reboot, repeat.
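i.e., something like (Ubuntu shown; on RHEL rebuild the initramfs with
'dracut -f' instead):

  printf "blacklist nouveau\noptions nouveau modeset=0\n" \
      > /etc/modprobe.d/blacklist-nouveau.conf
  update-initramfs -u && reboot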
Whilst we're talking about this. Has anyone had any luck doing this
with hosts having a PCI-e switch across multiple GPUs?
Cheers,
On 6 July 2016 at 23:27, Jonathan D. Proulx
Hi Álvaro, hi David -
NB: adding os-ops.
David, we have some real-time Lustre war stories we can share and
hopefully provide some positive conclusions to come Barcelona. I've
given an overview of what we're doing below. Are there any specifics
you were interested in when you raised Lustre in the
>>> Wellcome Trust Genome Campus, Hinxton, Cambridge, CB10 1SD, UK
>>> Email: da...@ebi.ac.uk
>>>
>>>> On 28 Jun 2016, at 13:42, <alexander.di...@stfc.ac.uk>
>>>> <alexander.di...@stfc.ac.uk> wrote:
>>>>
>>>> 0900 would work bette
Hi Roland -
GUTS looks cool! But I took Michael's question to be more about
control plane data than end-user instances etc...?
Michael - If that's the case then you probably want to start with
dumping your present Juno DBs, importing into your Mitaka test DB and
then attempting the migrations to