just a bump.
can anyone offer any advice on this cinder driver
cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver?
thanks!
-- Jim
On Thu, Oct 11, 2018 at 4:08 PM Jim Okken wrote:
> hi All,
>
> not sure if I can find an answer here to this specific situation with the
> cinder ba
hi All,
not sure if I can find an answer here to this specific situation with the
cinder backend driver cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver.
If not, how can I get in touch with someone more familiar with
cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
we have an HP MSA storage a
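For anyone who finds this thread later: a minimal cinder.conf backend sketch for this driver, assuming a typical Newton setup. The IP, credentials, and pool name below are placeholders, not values from this thread.

```ini
# Hedged sketch of a cinder.conf backend stanza for the HP MSA FC driver.
# san_ip / san_login / san_password and the pool name are placeholders.
[DEFAULT]
enabled_backends = msa-fc

[msa-fc]
volume_driver = cinder.volume.drivers.san.hp.hpmsa_fc.HPMSAFCDriver
volume_backend_name = msa-fc
san_ip = 192.0.2.10       ; MSA management controller IP (placeholder)
san_login = manage        ; array credentials (placeholders)
san_password = secret
hpmsa_backend_name = A    ; pool/vdisk to carve volumes from (assumed name)
```

After restarting cinder-volume, a volume type tied to `volume_backend_name=msa-fc` should schedule volumes onto the array.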
> 2. If UI, then are you deploying only one node after selection?
>
>
> Regards
> Jitendra Bhaskar
>
> On Tue, May 1, 2018 at 12:21 PM, Jim Okken wrote:
>
>> Hi list,
>>
>>
>>
Hi list,
We’ve created a pretty large OpenStack Newton HA environment using Fuel.
After initial hiccups with deployment (not all of them Fuel's fault) we can
now add additional compute nodes to the environment with ease!
Thank you to all who’ve worked on all the projects that make this product.
My q
compute logs.
>
> Check that your clock is in sync with NTP, or you might find that the time
> since each service's last heartbeat in the database exceeds the
> service_down_time config value.
>
> On 12/19/2017 12:09 AM, Jim Okken wrote:
>
> hi list,
>
> hoping someone could shed some light
hi list,
hoping someone could shed some light on this issue I just started seeing
today
all my compute nodes started showing as "Down" in the Horizon ->
Hypervisors -> Compute Nodes tab
root@node-1:~# nova service-list
(service-list table truncated)
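The clock-sync advice above can be sanity-checked mechanically. Here is a hedged sketch comparing two node clocks against nova's service_down_time (default 60 seconds); the timestamps are plain arguments here, where on a live cloud they would come from `date +%s` on the controller and the compute node:

```bash
#!/usr/bin/env bash
# Hedged sketch: compare a compute node's clock against a reference and flag
# drift large enough to trip nova's service_down_time (default 60s).
check_drift() {
  local ref_ts=$1 node_ts=$2 threshold=${3:-60}
  local drift=$(( ref_ts > node_ts ? ref_ts - node_ts : node_ts - ref_ts ))
  if [ "$drift" -gt "$threshold" ]; then
    echo "DRIFT ${drift}s > ${threshold}s -- services may show as down"
    return 1
  fi
  echo "OK (${drift}s)"
}
```

On a live node, `timedatectl status` or `ntpq -p` shows whether NTP is actually synchronized.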
o 4.4.0-98 and test. Does
> anyone think this kernel change could break OpenStack?
>
> In the kernel change log I found a fix for a specific HP server in
> 4.4.0-98 (not the same as our server but somewhat similar)
>
> thanks!
>
> -- Jim
>
> On Mon, Oct 23, 2017 at 10:25
then the provisioning and
deployment went perfectly
thanks
-- Jim
On Thu, Sep 28, 2017 at 5:02 PM, Jim Okken wrote:
> I ran "fuel2 node update -H blade13 20" just to get out of the node-*
> naming convention, as someone suggested
>
>
>
> The deploy still names the node node-11 an
e log I found a fix for a specific HP server in 4.4.0-98
(not the same as our server but somewhat similar)
thanks!
-- Jim
On Mon, Oct 23, 2017 at 10:25 PM, Jim Okken wrote:
> = UPDATE 10/23 ==
>
> we have been trying different things to get better debug output; we disabled
> ra
20T19:00:19.698117+00:00 node-90 kernel: [97583.653203]
[] ret_from_fork+0x3f/0x70
2017-10-20T19:00:19.698118+00:00 node-90 kernel: [97583.653204]
[] ? kthread_create_on_node+0x1e0/0x1e0
2017-10-20T19:00:19.698123+00:00 node-90 kernel: [97583.653206] ---[ end
trace d7e73079b38e57b4 ]---
-- J
On Tue, Sep 26, 2017 at 12:00 PM, Jim Okken wrote:
> also I should add, I don't have the original hard drives in the system, so
> it isn't because it is booting the old OS where these node names were set.
> This is definitely the newly installed OS being given the wrong hostname.
>
d and find
where these old node names are being saved?
thanks!
-- Jim
On Mon, Sep 25, 2017 at 6:03 PM, Jim Okken wrote:
> hi all,
>
> I am using Fuel 10.
>
> I have 2 nodes I am trying to deploy as compute nodes. At one time in the
> past I was attempting to deploy them too. I
hi all,
I am using Fuel 10.
I have 2 nodes I am trying to deploy as compute nodes. At one time in the
past I was attempting to deploy them too. I assume back then their node
names were node-11 and node-20.
They were never successfully deployed, and now I've worked out their
hardware issues and are
Hi all,
In Danube disk provisioning for a compute node, the smallest disk/partition
size for the base system is 54GB.
After I deploy a compute node I see 44GB free of the 54GB, so it seems
something smaller than 54GB could be used.
Can I somehow change the setting for the smallest disk/part
ddie Yen wrote:
> Hi
>
> Can you describe your disk configuration and partitioning?
>
> 2017-09-02 4:57 GMT+08:00 Jim Okken :
>
>> Hi all,
>>
>>
>>
>> Can you offer any insight into this failure I get when deploying 2 compute
>> nodes
Hi all,
Can you offer any insight into this failure I get when deploying 2 compute
nodes using Fuel 10, please? (controller etc. nodes are all deployed/working)
fuel_agent.cmd.agent PartitionNotFoundError: Partition
/dev/mapper/3600c0ff0001ea00f521fa4590100-part2 not found after
creation fuel
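On multipath devices, the kernel can be slow to surface freshly created partition nodes, which is one common way to hit a PartitionNotFoundError like the one above. A hedged sketch of the kind of settle-and-retry loop that helps (the device path is illustrative; on a real node you would also run `partprobe` on the disk and `udevadm settle` after partitioning so the kernel re-reads the partition table):

```bash
#!/usr/bin/env bash
# Hedged sketch: poll for a partition node to appear after it is created.
# On a real node, pair this with `partprobe <disk>` and `udevadm settle`.
wait_for_device() {
  local dev=$1 tries=${2:-10} delay=${3:-1}
  local i=0
  while [ "$i" -lt "$tries" ]; do
    if [ -e "$dev" ]; then
      echo "found $dev"
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  echo "missing $dev after $tries tries"
  return 1
}
```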
orm double duty (i.e. ‘hyperconverged’)
>
> Hopefully this gives you a little bit of information regarding how Ceph is
> used.
>
>
> Mike Smith
> Lead Cloud System Architect
> Overstock.com
>
>
>
> On Aug 24, 2017, at 9:22 PM, Jim Okken wrote:
>
> I've been lea
rs and
onto the storage node? (aka: move ephemeral from local to over the network?)
Thanks
--jim
On Thu, Aug 24, 2017 at 12:14 PM, Jim Okken wrote:
> Hi all,
>
>
> We have a pretty complicated storage setup and I am not sure how to
> configure Fuel for deployment of the st
Hi all,
We have a pretty complicated storage setup and I am not sure how to
configure Fuel for deployment of the storage nodes. I'm using Fuel
10/Newton. Plus I'm a bit confused on some of the storage aspects
(image/glance, volume/cinder, ephemeral/?.)
We have 3 nodes dedicated to be storage no