So my question is: is this normal? And what happens when I migrate the server to
another host with another storage, so that there is another path to the image cache?
We are still using Mitaka at the moment; maybe this has already changed in a newer
version?
Thank you :)
Kind regards,
Michael
Michael Stang
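For reference: assuming the question is about the libvirt image cache on the
compute nodes (the start of the mail is cut off, so this is only a guess), that
cache lives in the _base directory under instances_path and is managed by these
nova.conf options; the values shown are just the defaults, not a recommendation:

[DEFAULT]
instances_path = /var/lib/nova/instances
image_cache_manager_interval = 2400
remove_unused_base_images = True
remove_unused_original_minimum_age_seconds = 86400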
Have a nice weekend.
Kind regards,
Michael
> Tom Fifield wrote on 30 September 2016 at 09:52:
>
>
> On 30/09/16 14:06, Michael Stang wrote:
> > Hello all,
> >
> > I have a question: is it possible to connect 2 OpenStack clouds to use
> > the resource
/eventr.geant.org/events/2527
>
> Cheers,
>
> Saverio
>
>
>
> 2016-09-30 8:06 GMT+02:00 Michael Stang :
> > Hello all,
> >
> > I have a question: is it possible to connect 2 OpenStack clouds to use the
> > resources together?
> >
Hello all,
I have a question: is it possible to connect 2 OpenStack clouds to use the
resources together?
We have a Mitaka installation at our site and our colleagues also have a Mitaka
installation at their site; both are independent at the moment. We want to
create a site-to-site
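A lightweight first step, independent of any real federation, is to drive both
clouds from one workstation with os-client-config; the clouds.yaml sketch below
uses made-up hostnames and credentials, so treat it only as an illustration:

# ~/.config/openstack/clouds.yaml (hypothetical endpoints and credentials)
clouds:
  site-a:
    region_name: RegionOne
    auth:
      auth_url: http://controller-a:5000/v3
      username: demo
      password: DEMO_PASS
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
  site-b:
    region_name: RegionOne
    auth:
      auth_url: http://controller-b:5000/v3
      username: demo
      password: DEMO_PASS
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default

# then pick the cloud per command:
openstack --os-cloud site-a server list
openstack --os-cloud site-b server list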
> virsh managedsave is not something that is used in
> live snapshot according to that (I haven't checked the source code myself)
>
>
>
> BR,
>
> Konstantin
>
> [1] https://bugs.launchpad.net/nova/+bug/1334398
>
>
>
>
>
> From: Michael
> thank you
>
> Saverio
>
>
> 2016-08-25 8:48 GMT+02:00 Michael Stang <michael.st...@dhbw-mannheim.de>:
> > Hi Konstantin, hi Saverio,
> >
> > thank you for your answers.
> >
> > I checked the version, thes
> In case it is not about that, then I would try to do it manually using
> something like [2] as guideline to see if it succeeds using Libvirt/QEMU
> without Nova.
>
> BR,
> Konstantin
> [1]
> http://docs.openstack.org/liberty/config-reference/content/list-of-compute-config-options.html
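Since the thread is about live snapshots failing under Mitaka (bug 1334398 above),
a manual test along the lines Konstantin suggests could look like the sketch
below; the instance name and overlay path are made up:

# find the domain and its disk
virsh list --all
virsh dumpxml instance-0000001a | grep 'source file'
# try a live, external, disk-only snapshot, roughly what Nova does internally
virsh snapshot-create-as instance-0000001a testsnap \
    --disk-only --atomic --no-metadata \
    --diskspec vda,snapshot=external,file=/var/lib/nova/instances/testsnap.qcow2

If that also fails, the problem is on the libvirt/QEMU side rather than in Nova.
There is also a related nova.conf option on the compute nodes,
[workarounds] disable_libvirt_livesnapshot, which forces cold snapshots as a
fallback for exactly this bug.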
installation of Mitaka from our
colleagues.
Is this behaviour normal in Mitaka or is this maybe a bug? Because in Juno we
could do snapshots from running instances without problems.
Regards,
Michael
Michael Stang
Laboringenieur, Dipl. Inf. (FH)
Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim
Kind regards,
Michael
> Michael Stang wrote on 26 July 2016 at 18:28:
>
> Hi all,
>
> we got a strange problem on our new mitaka installation. We have this
> messages in the syslog on the block storage node:
>
>
> Jul 25 09:10:33 block1 tgtd: device_mgmt
Hi all,
we got a strange problem on our new mitaka installation. We have this messages
in the syslog on the block storage node:
Jul 25 09:10:33 block1 tgtd: device_mgmt(246) sz:69
params:path=/dev/cinder-volumes/volume-41d6c674-1d0d-471d-ad7d-07e9fab5c90d
Jul 25 09:10:33 block1 tgtd: bs_thread
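The tgtd lines above are cut off, but as a first diagnostic step the standard
tgt/LVM tools can be used; the volume group name below is taken from the log line:

# does the backing logical volume actually exist?
lvs cinder-volumes
# which targets and LUNs does tgtd currently export?
tgtadm --lld iscsi --mode target --op show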
> The config options solved the problem. Thank you, I really
> appreciated it.
>
> What roles do the users have? The ones you are trying to delete the images
> from glance?
>
> Regards,
>
> Lucas.-
>
> 2016-07-20 3:01 GMT-03:00 Michael Stang:
> +----+---------+---------+----------------+
> | ID | Name    | Enabled | Description    |
> +----+---------+---------+----------------+
> |    | default | True    | Default Domain |
> +----+---------+---------+----------------+
>
> Regarding your issue, do you have the Proxy Server listening on the 8080
> port on th
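Two quick checks for that; the hostname is a placeholder and the /healthcheck
URL assumes the healthcheck middleware is in the proxy pipeline, as in the
install guide:

# the object-store endpoints registered in keystone
openstack endpoint list --service object-store
# does the proxy answer on port 8080?
curl -i http://controller:8080/healthcheck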
Kind regards,
Michael
> Michael Stang wrote on 18 July 2016 at 08:44:
>
> Hi Sam,
>
> thank you for your answer.
>
> I had a look; the swift store endpoint is listed 3 times in keystone:
> publicurl, admin and internal endpoint. To t
> > > On 15 Jul 2016, at 9:07 PM, Michael Stang
> > > <michael.st...@dhbw-mannheim.de> wrote:
> >
> > Hi everyone,
> >
> > I tried to set up swift as backend for glance in our new mi
Hi everyone,
I tried to set up swift as backend for glance in our new Mitaka installation. I
used this in the glance-api.conf:
[glance_store]
stores = swift
default_store = swift
swift_store_create_container_on_put = True
swift_store_region = RegionOne
default_swift_reference = ref1
swift_store_
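The snippet is cut off, but since it sets default_swift_reference = ref1, the
Mitaka-style setup usually also points swift_store_config_file at a separate
file that defines that reference; the sketch below uses placeholder credentials
and endpoints:

# in glance-api.conf, [glance_store]
swift_store_config_file = /etc/glance/glance-swift-store.conf

# /etc/glance/glance-swift-store.conf
[ref1]
user = services:glance
key = GLANCE_PASS
auth_address = http://controller:5000/v3
auth_version = 3
user_domain_id = default
project_domain_id = default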
problems... strange...
Another question has arisen: in our old Juno installation we had swift as
backend for glance, but the old method is not working in Mitaka anymore. Does
anyone know of a guide for setting up swift as glance backend in Mitaka?
Thanks and kind regards,
Michael
> Michael Stang
Hi all,
I set up a new OpenStack environment with Mitaka and got to the point where
I should be able to start an instance. When I do this the instance starts and
shows as running without error, but when I open the console I see this:
Booting from Hard Disk...
Boot failed: not a bootable disk
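"Boot failed: not a bootable disk" often means the image content and the
disk_format it was uploaded with do not match (or the image is empty). A quick
way to check, with a hypothetical file name:

# what is the file really?
qemu-img info trusty-server-cloudimg-amd64-disk1.img
# re-upload with the matching format
openstack image create "ubuntu-trusty" \
    --file trusty-server-cloudimg-amd64-disk1.img \
    --disk-format qcow2 --container-format bare --public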
> You can change this behavior in the /etc/cloud/cloud.cfg config file.
>
> I do not actually know what process invokes cloud-init.
>
> I hope the above helps.
>
> Regards,
> Hamza
>
> On 4 July 2016 at 08:31, Michael Stang <michael.st...@dhbw-mannheim.de> wrote:
> > Hi Hamza,
> You can change this behavior in the /etc/cloud/cloud.cfg config file.
>
> Regards,
> Hamza
>
> On 1 July 2016 at 07:02, Michael Stang <michael.st...@dhbw-mannheim.de> wrote:
> > Hi all,
> >
> > I tried to copy instances (based
Hi all,
I tried to copy instances (based on the Ubuntu cloud image) from our production
cloud to our test cloud according to this description
http://docs.openstack.org/user-guide/cli_use_snapshots_to_migrate_instances.html
but when I start the instance on the test cloud the root password is
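A sketch of the /etc/cloud/cloud.cfg settings Hamza is presumably referring to;
in the stock Ubuntu cloud image the defaults lock root and password logins, so
the values below only illustrate what to look at, not what to set:

disable_root: false      # default true: root login via SSH is disabled
ssh_pwauth: true         # allow password authentication over SSH
chpasswd:
  expire: false
system_info:
  default_user:
    name: ubuntu
    lock_passwd: false   # default true: the account password is locked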
> >
> > It's a work in progress, but we're always happy to accept input.
> >
> > Hope this helps, feel free to contact me if you need anything.
> >
> > Roland
> >
> >
> >
> > On 28 June 2016 at 16:07, Michael Stang
> > wrote:
> It's a work in progress, but we're always happy to accept input.
>
> Hope this helps, feel free to contact me if you need anything.
>
> Roland
>
>
>
> On 28 June 2016 at 16:07, Michael Stang <michael.st...@dhbw-mannheim.de> wrote:
> >
> >
Hello all,
we set up a small test environment of Mitaka to learn about the installation and
the new features. Before we try the upgrade of our Juno production environment
we want to migrate all its data to the Mitaka installation as a backup and
also to run tests.
Is there an easy way to
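For the image part of such a migration, a simple (if manual) route is to
download the images from the Juno cloud and re-upload them to Mitaka; names and
file names below are placeholders:

# on a client pointed at the Juno cloud
openstack image save --file myimage.qcow2 <image-id>
# on a client pointed at the Mitaka cloud
openstack image create "myimage" --file myimage.qcow2 \
    --disk-format qcow2 --container-format bare

Instances themselves can be moved with the snapshot-based procedure from the
OpenStack user guide (cli_use_snapshots_to_migrate_instances).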
To: Michael Stang
Cc: Matt Jarvis; OpenStack Operators
Subject: Re: [Openstack-operators] Shared Storage for compute nodes
On Tue, Jun 21, 2016 at 11:42:45AM +0200, Michael Stang wrote:
:I think I did not phrase my question correctly; it is not about the Cinder
:backend, I meant the shared storage for the
Hello Saverio,
many thanks, I will have a look :-)
Regards,
Michael
> Saverio Proto wrote on 21 June 2016 at 12:41:
>
> Hello Michael,
>
> have a look at OpenStack Manila and CephFS
>
> Cheers
>
> Saverio
>
>
> 2016
storage backend for Cinder.
>
> On 21 June 2016 at 08:27, Michael Stang <michael.st...@dhbw-mannheim.de> wrote:
> > Hi,
> >
> > I wonder what is the recommendation for a shared storage for the compute
> > nodes? At the moment we are using
> > You find more options here under Volume drivers:
> > http://docs.openstack.org/liberty/config-reference/content/section_volume-drivers.html
> >
> > Saverio
> >
> >
> > 2016-06-21 9:27 GMT+02:00 Michael Stang :
> >> Hi,
> >>
> >> I wonder what is th
g-reference/content/ceph-rados.html
> http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>
> you find more options here under Volume drivers:
> http://docs.openstack.org/liberty/config-reference/content/section_volume-drivers.html
>
> Saverio
>
>
> 2016-06-21 9:27 GMT+02:00 M
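Following the Ceph links Saverio posted, a cinder.conf RBD backend typically
looks like the sketch below; the pool and user names are the usual examples from
the Ceph documentation, not necessarily the right ones for this site:

[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret uuid>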
Hi,
I wonder what is the recommendation for a shared storage for the compute nodes?
At the moment we are using an iSCSI device which is served to all compute nodes
with multipath; the filesystem is OCFS2. But this makes it a little inflexible
in my opinion, because you have to decide how many com
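For comparison with the iSCSI/OCFS2 setup, the Ceph-based alternative the
replies above point to backs the ephemeral disks with RBD directly from Nova, so
no shared filesystem is needed on the compute nodes; a nova.conf sketch with
placeholder pool and user names:

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret uuid>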
I think we give it a try then, thank you :-)
Kind regards,
Michael
> Simon Leinen wrote on 18 June 2016 at 16:21:
>
>
> Michael Stang writes:
> > Is this the actual guide for upgrades, and is it valid for every
> > upgrade or only for s
Hello Saverio,
thank you for the link, I will have a look at this article.
An in-place upgrade would be the best way, because we do not have to set up
everything from scratch again; also, at the moment it would be no problem to make
a new installation because there is not much data by now in our inst
environment?
Is there any well-described how-to or best-practice guide for such an upgrade?
Any ideas or help would be welcome :-)
Kind regards,
Michael
Michael Stang
Laboringenieur, Dipl. Inf. (FH)
Duale Hochschule Baden-Württemberg Mannheim
Baden-Wuerttemberg Cooperative State University Mannheim