On 06/23/2016, Daniel Berrange wrote (attribution lost in the thread):
> Our long term goal is that 100% of all network storage will be connected
> to directly by QEMU. We already have the ability to partially do this with
> iSCSI, but it is lacking support for multipath. As & when that gap is
> addressed [...]
Daniel, thanks. Looking for a sense of direction.
Clearly there is some range of opinion, as Walter indicates. :)
Not sure you can get to 100% direct connection to QEMU. When there is
dedicated hardware to do off-board processing of the connection to storage,
you might(?) be stuck routing through the host.
Comments inline.
On Thu, Jun 16, 2016 at 10:13 AM, Matt Riedemann wrote:
> On 6/16/2016 6:12 AM, Preston L. Bannister wrote:
>
>> I am hoping support for instance quiesce in the Nova API makes it into
>> OpenStack. To my understanding, this is an existing function in Nova, just
>> not yet exposed in the public API. [...]
I am hoping support for instance quiesce in the Nova API makes it into
OpenStack. To my understanding, this is an existing function in Nova, just
not yet exposed in the public API. (I believe Cinder uses this via a
private Nova API.)
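For reference, at the libvirt layer "quiesce" is just the guest-agent
freeze/thaw pair. A minimal sketch with libvirt-python (the domain name is
made up, and the guest must be running qemu-guest-agent):

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("myinstance")  # hypothetical domain name

dom.fsFreeze()       # guest-agent fsfreeze: flush and freeze guest filesystems
try:
    pass             # the snapshot / backup of the disks happens here
finally:
    dom.fsThaw()     # always thaw, even if the snapshot step fails
conn.close()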
Much of the discussion is around disaster recovery (DR) and NFV [...]
QEMU has the ability to directly connect to iSCSI volumes. Running the
iSCSI connections through the nova-compute host *seems* somewhat
inefficient.
There is a spec/blueprint and implementation that landed in Kilo:
https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built
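For anyone who has not seen it, the libvirt XML for a QEMU-direct iSCSI disk
looks roughly like the below, here attached via libvirt-python. The IQN,
portal address, and names are all made up:

import libvirt

disk_xml = """
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='iscsi' name='iqn.2016-06.com.example:target/1'>
    <host name='10.0.0.5' port='3260'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("myinstance")  # hypothetical domain name
dom.attachDevice(disk_xml)             # QEMU dials the target itself;
conn.close()                           # no host-side /dev/sdX involved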
> [...] things backported from later kernels, though they
> may not have backported the specific improvements you're looking for.
>
> I think CentOS is using qemu 2.3, which is pretty new. Not sure how new
> their libiscsi is though.
>
> Chris
>
On 03/07/2016 12:25 AM, Preston wrote:
> On 03/03/2016 01:13 PM, Preston L. Bannister wrote:
>
>> Scanning the same volume from within the instance still gets the same
>> ~450MB/s that I saw before.
>
> Hmmm, with iSCSI in between that could be the TCP memcpy limitation.
Note that my end goal is to benchmark an application that runs in an
instance that does primarily large sequential full-volume-reads.
On this path I ran into unexpectedly poor performance within the instance.
If this is a common characteristic of OpenStack, then this becomes a
question of concern. [...]
The in-instance "dd" CPU use is ~12%. (Not very interesting.)
Not sure where the (apparent) latency comes from. The host iSCSI target?
The QEMU iSCSI initiator? Onwards...
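For anyone who wants to reproduce: the in-instance measurement is nothing
fancier than a big sequential read. A rough Python equivalent of the dd run
(the device path is made up):

import os, time

DEV = "/dev/vdb"        # hypothetical attached volume, as seen in the guest
CHUNK = 1024 * 1024     # 1 MiB reads, roughly dd bs=1M

fd = os.open(DEV, os.O_RDONLY)
total = 0
start = time.time()
while True:
    buf = os.read(fd, CHUNK)
    if not buf:
        break
    total += len(buf)
os.close(fd)
elapsed = time.time() - start
print("%d bytes in %.1fs = %.0f MB/s" % (total, elapsed, total / elapsed / 1e6))

Beware the guest page cache: read a volume larger than guest RAM (or drop
caches first), or the numbers are fiction.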
On Tue, Mar 1, 2016 at 5:13 PM, Rick Jones wrote:
> On 03/01/2016 04:29 PM, Preston L. Bannister wrote:
I need to benchmark volume-read performance of an application running
in an instance, assuming extremely fast storage.
To simulate fast storage, I have an AIO install of OpenStack, with local
flash disks. Cinder LVM volumes are striped across three flash drives (what
I have in the present setup).
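(The striping itself is plain LVM. By hand it would be something like the
following, with a made-up volume group name and size; the VG spans the three
flash PVs.)

import subprocess

# Stripe a logical volume across three physical volumes in the group.
subprocess.check_call(["lvcreate", "--stripes", "3", "--size", "100G",
                       "--name", "bench-volume", "stack-volumes"])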
On Wed, Feb 3, 2016 at 6:32 AM, Sam Yaple wrote:
> [snip]
>
> Full backups are costly in terms of IO, storage, bandwidth and time. A full
> backup being required in a backup plan is a big problem for backups when we
> talk about volumes that are terabytes large.
>
As an incidental note...
You have [...]
On a side note, of the folk with interest in this thread, how many are
going to the Austin OpenStack conference? Would you be interested in
presenting as a panel?
I submitted a proposal for a presentation on "State of the Art for in-Cloud
backup of high-value Applications". The notion is to give context for the [...]
[...] reading the QEMU mailing list and
source code to figure out which bits were real. :)
On Tue, Feb 2, 2016 at 4:04 AM, Preston L. Bannister wrote:
> To be clear, I work for EMC, and we are building a backup product for
> OpenStack (which at this point is very far along). The primary lack is a
> good means to efficiently extract changed-block information from OpenStack.
To be clear, I work for EMC, and we are building a backup product for
OpenStack (which at this point is very far along). The primary lack is a
good means to efficiently extract changed-block information from OpenStack.
About a year ago I worked through the entire Nova/Cinder/libvirt/QEMU
stack, to [...]
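Absent real changed-block tracking, the best available from outside is brute
force: hash fixed-size extents of two snapshot images and diff the hashes. A
sketch (snapshot file names made up; it assumes two same-sized images, and it
still reads every byte, so it only saves transfer and storage, not I/O):

import hashlib

CHUNK = 4 * 1024 * 1024  # compare 4 MiB extents

def chunk_hashes(path):
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                return
            yield hashlib.sha256(data).digest()

# zip() walks the two same-sized images in lockstep, extent by extent.
changed = [i for i, (a, b) in
           enumerate(zip(chunk_hashes("snap0.raw"), chunk_hashes("snap1.raw")))
           if a != b]
print("chunks to back up:", changed)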
> [...]'s the best for a single Company. If this vision is not shared, then,
> unfortunately, good luck competing, while if the vision is shared... let's
> do unprecedented things together.
>
> Many thanks,
> Fausto
>
>
> On Sun, Jan 31, 2016 at 1:01 AM, Preston L. Bannister <
> pres...@bannister.us> wrote:
Seems to me there are three threads here.
The Freezer folk were given a task, and did the best possible to support
backup given what OpenStack allowed. To date, OpenStack is simply not very
good at supporting backup as a service. (Apologies to the Freezer folk if I
misinterpreted.)
The patches [...]
In the implementation of an instance backup service for OpenStack, on
restore I need to (re)create the restored instance in the original tenant.
Restores can be fired off by an administrator (not the original user), so
at instance-create time I have two main choices:
1. Create the instance as the [...]
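The snippet above is cut off, but the first choice amounts to scoping the
(admin) credentials to the original tenant's project, so the restored
instance lands there and is owned by it. Roughly, with keystoneauth1 and
novaclient (all IDs and credentials are placeholders; assumes the admin
holds a role in that project):

from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client as nova_client

auth = v3.Password(auth_url="http://keystone:5000/v3",
                   username="admin", password="secret",
                   user_domain_name="Default",
                   project_id="ORIGINAL_PROJECT_ID")  # tenant to restore into
nova = nova_client.Client("2", session=session.Session(auth=auth))

# The restored instance is created in, and owned by, the original project.
# (Plus whatever networks/NICs your cloud requires on boot.)
server = nova.servers.create(name="restored-instance",
                             image="IMAGE_ID", flavor="FLAVOR_ID")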
> On Thu, Oct 23, 2014 at 3:44 PM, Preston L. Bannister <
> pres...@bannister.us> wrote:
>
>> Yes, that is pretty much the key.
>>
>> Does LVM let you read physical blocks that have never been written? Or
>> zero out virgin segments on read? If not, [...]
On Thu, Oct 23, 2014 at 7:51 AM, John Griffith wrote:
>
> On Thu, Oct 23, 2014 at 8:50 AM, John Griffith wrote:
>>
>> On Thu, Oct 23, 2014 at 1:30 AM, Preston L. Bannister <
>> pres...@bannister.us> wrote:
>>
>>> John,
>>>
John,
As a (new) OpenStack developer, I just discovered the
"CINDER_SECURE_DELETE" option.
As an *implicit* default, I entirely approve. Production OpenStack
installations should *absolutely* ensure there is no information leakage
from one instance to the next.
As an *explicit* default, I am not [...]
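For anyone new to the option: when enabled, deleting a volume overwrites the
LV before its extents can be handed to the next tenant. In effect, something
like this sketch (device path made up; run as root; I believe Cinder shells
out to dd for the same purpose):

import os

dev = "/dev/stack-volumes/volume-0000"  # hypothetical backing LV
fd = os.open(dev, os.O_WRONLY)
size = os.lseek(fd, 0, os.SEEK_END)     # block devices report their size
os.lseek(fd, 0, os.SEEK_SET)
zeros = b"\0" * (1024 * 1024)
for _ in range(size // len(zeros)):     # ignores the sub-MiB tail
    os.write(fd, zeros)
os.close(fd)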
> [...] provide some of the same benefits.
>
>
> On 10/21/2014 02:54 PM, Preston L. Bannister wrote:
>
> As a side note, the new AWS flavors seem to indicate that the Amazon
> infrastructure is moving to all EBS volumes (and all flash, possibly), both
> ephemeral and not. This ma[...]
As a side note, the new AWS flavors seem to indicate that the Amazon
infrastructure is moving to all EBS volumes (and all flash, possibly), both
ephemeral and not. This makes sense, as fewer code paths and less
interoperability complexity are a good thing.
That the same balance of concerns should apply [...]
> [...] in here. Some comments inline, but tl;dr
> my answer is "yes, we need to be doing a much better job thinking about how
> I/O intensive operations affect other things running on providers of
> compute and block storage resources"
>
> On 10/19/2014 06:41 AM, Preston L. Bannister wrote:
> [...] with efficient snapshots,
> use CoW operations wherever possible rather than copying full
> volumes/images, disabling wipe on delete, etc.
>
> Thanks,
> Avishay
>
> On Sun, Oct 19, 2014 at 1:41 PM, Preston L. Bannister <
> pres...@bannister.us> wrote:
>
>> OK, I am fairly new here (to OpenStack). [...]
OK, I am fairly new here (to OpenStack). Maybe I am missing something. Or
not.
I have a DevStack install, running in a VM (VirtualBox), backed by a single flash
drive (on my current generation MacBook). Could be I have something off in
my setup.
Testing nova backup: first the existing implementation, then [...]
Too-short token expiration times are one of my concerns, in my current
exercise.
Working on a replacement for Nova backup. Basically creating backup jobs,
writing the jobs into a queue, with a background worker that reads jobs
from the queue. Tokens could expire while the jobs are in the queue [...]
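The usual answer here is Keystone trusts: the user delegates a role to the
backup service user at job-submit time, and the worker gets fresh tokens
through the trust whenever the job finally runs. A sketch (the service-user
ID, credentials, and role name are placeholders and vary by deployment):

from keystoneauth1 import session
from keystoneauth1.identity import v3
from keystoneclient.v3 import client as ks_client

# At job-submit time, authenticated as the job owner:
user_sess = session.Session(auth=v3.Password(
    auth_url="http://keystone:5000/v3", username="alice", password="secret",
    user_domain_name="Default", project_name="demo",
    project_domain_name="Default"))
keystone = ks_client.Client(session=user_sess)
trust = keystone.trusts.create(
    trustor_user=user_sess.get_user_id(),
    trustee_user="BACKUP_SERVICE_USER_ID",  # hypothetical service user
    project=user_sess.get_project_id(),
    role_names=["member"],                  # role name varies by deployment
    impersonation=True)

# Later, in the worker, long after the original token has expired:
worker_auth = v3.Password(auth_url="http://keystone:5000/v3",
                          username="backup-svc", password="svc-secret",
                          user_domain_name="Default", trust_id=trust.id)
fresh_token = session.Session(auth=worker_auth).get_token()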
Tricky. First, I am new to OpenStack, and as such tend to want to shut up
and listen.
Second, I have done APIs for distributed systems for over 30 years. Yes, I
got in very early. As such I am guilty of (or saw) lots of bad examples. I
also found patterns that worked very well.
That said, the approach [...]
Sorry, I am jumping into this without enough context, but ...
On Wed, Sep 24, 2014 at 8:37 PM, Qiming Teng wrote:
>
> mysql> select count(*) from metadata_text;
> +----------+
> | count(*) |
> +----------+
> | 25249913 |
> +----------+
> 1 row in set (3.83 sec)
>
There are problems where a sim[...]
This is great. On the point of:
> If an Incomplete bug has no response after 30 days it's fair game to
> close (Invalid, Opinion, Won't Fix).
How about "Stale" ... since that is where it is. (How hard to add a state?)
On Fri, Sep 19, 2014 at 6:13 AM, Sean Dague wrote:
> I've spent the better [...]
Hi Emma,
I do not claim to be an OpenStack guru, but might know something about
backing up a vCloud.
What proposal did you have in mind? A link would be helpful.
Backing up a step (ha), the existing cinder-backup API is very close to
useless. Backup needs to apply to an active instance. The Nova [...]
> [...] say 2) is the best option. There are many open source and
> commercial backup software packages, for both VMs and volumes.
> If we do option 1), it means implementing something similar to the VMware
> method, and it will make Nova really heavy.
>
>
> On Sun, Aug 31, 2014 at 4:04 AM [...]
> [...] to the real backup solution.
>
>
> On Sat, Aug 30, 2014 at 1:14 PM, Preston L. Bannister <
> pres...@bannister.us> wrote:
>
>> The current "backup" APIs in OpenStack do not really make sense (and
>> apparently do not work ... which perhaps says something about usa[...]
[...] The second needs an API.
On Fri, Aug 29, 2014 at 11:16 AM, Jay Pipes wrote:
> On 08/29/2014 02:48 AM, Preston L. Bannister wrote:
>
>> Looking to put a proper implementation of instance backup into
>> OpenStack. Started by writing a simple set of baseline tests and running
&g
Looking to put a proper implementation of instance backup into OpenStack.
Started by writing a simple set of baseline tests and running against the
stable/icehouse branch. They failed!
https://github.com/dreadedhill-work/openstack-backup-scripts
Scripts and configuration are in the above. Simple [...]
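For flavor, a baseline test amounts to little more than driving the existing
createBackup API and waiting for the task state to clear. A rough sketch with
novaclient (credentials and the server ID are placeholders):

import time
from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client as nova_client

auth = v3.Password(auth_url="http://keystone:5000/v3", username="demo",
                   password="secret", user_domain_name="Default",
                   project_name="demo", project_domain_name="Default")
nova = nova_client.Client("2", session=session.Session(auth=auth))

server = nova.servers.get("SERVER_ID")   # hypothetical instance
server.backup("baseline-1", "daily", 1)  # name, backup_type, rotation
# task_state goes to 'image_backup' and clears when the snapshot completes.
while getattr(nova.servers.get(server.id), "OS-EXT-STS:task_state"):
    time.sleep(5)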
Did this ever go anywhere?
http://lists.openstack.org/pipermail/openstack-dev/2014-January/024315.html
Looking at what is needed to get backup working in OpenStack, and this
seems the most recent reference.