I use iSCSI on our own SAN that uses DRBD to replicate synchronously to
another node. We chose iSCSI because it puts practically zero load on the SAN
CPU and has a low memory footprint. NFS is very popular because it allows for
thin provisioning. I benchmarked both, and while IO was about the same, the
CPUs…
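For reference: synchronous mirroring in DRBD is selected with protocol C. A minimal sketch of such a two-node resource, with hypothetical host names, devices and addresses:

    # /etc/drbd.d/r0.res (hypothetical two-node synchronous mirror)
    resource r0 {
      protocol C;   # synchronous: a write completes only after both nodes have it
      on san-a { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7789; meta-disk internal; }
      on san-b { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.2:7789; meta-disk internal; }
    }
    # then: drbdadm create-md r0 && drbdadm up r0   (initialise metadata, start replication)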
We are using iSCSI and OCFS2. It works fine for our small setup with only 3
nodes and no special requirements regarding throughput.
On 14/03/13 16:27, Kirk Jantzer wrote:
For my testing, because of available resources, I chose local storage.
However, shared storage (like GlusterFS or the like)
Subject: Re: What's everyone using for primary storage?
On Thu, Mar 14, 2013 at 11:27 AM, Kirk Jantzer wrote:
> For my testing, because of available resources, I chose local storage.
> However, shared storage (like GlusterFS or the like) is appealing.
> What are you…
> We use EMC VMAX.
>
>> -----Original Message-----
>> From: David Nalley [mailto:da...@gnsa.us]
>> Sent: Thursday, March 14, 2013 12:52 PM
>> To: cloudstack-users@incubator.apache.org
>> Subject: Re: What's everyone using for primary storage?
>>
>> On Thu,
What's everyone using for primary storage?
>
> On Thu, Mar 14, 2013 at 11:27 AM, Kirk Jantzer wrote:
> > For my testing, because of available resources, I chose local storage.
> > However, shared storage (like GlusterFS or the like) is appealing.
> > What are you…
On Thu, Mar 14, 2013 at 11:27 AM, Kirk Jantzer wrote:
> For my testing, because of available resources, I chose local storage.
> However, shared storage (like GlusterFS or the like) is appealing. What are
> you using? Why did you choose what you're using? How many instances are you
> running? etc.
On 14.03.2013 16:16, Kirk Jantzer wrote:
We've got instance deployments down to a science, so the HA isn't as
important, but it is certainly appealing.
If performance is also an issue, then all the more reason to use local
storage.
--
Sent from the Delta quadrant using Borg technology!
Nux!
We've got instance deployments down to a science, so the HA isn't as
important, but it is certainly appealing.
On Thu, Mar 14, 2013 at 12:13 PM, Nux! wrote:
> On 14.03.2013 15:27, Kirk Jantzer wrote:
>
>> For my testing, because of available resources, I chose local storage.
>> However, shared
On 14.03.2013 15:27, Kirk Jantzer wrote:
For my testing, because of available resources, I chose local storage.
However, shared storage (like GlusterFS or the like) is appealing.
What are you using? Why did you choose what you're using? How many instances
are you running? etc.
Thanks!!
Nothin…
A FreeBSD NAS box based on ZFS, though you could also use OpenSolaris
On Thu, Mar 14, 2013 at 11:40 AM, Kirk Jantzer wrote:
> Thanks for the reply Matt. What are you seeing as the common platform for
> these shared storage setups?
>
>
On Thu, Mar 14, 2013 at 11:32 AM, Mathias Mullins <mathi…
Thanks for the reply Matt. What are you seeing as the common platform for
these shared storage setups?
On Thu, Mar 14, 2013 at 11:32 AM, Mathias Mullins <mathias.mull...@citrix.com> wrote:
> Hey Kirk,
>
> Majority of our customers are using shared storage. It's actually a very
> low percentage
Hey Kirk,
The majority of our customers are using shared storage. It's actually a very low
percentage that are using local storage, because it basically negates all high
availability and eliminates a lot of the useful features in CloudStack.
Local storage is usually what I refer to as "Use it and Fo…
The limits are really related to a single Primary Storage Volume, and as you
can have multiple volumes within a Cluster, and multiple Clusters within a
Zone, there are effectively no limits on how much storage a Zone can have.
In fact we always recommend at least two Primary Storage Volumes per
Cluster, ideally provided by…
Gaspare
…, Paul Angus wrote:
Hi,
There's a nominal recommended limit of (as I recall) 8TB per primary storage
pool. But this is a finger-in-the-air guesstimate.
The limit is really down to your infrastructure.
When we talk about the number of VMs a host can support, we're looking at not
only the host's…
Hello everybody,
is there a limit (in size) for the total TB of usable primary storage
in a single CloudStack zone? We are considering implementing a SAN with
at least 12TB of space.
My current configuration is VMware vSphere 5 + CloudStack 4.0.1.
Thanks in advance,
Gaspare
On 2013-02-25 12:11, Chris Mutchler wrote:
I am in the designing phase of creating a private cloud using CloudStack.
The current portion I am focusing my efforts on is the primary storage
design. I could use an iSCSI SAN as the primary storage, but if I do so is
it possible for CloudStack to…
since it's one of the very cool and unique features of SolidFire (per-volume
QoS). Edison has been re-working much of the underlying storage architecture
within CS to allow for this type of configuration.
Next, regarding Gluster as primary storage: in my experience this hasn't
ever turned out very well. Performance usually suffers, and KVM and Clou…
I am in the designing phase of creating a private cloud using CloudStack.
The current portion I am focusing my efforts on is the primary storage
design. I could use an iSCSI SAN as the primary storage, but if I do so, is
it possible for CloudStack to create/use an individual LUN for each VM as
opposed…
On 14.02.2013 10:31, Donal Lafferty wrote:
Could some familiar with systemVM creation comment on this advice...
Hi Nux!,
I would break this into three tasks.
1. Login to an existing SSVM, and figure out what changes to the
mount command you need to make. Once you have the parameters to your
e-
> From: Nux! [mailto:n...@li.nux.ro]
> Sent: 14 February 2013 10:15
> To: cloudstack-users@incubator.apache.org
> Subject: RE: How does the SSVM work when primary storage is local?
>
> On 14.02.2013 10:01, Donal Lafferty wrote:
> > It's a bit different than what you expla
What I mean to say is that the SSVM always has sec. storage mounted on it.
If the user needs to create a VM from a template which doesn't already
exist on primary storage, then the XS will mount the sec. storage (will
create an SR) and will copy the template from sec. storage to primary
st…
On 14.02.2013 10:08, Nitin Mehta wrote:
It's the other way round. The HV mounts the sec. storage to do the
operations. This is of course for XS.
Thanks,
-Nitin
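For reference, on XenServer the secondary-storage mount shows up as an SR for the duration of such a copy; a hedged check, since names vary by setup:

    # list NFS SRs on the XenServer host while a template copy is in progress
    xe sr-list type=nfs params=uuid,name-label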
LOL.. I'm very confused. So, does the SSVM mount anything after all?
(centos6+kvm here)
Lucian
--
Sent from the Delta quadrant using Borg technology!
On 14.02.2013 10:01, Donal Lafferty wrote:
It's a bit different than what you explain.
The SSVM only ever mounts secondary storage. Copying from secondary
to primary is handled at the cluster level by the hypervisor plugin.
Thanks a lot, Donal, that makes sense.
Since we're at it, could you
>…perform whatever is needed (moving
>snapshot around etc), but how does the SSVM function when the primary
>storage of the zone is the local disk? Can someone enlighten me here?
>
>Thanks,
>Lucian
>
>--
>Sent from the Delta quadrant using Borg technology!
>
>Nux!
>www.nux.ro
> Sent: 14 February 2013 09:58
> To: Cloudstack users
> Subject: How does the SSVM work when primary storage is local?
>
> Hello,
>
> I understand that when NFS is used for primary and secondary, the SSVM will
> mount both on itself and perform whatever is needed (moving snapshot
> around etc), b
Hello,
I understand that when NFS is used for primary and secondary, the SSVM
will mount both on itself and perform whatever is needed (moving
snapshot around etc), but how does the SSVM function when the primary
storage of the zone is the local disk? Can someone enlighten me here?
Thanks
…when I was not able to add my primary storage because
the NFS share had not been created:
Can not create storage pool through host 3 due to Catch Exception
com.cloud.utils.exception.CloudRuntimeException, create StoragePool failed due
to com.cloud.utils.exception.CloudRuntimeException: Unable to create…
Dear Pranav,
Could you please share any information if you have experience with this
issue? It is as indicated below:
Cannot create storage pool through host 1 due to Catch Exception
com.cloud.utils.exception.CloudRuntimeException, create StoragePool failed
due to com.cloud.utils.exc…
Hi, Guangjian
Can you check the output of "nfsstat -c" on your client and "nfsstat
-s" on your NFS server?
The output will show any NFS version mismatch; you can mount NFS using
another version (maybe 4).
There are some other services that need to run before the NFS server starts,
like rpcbind, portmap, nfs-ker…
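A minimal version of that check, assuming a stock CentOS-style client and server; the hostname and export path are hypothetical:

    nfsstat -s                                   # on the NFS server: versions/calls served
    nfsstat -c                                   # on the client: versions the client has used
    showmount -e nfs-server.example.com          # is the export visible from the host?
    # force a specific NFS version if the defaults mismatch
    mount -t nfs -o vers=3 nfs-server.example.com:/export/primary /mnt/primary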
Has anybody used the combination CS 3.0.2 + (XenServer 6.0.2 with Gluster) for
shared primary storage? Here Gluster is installed on the XenServer.
When I use CS 3.0.2 + (XenServer 6.0.2 with Gluster) for shared primary
storage, I get the error message below; does anybody have a solution for it?
1. use mount command in CS…
In theory, CS works with VMware and its attached LUN as primary
storage, with the option 'presetup' to set up primary storage for the cluster,
even though CS recommends that no VM exists on it before adding it into CS.
-- Jerry
----- Original Message -----
From: "曹伟"
To: cloudstack-users@…
Hi all,
I have an FC SAN LUN that has been used by an existing VMware vSphere platform
as a datastore, and there is a lot of available space in this datastore, so I
want to use this space and make it the primary storage of CloudStack.
So I want to know if CloudStack permits using an existing VMware…
Hi Prakash,
I think you are trying to create primary storage which was already created
earlier on the host.
Can you destroy the primary storage from XenCenter and try again, or give a new
path for the primary storage?
Thanks,
Jayapal
> -----Original Message-----
> From: banuka p
> Sent: Tuesday, December 11, 2012 2:26 AM
> To: Kevin Kluge; Kevin Kluge
> Subject: Error while trying to add primary storage
>
> Hello sir:
>
> I'm receiving the following error message while trying to add primary
> storage:
>
Hi, All
Has somebody used Ceph RBD in CloudStack as primary storage? I see that in
the new features of CS 4.0, RBD is supported for KVM. So I tried using RBD
as primary storage but met with some problems.
I use a CentOS 6.3 server as the host. First I erased the qemu-kvm (0.12.1) and
libvirt (0.9.10…
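A quick, hedged sanity check after swapping in RBD-capable packages; the stock CentOS 6.3 qemu-kvm 0.12.1 has no RBD support, which is why it gets replaced:

    qemu-img --help | grep -o rbd    # 'rbd' should appear among the supported formats
    libvirtd --version               # RBD pool support arrived in libvirt releases newer than 0.9.10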
…have 500,000 IOPS and 4Gb of throughput…
Regards
-----Original Message-----
From: bruc...@v365.com.au
Sent: Wednesday, 24 October 2012 06:04
To: cloudstack-users@incubator.apache.org…
…v365.com.au]
Sent: Wednesday, 24 October 2012 06:04
To: cloudstack-users@incubator.apache.org
Subject: Re: Primary Storage - DATA-ROT IS MORE TO WORRY ABOUT
These SANs we have built have saved us a fortune compared to FC Block I/O
SANs from IBM, HP etc.
I've been a contractor for IBM and H…
>> On Tue, Oct 23, 2012 at 7:56 AM, Andreas Huser wrote:
>>
>> Hi Fabrice,
>>
>> I know OpenSolaris/Solaris Oracle, it's such a thing.
>> …And you can use the full premier support from Oracle.
>> Nexenta develops with the Illumos code. And the licences are TB based.
>> That is not my favorite. As well, the pool version from…
>> …will. Everyone must decide for themselves.
>>
>> SRP targets or iSER are not difficult to configure. Use SRP for the
>> storage unit connection. Solaris and GlusterFS build one storage unit.
>> The GlusterF…
>> …all KVM, VMware, Hyper-V etc.
>> You can use native GlusterFS, RDMA, NFS or CIFS to export the volume.
>> SRP has nothing to do with VMware.
>>
>> When you use a 7200 SAS drive the access time is the same as a…
>> …better. When you need performance you must use SAS drives with
>> 15000 U/min. But it's not needed when you install SSD for ZIL/L2ARC.
>> ZeusRAM rocks :-)
>>
>> I use dedup only at secondary storage or on the backup server, not on
>> primary storage.
>> When you use SSD…
…user [mailto:ahu...@7five-edv.de]
Sent: Tuesday, 23 October 2012 14:56
To: cloudstack-users@incubator.apache.org
Subject: Re: Primary Storage
Hi Fabrice,
I know OpenSolaris/Solaris Oracle, it's such a thing.
I have been an open source user for more than 10 years, and with Oracle I did
not like…
> SATA drive, only the quality of the hardware is better. When you need
> performance you must use SAS drives with 15000 U/min. But it's not needed
> when you install SSD for ZIL/L2ARC. ZeusRAM rocks :-)
>
> I use dedup only at secondary storage or on the backup server, not on primary
>
…but it's not needed when you install SSD for ZIL/L2ARC. ZeusRAM rocks :-)
I use dedup only at secondary storage or on the backup server, not on primary
storage.
When you use SSD SATA drives then you have a cheap and fast storage.
A 1TB drive costs under $100. Currently I don't need to save st…
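For reference, the ZIL/L2ARC layout described above comes down to a couple of zpool commands; a minimal sketch with hypothetical pool and device names:

    zpool add tank log mirror c1t0d0 c1t1d0   # mirrored SLOG (ZIL) on two SSDs for sync writes
    zpool add tank cache c1t2d0               # a third SSD as L2ARC read cache
    zfs set sync=standard tank                # default sync behaviour; avoids write holes (COW)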
…or free, I tried one of its competitors (OneFS) and clearly
clustered filesystems are the future.
Cheers,
Fabrice
-----Original Message-----
From: Andreas Huser [mailto:ahu...@7five-edv.de]
Sent: Tuesday, 23 October 2012 11:37
To: cloudstack-users@incubator.apache.org
Subject: Re: Primary Storage
…server an SSD write cache. ZFS works by default with sync=standard, which
prevents write holes (COW system).
I hope that I could help a little.
Greetings from Germany
Andreas
----- Original Message -----
From: "Fabrice Brazier"
To: cloudstack-users@incubator.apache.org
Sent: Tuesday,
…cluster in the middle must impact the overall performance, no?
Fabrice
-----Original Message-----
From: Andreas Huser [mailto:ahu...@7five-edv.de]
Sent: Tuesday, 23 October 2012 05:40
To: cloudstack-users@incubator.apache.org
Subject: Re: Primary Storage
Hi,
for CloudStack I use Solaris 1…
Our backup policy is a daily VMware snapshot, archived daily using Veeam
backup to the SRP SAN, and a DR backup to archive daily. As our hypervisors
are VMware we can snap into the primary storage pool.
We decided to let VMware manage the storage for each VM; with multiple 2TB
LUNs presented (6 x 2TB) we have 6 LUNs w…
Hi Bryan,
what kind of speed are you getting with IPoIB, via GlusterFS to ZFS?
QDR or DDR? 2044 MTU or 64000?
What speeds do you get with ZFS?
What sort of ZFS volume do you have? How many disks?
Cheers
Bruce M
On 23.10.2012 14:53, Bryan Whitehead wrote:
>> yes we had the same prob on GFS 3.3 o…
> yes we had the same prob on GFS 3.3 on Ubuntu 10.04 using SRP
> we could not get SRP to work well, very inconsistent, always drops or
> fails. Even IPoIB was faulty. We also tried 12.04, same deal.
I currently run glusterfs on IPoIB using CentOS 6.3, with XFS as the
native storage (as recommended…
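A sketch of that kind of setup, with hypothetical brick paths and host names; IPoIB just presents an IP interface, so the ordinary tcp transport applies:

    mkfs.xfs -i size=512 /dev/sdb1                    # XFS brick, as mentioned above
    mkdir -p /export/brick1 && mount /dev/sdb1 /export/brick1
    # two-node replicated volume over the IPoIB addresses
    gluster volume create gv0 replica 2 transport tcp \
        server1:/export/brick1 server2:/export/brick1
    gluster volume start gv0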
Hi
yes we had the same prob on GFS 3.3 on Ubuntu 10.04 using SRP.
We could not get SRP to work well, very inconsistent, always drops or
fails. Even IPoIB was faulty. We also tried 12.04, same deal.
We didn't
get to the point of using ZFS with GFS, as we tried it on OpenIndiana
and it was very sl…
> I have installed GlusterFS directly on Solaris with modified code.
> If you build bigger systems for more than 50 VMs, it is better to split
> the Solaris and GlusterFS, with a separate head node for GlusterFS.
>
> That looks like:
> Solaris ZFS backend storage with a dataset volume (thin provisio…
>> …best performance and most scalable storage.
>> I have tested some different solutions for primary storage, but most are
>> too expensive and for a CloudStack cluster not economic, or have poor
>> performance.
>>
>> My configuration:
>> Storag…
>> …use Solaris 11 ZFS + GlusterFS over InfiniBand (RDMA).
>> That gives the best performance and most scalable storage.
>> I have tested some different solutions for primary storage, but most are
>> too expensive and for a CloudStack cluster not economic, or h…
>>>> RBD looks interesting but I'm not sure if I would be willing to put
>>>> production data on it, I'm not sure how performant it is IRL. From a
>>>> purely technical perspective, it looks REALLY cool.
>>>>
>>>> I suppo…
…Huser wrote:
> Hi,
>
> for CloudStack I use Solaris 11 ZFS + GlusterFS over InfiniBand (RDMA).
> That gives the best performance and most scalable storage.
> I have tested some different solutions for primary storage, but most are
> too expensive and for a CloudStack cluster not ec…
Hi,
for CloudStack I use Solaris 11 ZFS + GlusterFS over InfiniBand (RDMA). That
gives the best performance and most scalable storage.
I have tested some different solutions for primary storage, but most are too
expensive and for a CloudStack cluster not economic, or have poor performance…
>>>> …they really don't scale out well if you are looking for something
>>>> with a unified name space. I'll say however that ZFS is a
>>>> battle-hardened FS with tons of…
>>>> …whiz-bang SSD+SATA disk SAN things these smaller start-up companies
>>>> are hawking are just ZFS appliances.
>>>>
>>>> RBD looks interesting but I'm not sure if I would be willing to put
>>>> production data on it, I'm not sure how perf…
>>>> …willing to put production data on it, I'm not sure how performant it
>>>> is IRL. From a purely technical perspective, it looks REALLY cool.
>>>>
>>>> I suppose anything is fast if you put SSDs in it :) GlusterFS is
>>>> another op…
>>> …performant it is IRL. From a purely technical perspective, it looks
>>> REALLY cool.
>>>
>>> I suppose anything is fast if you put SSDs in it :) GlusterFS is another
>>> option although historically small/random IO has not been its strong
>>> point.
>>…
…spending money on software and want a scale-out block storage
then you might want to consider HP LeftHand's VSA product. I am personally
partial to NFS plays :) I went the exact opposite approach and settled on
Isilon for our primary storage for our CS deployment.
On Mon, Oct 22, 2012 at 10:24 AM,
On Mon, Oct 22, 2012 at 10:24 AM, Nik Martin wrote:
> On 10/22/2012 10:16 AM, Trevor Francis wrote:
> We are looking at building a Primary Storage solution for an
> enterprise/carrier class application. However, we want to build it using a
> FOSS solution and not a commercial solution. Do you have a recommendation
> on platform?
>
>
I'm using GlusterFS as primary s
On 10/22/2012 10:16 AM, Trevor Francis wrote:
We are looking at building a Primary Storage solution for an
enterprise/carrier class application. However, we want to build it using
a FOSS solution and not a commercial solution. Do you have a
recommendation on platform?
Trevor,
I got EXCELLENT
On Mon, Oct 22, 2012 at 11:16 AM, Trevor Francis <
trevor.fran...@tgrahamcapital.com> wrote:
> We are looking at building a Primary Storage solution for an
> enterprise/carrier class application. However, we want to build it using a
> FOSS solution and not a commercial solution
We are looking at building a Primary Storage solution for an enterprise/carrier
class application. However, we want to build it using a FOSS solution and not a
commercial solution. Do you have a recommendation on platform? We are really
interested in putting a caching layer (SSD) in front of…
ement" from there.
> -Message d'origine-
> De : Chiradeep Vittal [mailto:chiradeep.vit...@citrix.com]
> Envoyé : dimanche 21 octobre 2012 06:57
> À : CloudStack Users
> Objet : Re: Template deletion on the primary storage
>
> Could you raise a enhancement
Could you raise an enhancement request?
On 10/19/12 2:15 PM, "Tamas Monos" wrote:
>Hi,
>
>I'm not sure you can at all.
>I think CloudStack uses linked clones for virtual disks, so the
>template must stay on the primary storage, otherwise it will render the VM
>unu…
Hi,
I'm not sure you can at all.
I think CloudStack uses linked clones for virtual disks, so the template must
stay on the primary storage, otherwise it will render the VM unusable.
All VMs based on the same template will use the same shared linked-clone
virtual disks.
I hope in the f…
Hi Nitin,
thanks for your reply. But I don't want to keep the template on the primary
storage after 1 hour.
I have a lot of templates in CloudStack, around one template per VM. If
I have 20 VMs, I have 20 templates in CloudStack. So I don't want to use
twice the space on the primary stor…
The templates do get cleaned up if there is no active volume referring to
them on the primary storage. There is a storage cleanup thread
running which cleans them up. Check storage.cleanup.interval in the global
settings to see how often it runs.
Btw for the XS case we just keep 1 copy of the…
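A quick way to inspect those cleanup settings on the management server; a hedged sketch assuming the default 'cloud' database name:

    # storage.cleanup.interval and related knobs live in the global configuration table
    mysql -u cloud -p cloud -e \
      "SELECT name, value FROM configuration WHERE name LIKE 'storage.cleanup%';"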
Hi Folks,
When you create a VM from a template, the template is deployed on the
primary storage and the VM is then created from this template.
The template on the primary storage is not deleted and takes up space. I
guess the purpose is to save time when you deploy the same template on the
same…
Please find attached the screen copy:
[image: Inline images 1]
I don't understand why the error message refers to secondary storage:
"Failed to copy the volume from secondary storage to the destination
primary storage pool."
regards
On 17 September 2012 11:40, claude bariot wrote:
> >What is the best way to do the disk migration between 2 primary
> >storages?
> >
> >regards
> >
> >On 14 September 2012 18:23, Ahmad Emneina
> >wrote:
> >
> >> You need to enable the original primary storage since that's where the
>
ike a "+" with arrows shooting out of each end, and clicking that should
pop up a dialog prompting you for the storage you want to move the vm to.
On 9/14/12 9:34 AM, "claude bariot" wrote:
>Thank so much for your explaination.
>What is the best way for doing the disk migratia
Thanks so much for your explanation.
What is the best way to do the disk migration between 2 primary
storages?
regards
On 14 September 2012 18:23, Ahmad Emneina wrote:
You need to enable the original primary storage since that's where the VM
volumes are. Don't power on the VMs, but find their volumes, and volume-migrate
them to the new primary storage. After you have migrated them all off, you
can power them on and enable maintenance on the storage you…
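In newer CloudStack releases this can also be scripted against the API; a hedged sketch using cloudmonkey, with hypothetical pool and volume IDs:

    cloudmonkey list volumes storageid=<old-pool-id> filter=id,name     # volumes on the old pool
    cloudmonkey migrate volume volumeid=<vol-id> storageid=<new-pool-id>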
I tried another test:
I have 3 network primary storages and 2 local primary storages.
When I put the first network primary storage in maintenance mode, all system
VMs migrate to another network primary storage automatically...
But all VMs don't migrate to the other primary storage as the syste…
…-2-24-VM]
> > 2012-09-14 13:50:28,270 DEBUG [cloud.async.AsyncJobManagerImpl]
> > (Job-Executor-47:job-77) Complete async job-77, jobStatus: 2, resultCode:
> > 530, result: com.cloud.api.response.ExceptionResponse@75cb722f
> > 2012-09-14 13:50:31,787 DEBUG [cloud.async.Async…
…completed
>
> On 14 September 2012 13:46, claude bariot wrote:
>
>> Yep.
>> The storage system VM has been restarted onto the available primary storage
>> fine.
>>
>> But I would like to know how I can use my other available PS?
>> regar…
>> …correctly, this is by design. Maintenance is used for
>> scenarios like you want to power off primary storage and replace hardware
>> chips in it.
>>
>> When you maintain a primary storage, the associated system VMs and vrouter
>> get restarted on other available PS.
>
> …scenarios like you want to power off primary storage and replace hardware
> chips in it.
>
> When you maintain a primary storage, the associated system VMs and vrouter get
> restarted on other available PS.
> User VMs will just stop.
>
> Regards
> Mice
>
> -----Original Message-----
If I recall correctly, this is by design. Maintenance is used for scenarios
like you want to power off primary storage and replace hardware chips in it.
When you maintain a primary storage, the associated system VMs and vrouter get
restarted on other available PS.
User VMs will just stop.
Regards
…530, result: com.cloud.api.response.ExceptionResponse@ce3c31f
2012-09-14 09:53:30,614 DEBUG [cloud.async.AsyncJobManagerImpl]
(catalina-exec-11:null) Async job-76 completed
On 14 September 2012 10:09, claude bariot wrote:
> Ok.
> Now I have 2 primary storages in my CS platform:
> 1 NF…
Ok.
Now I have 2 primary storages in my CS platform:
1 NFS share (older and running fine)
1 iSCSI target
Problem:
- When I enable "maintenance mode" for the NFS share primary storage I
saw the following:
. all system VM disks migrate automatically to the "iscsi share"…
>- set node.startup to automatic in /etc/iscsi/iscsid.conf?
>- connect to the target? or will CS connect automatically after I add a
>primary storage from the UI?
>- log in manually to the LUN target
>- make the fdisk for partitioning the new disk (LUN)
>- format the disk e…
I added an additional primary storage (using the CS UI), with the following
details:
Name: cloud-primary
Type: IscsiLUN
Path: /iqn.2012-09.com.openfiler:primay-st/0
I would like to know if I should do the following operations on the
Management server:
- set node.startup to automatic in…
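For reference, the manual side of that checklist looks roughly like this; the portal IP is hypothetical, and with XenServer CloudStack normally drives the SR creation itself once the IscsiLUN pool is added:

    # discover targets on the Openfiler box
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
    # log in manually once to verify connectivity
    iscsiadm -m node -T iqn.2012-09.com.openfiler:primay-st -p 192.168.1.10 --login
    # make the login persistent across reboots (the node.startup question above)
    iscsiadm -m node -T iqn.2012-09.com.openfiler:primay-st \
      -o update -n node.startup -v automatic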
…coalesce
process kicked in and saved a bunch of space, but the next round of
CloudStack snapshots left them on XenServer primary storage.
It's a huge PITA.
-----Original Message-----
From: Nik Martin
Organization: Nfina Technologies, Inc.
Reply-To: "cloudstack-users@incubator.apache.org"
I don't have the original message from James Kahn, but the subject is
correct.
You may be running into this:
http://support.citrix.com/article/CTX123400
That article is the reason many people dump XenServer/Citrix. It's a
major pain. I don't know for sure if it exists in 6.0.2, but you might…
is copied to secondary storage
- Snapshot operation ends, XS snapshot (GUID_ROOT-1234_timestampA) remains
on primary storage
Performing a second snapshot operation does the following:
- Snapshot disk
- Snapshot process creates XS snapshot (GUID_ROOT-1234_timestampA)
- Snapshot is copied to second
August 23, 2012 3:47 AM
> To: cloudstack-users@incubator.apache.org
> Subject: CloudStack and XenServer 6.0.2 - stray snapshots on primary
> storage
>
> Stray CloudStack generated snapshots on primary storage are causing
> significant storage use on our XenServer environment. Is th
Stray CloudStack-generated snapshots on primary storage are causing
significant storage use on our XenServer environment. Is this expected
behaviour, a bug, or are we encountering an environmental issue? Is
anybody else seeing this?
One particular storage volume has over 1TB in use, with 659GB…
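For the coalescing angle from CTX123400 mentioned above, a hedged check on the XenServer side, with a hypothetical SR UUID:

    # list VDIs on the affected SR; long snapshot chains show up here
    xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,is-a-snapshot
    # a rescan can prompt the garbage-collector/coalesce pass to run
    xe sr-scan uuid=<sr-uuid>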