That's roughly the performance you can get once your storage system reaches a
'steady' state (i.e. the object count has outgrown memory size). This will
give you an idea of pretty much the worst case.
Jonathan Lu
On 2013/6/18 11:05, Huang Zhiteng wrote:
On Tue, Jun 18, 2013 at 10:42 AM, Jonathan
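A minimal sketch of that kind of steady-state measurement, in Python. The
ObjectStoreClient here is a made-up placeholder for whatever client your
storage system provides (e.g. python-swiftclient for Swift), and the sizes are
purely illustrative:

```python
import os
import random
import time

from mystore import ObjectStoreClient  # placeholder: substitute your real client

RAM_BYTES = 16 * 2**30                  # RAM across the storage nodes (example)
OBJ_SIZE = 4 * 2**20                    # 4 MiB objects
N_OBJECTS = 2 * RAM_BYTES // OBJ_SIZE   # write 2x RAM so caches can't hold it all

client = ObjectStoreClient("http://storage.example:8080")

# Populate until the object count has outgrown memory.
payload = os.urandom(OBJ_SIZE)
for i in range(N_OBJECTS):
    client.put_object("bench", "obj-%d" % i, payload)

# Random reads now mostly miss the page cache, approximating steady state.
samples = []
for _ in range(1000):
    t0 = time.time()
    client.get_object("bench", "obj-%d" % random.randrange(N_OBJECTS))
    samples.append(time.time() - t0)

samples.sort()
print("median GET: %.1f ms, p99: %.1f ms"
      % (samples[len(samples) // 2] * 1e3, samples[int(len(samples) * 0.99)] * 1e3))
```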
the availability of the scheduler service higher.
thanks!
2013-02-19
Wangpan
From: Huang Zhiteng
Sent: 2013-02-19 10:15
Subject: Re: [Openstack] [Nova] Question about multi-scheduler
To: Wangpan hzwang...@corp.netease.com
Cc: openstack community
It seems you also have a tgt patch for HLFS; personally, I'd prefer iSCSI
support over QEMU support, since iSCSI is well supported by almost every
hypervisor.
On Jan 19, 2013 9:23 PM, harryxiyou harryxi...@gmail.com wrote:
On Sat, Jan 19, 2013 at 7:00 PM, Huang Zhiteng winsto...@gmail.com wrote:
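To illustrate the iSCSI point above: exporting a backing file as an iSCSI
target through tgt usually comes down to a few tgtadm calls. A sketch via
Python's subprocess (the IQN and backing path are invented placeholders; it
assumes tgtd is running and you have root):

```python
import subprocess

def tgtadm(*args):
    # Thin wrapper over the tgt admin CLI (requires root and a running tgtd).
    subprocess.run(["tgtadm", "--lld", "iscsi"] + list(args), check=True)

# Placeholder IQN and backing path -- substitute your HLFS-backed device/file.
tgtadm("--mode", "target", "--op", "new", "--tid", "1",
       "--targetname", "iqn.2013-01.org.example:hlfs-vol1")
tgtadm("--mode", "logicalunit", "--op", "new", "--tid", "1", "--lun", "1",
       "--backing-store", "/path/to/hlfs/volume")
tgtadm("--mode", "target", "--op", "bind", "--tid", "1",
       "--initiator-address", "ALL")
```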
For development efforts, it is better to use the openstack-dev list instead of
this general openstack list. You can also join the #openstack-cinder IRC
channel on Freenode for online discussion with Cinder developers.
On Jan 18, 2013 9:27 PM, harryxiyou harryxi...@gmail.com wrote:
On Fri, Jan 18, 2013
tests ;-)
--
Thanks
Harry Wei
(as is typically done).
I'm not saying copying is the right thing to do; I totally agree we
should avoid doing this. Fixing the slowness is also important. Oslo
core devs, please take a look at the review queue, I have patches there
for you. :)
--
Regards
Huang Zhiteng
perhaps my assumptions about why I'm seeing it are incorrect.
Thanks,
-Jon
On Wed, Oct 31, 2012 at 10:07 AM, Vishvananda Ishaya vishvana...@gmail.com wrote:
On Oct 30, 2012, at 7:01 PM, Huang Zhiteng winsto...@gmail.com wrote:
I'd suggest the same ratio too. But besides memory overcommitment, I
suspect this issue is also related to how KVM does memory allocation.
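For reference, the ratio in question is the scheduler's RAM overcommit
setting; in Folsom-era nova.conf that is the ram_allocation_ratio option,
shown here with its usual default purely as illustration, not as a
recommendation from this thread:

```
# nova.conf -- scheduler RAM overcommit (value shown is the common default)
ram_allocation_ratio=1.5
```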
[mailto:openstack-bounces+philip.day=hp@lists.launchpad.net] On Behalf Of Huang Zhiteng
Sent: 10 October 2012 04:28
To: Jonathan Proulx
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Folsom nova-scheduler race condition?
On Tue, Oct 9, 2012 at 10:52 PM, Jonathan Proulx j
that the worst-case pause period (downtime) is less than 2 minutes. In my
previous experience, the downtime for migrating one idle (almost no memory
access) 8 GB VM via 1GbE is less than 1 second; the downtime for migrating an
8 GB VM whose pages get dirtied quickly is 60 seconds. FYI.
--
Regards
Huang Zhiteng
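Those figures are consistent with a simple pre-copy model: a 1GbE link moves
at most roughly 125 MB/s, so one pass over 8 GB takes about 65 seconds, and
the pause is roughly whatever memory is still dirty when the VM stops, divided
by link bandwidth. A rough sketch (the bandwidth and dirty rates are
illustrative; real hypervisors' pre-copy logic is more sophisticated):

```python
# Naive pre-copy live-migration model: resend the dirty set each round until
# it is small, or until dirtying outpaces the link (non-convergence).
def estimated_downtime_s(ram_gb, dirty_mb_s, bw_mb_s=125.0, max_rounds=30):
    remaining_mb = ram_gb * 1024.0
    for _ in range(max_rounds):
        xfer_s = remaining_mb / bw_mb_s      # time to send the current dirty set
        redirtied_mb = dirty_mb_s * xfer_s   # memory dirtied during that transfer
        if redirtied_mb >= remaining_mb:     # can't converge: stop-and-copy it all
            break
        remaining_mb = redirtied_mb
    return remaining_mb / bw_mb_s            # final pause (stop-and-copy)

print(estimated_downtime_s(8, dirty_mb_s=1))    # idle VM: well under a second
print(estimated_downtime_s(8, dirty_mb_s=130))  # dirtying beats 1GbE: ~65 s pause
```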
under netperf may not be necessary, but it should be sufficient.
happy benchmarking,
rick jones
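For anyone reproducing this, a basic netperf run between two instances looks
like the sketch below (the address is a placeholder, netperf must be installed
locally, and netserver must already be listening on the remote side).
TCP_STREAM measures bulk throughput; TCP_RR adds a request/response latency
view:

```python
import subprocess

# Placeholder address; requires netperf locally and netserver on the target.
for test in ("TCP_STREAM", "TCP_RR"):
    subprocess.run(["netperf", "-H", "192.168.0.2", "-t", test, "-l", "30"],
                   check=True)
```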
on that?
Thanks a lot!
that!
Regards,
HUANG, Zhiteng
Intel SSG/SSD/SOTC/PRC Scalability Lab
Building up an open, standard and consistent set will avoid duplicate
effort as sites deploy to production, and will allow us to keep the monitoring
up to date when the internals of OpenStack change.
Tim
From: Huang Zhiteng [mailto:winsto...@gmail.com]
Sent: 09 April
So I was wondering if there's some kind of mechanism to limit the resources
one compute node could use, something like the 'weight' in OpenNebula.
I'm using Cactus (with GridDynamics' RHEL package), default scheduler
policy, one zone only.
Any suggestions?
--
Regards
Huang Zhiteng