[Openstack] [OT] looking for team leader of private cloud

2017-09-06 Thread 风河
Hi,

We (huya.com) are looking for a team leader for the dev/ops of our private cloud
platform.

If you know cloud computing well and have solid knowledge of OpenStack,
that will be great.

You should hold a bachelor's degree in CS. The work location is Guangzhou or Zhuhai city, China.

If anybody is interested, please PM me. Thanks.

regards.
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [OpenStack] Can Mitaka RamFilter support free hugepages?

2017-09-06 Thread Jay Pipes

Sahid, Stephen, what are your thoughts on this?

On 09/06/2017 10:17 PM, Yaguang Tang wrote:
I think the fact that RamFilter can't deal with huge pages is a bug. Due to
this limit, we have to strike a balance between normal memory and huge pages
to use RamFilter and NUMATopologyFilter. What do you think, Jay?



On Wed, Sep 6, 2017 at 9:22 PM, Jay Pipes wrote:


On 09/06/2017 01:21 AM, Weichih Lu wrote:

Thanks for your response.

Does this mean that if I want to create an instance with a 16G-memory
flavor (hw:mem_page_size=large), I need to reserve more than 16GB of memory?
This instance consumes hugepages resources.


 You need to reserve fewer than 50 1GB huge pages if you want to
launch a 16GB instance on a host with 64GB of RAM. Try reserving 32
1GB huge pages.
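
 For example, a minimal sketch of reserving 32 x 1GB huge pages at boot; the
 GRUB file path and update command below are distro-specific assumptions, so
 verify them on your host before applying:

 # in /etc/default/grub, append to the kernel command line:
 #   GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=32"
 sudo grub2-mkconfig -o /boot/grub2/grub.cfg && sudo reboot
 # after the reboot, confirm the hugepage pool and the remaining normal memory
 grep -i huge /proc/meminfo
 free -g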

Best,
-jay

2017-09-06 1:47 GMT+08:00 Jay Pipes <jaypi...@gmail.com>:


 Please remember to add a topic [nova] marker to your subject line.
 Answer below.

 On 09/05/2017 04:45 AM, Weichih Lu wrote:

 Dear all,

 I have a compute node with 64GB RAM. And I set 50 hugepages with
 1GB hugepage size. I used the command "free"; it shows free memory
 is about 12GB. And free hugepages is 50.


 Correct. By assigning hugepages, you use the memory allocated to
 the hugepages.

 Then I launch an instance with 16GB memory and set the flavor tag
 hw:mem_page_size=large. It shows Error: No valid host was found.
 There are not enough hosts available.


 Right, because you have only 12G of RAM available after
 creating/allocating 50G out of your 64G.

 Huge pages are entirely separate from the normal memory that a
 flavor consumes. The 16GB memory in your flavor is RAM consumed on
 the host. The huge pages are individual things that are consumed by
 the NUMA topology that your instance will take. RAM != huge pages.
 Totally different things.
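
 For reference, a minimal sketch of the flavor setup being discussed here;
 the flavor name, sizes, and the image/network placeholders are assumptions,
 only the hw:mem_page_size extra spec comes from this thread:

 openstack flavor create m1.huge --ram 16384 --disk 20 --vcpus 4
 openstack flavor set m1.huge --property hw:mem_page_size=large
 openstack server create --flavor m1.huge --image <image> --network <net> test-hugepage-vm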

 And I check the nova-scheduler log. My compute node is removed by
 RamFilter. I can launch an instance with 8GB memory successfully,
 or I can launch an instance with 16GB memory successfully by
 removing RamFilter.


 That's because RamFilter doesn't deal with huge pages, because huge
 pages are a different resource than memory. The page itself is the
 resource.

 The NUMATopologyFilter is the scheduler filter that evaluates the
 huge page resources on a compute host and determines if there
 are enough *pages* available for the instance. Note that I say
 *pages* because the unit of resource consumption for huge pages is
 not MB of RAM. It's a single memory page.
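
 A minimal sketch of making sure that filter is actually active on a Mitaka
 scheduler node; the option name and the default list below are from memory,
 so check your existing scheduler_default_filters and simply append
 NUMATopologyFilter to whatever is already there (crudini is just one
 convenient way to edit nova.conf):

 crudini --set /etc/nova/nova.conf DEFAULT scheduler_default_filters \
   "RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,NUMATopologyFilter"
 # restart the scheduler afterwards; the service name varies by distro
 sudo systemctl restart openstack-nova-scheduler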

 Please read this excellent article by Steve Gordon for information
 on what NUMA and huge pages are and how to use them in Nova:


http://redhatstackblog.redhat.com/2015/09/15/driving-in-the-fast-lane-huge-page-support-in-openstack-compute/




 Best,
 -jay

 Does RamFilter only check free memory but not free hugepages?
 How can I solve this problem?

 I use openstack mitaka version.

 thanks

 WeiChih, Lu.

 Best Regards.


 ___
 Mailing list:
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


 Post to : openstack@lists.openstack.org

 Unsubscribe :
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




Re: [Openstack] [OpenStack] Can Mitaka RamFilter support free hugepages?

2017-09-06 Thread Yaguang Tang
I think the fact that RamFilter can't deal with huge pages is a bug. Due
to this limit, we have to strike a balance between normal memory and huge
pages to use RamFilter and NUMATopologyFilter. What do you think, Jay?
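
As a rough illustration of that balance, using the numbers from this thread
(RamFilter only sees the non-hugepage RAM, so the host has to keep at least
the flavor's RAM outside the hugepage pool; the 2GB host-OS reserve below is
an assumption):

TOTAL_GB=64; FLAVOR_GB=16; HOST_RESERVED_GB=2
MAX_1G_HUGEPAGES=$(( TOTAL_GB - FLAVOR_GB - HOST_RESERVED_GB ))
echo "reserve at most ${MAX_1G_HUGEPAGES} x 1GB huge pages"   # 46 here; 50 is too many, Jay's 32 is safe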


On Wed, Sep 6, 2017 at 9:22 PM, Jay Pipes  wrote:

> On 09/06/2017 01:21 AM, Weichih Lu wrote:
>
>> Thanks for your response.
>>
>> Does this mean that if I want to create an instance with a 16G-memory
>> flavor (hw:mem_page_size=large), I need to reserve more than 16GB of memory?
>> This instance consumes hugepages resources.
>>
>
> You need to reserve fewer than 50 1GB huge pages if you want to launch a
> 16GB instance on a host with 64GB of RAM. Try reserving 32 1GB huge pages.
>
> Best,
> -jay
>
> 2017-09-06 1:47 GMT+08:00 Jay Pipes <jaypi...@gmail.com>:
>>
>>
>> Please remember to add a topic [nova] marker to your subject line.
>> Answer below.
>>
>> On 09/05/2017 04:45 AM, Weichih Lu wrote:
>>
>> Dear all,
>>
>> I have a compute node with 64GB RAM. And I set 50 hugepages with
>> 1GB hugepage size. I used the command "free"; it shows free memory
>> is about 12GB. And free hugepages is 50.
>>
>>
>> Correct. By assigning hugepages, you use the memory allocated to the
>> hugepages.
>>
>> Then I launch an instance with 16GB memory and set the flavor tag
>> hw:mem_page_size=large. It shows Error: No valid host was found.
>> There are not enough hosts available.
>>
>>
>> Right, because you have only 12G of RAM available after
>> creating/allocating 50G out of your 64G.
>>
>> Huge pages are entirely separate from the normal memory that a
>> flavor consumes. The 16GB memory in your flavor is RAM consumed on
>> the host. The huge pages are individual things that are consumed by
>> the NUMA topology that your instance will take. RAM != huge pages.
>> Totally different things.
>>
>>   And I check nova-scheduler log. My
>>
>> compute is removed by RamFilter. I can launch an instance with
>> 8GB memory successfully, or I can launch an instance with 16GB
>> memory successfully by removing RamFilter.
>>
>>
>> That's because RamFilter doesn't deal with huge pages. Because huge
>> pages are a different resource than memory. The page itself is the
>> resource.
>>
>> The NUMATopologyFilter is the scheduler filter that evaluates the
>> huge page resources on a compute host and determines if there
>> are enough *pages* available for the instance. Note that I say
>> *pages* because the unit of resource consumption for huge pages is
>> not MB of RAM. It's a single memory page.
>>
>> Please read this excellent article by Steve Gordon for information
>> on what NUMA and huge pages are and how to use them in Nova:
>>
>> http://redhatstackblog.redhat.com/2015/09/15/driving-in-the-fast-lane-huge-page-support-in-openstack-compute/
>>
>> Best,
>> -jay
>>
>> Does RamFilter only check free memory but not free hugepages?
>> How can I solve this problem?
>>
>> I use openstack mitaka version.
>>
>> thanks
>>
>> WeiChih, Lu.
>>
>> Best Regards.
>>
>>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>



-- 
Tang Yaguang
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Re: [Openstack] [PTG] Video interviews at PTG, now scheduling

2017-09-06 Thread Rich Bowen
Just a reminder. Slots are starting to fill up.

On Tue, Aug 22, 2017, 10:43 Rich Bowen  wrote:

> Now that the PTG schedule is up, I'd like to invite you to sign up for
> my video interview series. I'll be conducting interviews at the PTG,
> with the goal of:
>
> * Telling our users what's new in Pike, and what to expect in Queens
> * Putting a human face on the upstream developer community
> * Showing the awesome cooperation and collaboration between projects and
> between companies
>
> Sign up at:
>
> https://docs.google.com/spreadsheets/d/1KNHuo9Yb5kbjZAYGQ_PAo-YFndD8QTdaKzaPoct_aaU/edit?usp=sharing
>
> A few tips:
>
> * Talk with your project about who should do the interview, and what you
> want to highlight.
> * Consider having several people sign up to do an interview (no more
> than 3, please)
> * Consider doing an interview that is cross-project, to talk about this
> collaboration
> * If you know of a user/customer who will be at the PTG who has a cool
> use-case, encourage them to sign up
> * Please read the notes on the "Planning for your interview" tab of the
> spreadsheet.
>
> If you'd like to see examples of what I'm going for, I did this on a
> much smaller scale in Atlanta, with just Red Hat engineers. Please see:
>
> https://www.youtube.com/watch?v=5kT-Sv3rkTw&list=PLOuHvpVx7kYksG0NFaCaQsSkrUlj3Oq4S
>
> Thanks!
>
> --
> Rich Bowen - rbo...@redhat.com
> RDO Community Liaison
> http://rdoproject.org
> @RDOCommunity
>
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Fuel] storage question. (Fuel 10 Newton deploy with storage nodes)

2017-09-06 Thread Jim Okken
Thanks for the help once again, Eddie!

I'm sure you remember I have that fiber channel SAN configuration.


This system has a 460GB disk mapped to it from the fiber channel SAN. As
far as I can tell this disk isn't much different to the OS than a local
SATA drive.
There is also an internal 32GB USB/Flash drive in this system which isn't
even shown in the Fuel 10 GUI.

In the bootstrap OS I see:

ls /dev/disk/by-path:
pci-:00:14.0-usb-0:3.1:1.0-scsi-0:0:0:0
pci-:09:00.0-fc-0x247000c0ff25ce6d-lun-12
pci-:09:00.0-fc-0x207000c0ff25ce6d-lun-12

both those xxx-lun-12 devices are the same drive.


I also see one /dev/dm-X device:
 lsblk /dev/dm-0
NAME                          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
3600c0ff0001ea00f5d1fa4590100 252:0    0 429.3G  0 mpath


there are 3 /dev/sdX devices

1.
(parted) select /dev/sda
Using /dev/sda
(parted) print
Model: HP iLO Internal SD-CARD (scsi)
Disk /dev/sdd: 32.1GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start  End  Size  Type  File system  Flags


2.
(parted) select /dev/sdb
Using /dev/sdb
(parted) print
Error: /dev/sdb: unrecognised disk label
Model: HP MSA 2040 SAN (scsi)
Disk /dev/sdb: 461GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:


3.
(parted) select /dev/sdc
Using /dev/sdc
(parted) print
Error: /dev/sdc: unrecognised disk label
Model: HP MSA 2040 SAN (scsi)
Disk /dev/sdc: 461GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:


/dev/sdb and /dev/sdc are the same disk.




I see this bug, but I wouldn't know how to even start applying a patch if it
applies to my situation:
https://bugs.launchpad.net/fuel/+bug/1652788
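
For reference, a minimal sketch of commands for checking how the multipath
partition mappings actually get named on this node; the device names are the
ones from the output above, and kpartx's -p flag sets the partition-name
delimiter (the same "-part" that shows up in the fuel-agent/udev log):

multipath -ll                        # list multipath maps and their backing paths
ls -l /dev/mapper/                   # see which -partN / pN mappings already exist
dmsetup ls --target linear           # partition maps sitting on top of the mpath device
kpartx -l -p -part /dev/dm-0         # list the mappings kpartx would create with the "-part" delimiter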


thanks!

-- Jim

On Mon, Sep 4, 2017 at 2:34 AM, Eddie Yen  wrote:

> Hi
>
> Can you describe your disk configuration and partitioning?
>
> 2017-09-02 4:57 GMT+08:00 Jim Okken :
>
>> Hi all,
>>
>>
>>
>> Can you offer any insight into this failure I get when deploying 2 compute
>> nodes using Fuel 10, please? (controller etc. nodes are all deployed/working)
>>
>>
>>
>> fuel_agent.cmd.agent PartitionNotFoundError: Partition
>> /dev/mapper/3600c0ff0001ea00f521fa4590100-part2 not found after
>> creation fuel_agent.cmd.agent [-] Partition 
>> /dev/mapper/3600c0ff0001ea00f521fa4590100-part2
>> not found after creation
>>
>>
>>
>>
>>
>> ls -al /dev/mapper
>>
>> 600c0ff0001ea00f521fa4590100 -> ../dm-0
>>
>> 600c0ff0001ea00f521fa4590100-part1 -> ../dm-1
>>
>> 600c0ff0001ea00f521fa4590100p2 -> ../dm-2
>>
>>
>>
>> Why the 2nd partition was created and actually named "...000p2" rather
>> than "...000-part2" is beyond me.
>>
>>
>>
>>  More logging if it helps, lots of failures:
>>
>>
>>
>> 2017-09-01 18:42:32ERRpuppet-user[3642]:  /bin/bash
>> "/etc/puppet/shell_manifests/provision_56_command.sh" returned 255
>> instead of one of [0]
>>
>> 2017-09-01 18:42:32NOTICE puppet-user[3642]:
>> (/Stage[main]/Main/Exec[provision_56_shell]/returns) Partition
>> /dev/mapper/3600c0ff0001ea00f5d1fa4590100-part2 not found after
>> creation
>>
>> 2017-09-01 18:42:32NOTICE puppet-user[3642]:
>> (/Stage[main]/Main/Exec[provision_56_shell]/returns) Unexpected error
>>
>> 2017-09-01 18:42:32NOTICE puppet-user[3642]:
>> (/Stage[main]/Main/Exec[provision_56_shell]/returns) /bin/bash: warning:
>> setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
>>
>> 2017-09-01 18:42:31WARNING   systemd-udevd[4982]:
>> Process '/sbin/kpartx -u -p -part /dev/dm-0' failed with exit code 1.
>>
>> 2017-09-01 18:42:31INFO  multipathd[1012]:  dm-3: remove map
>> (uevent)
>>
>> 2017-09-01 18:42:31WARNING   systemd-udevd[4964]:
>> Process '/usr/bin/partx -d --nr 1-1024 /dev/sdc' failed with exit code 1.
>>
>> 2017-09-01 18:42:31WARNING   systemd-udevd[4963]:
>> Process '/usr/bin/partx -d --nr 1-1024 /dev/sdb' failed with exit code 1.
>>
>> 2017-09-01 18:42:31ERRmultipath:  /dev/sda: can't store
>> path info
>>
>> 2017-09-01 18:42:30WARNING   systemd-udevd[4889]:
>> Process '/sbin/kpartx -u -p -part /dev/dm-0' failed with exit code 1.
>>
>> 2017-09-01 18:42:29INFO  multipathd[1012]:  dm-3: remove map
>> (uevent)
>>
>> 2017-09-01 18:42:29WARNING   systemd-udevd[4866]:
>> Process '/usr/bin/partx -d --nr 1-1024 /dev/sdb' failed with exit code 1.
>>
>> 2017-09-01 18:42:29WARNING   systemd-udevd[4867]:
>> Process '/usr/bin/partx -d --nr 1-1024 /dev/sdc' failed with exit code 1.
>>
>> 2017-09-01 18:42:29ERRmultipath:  /dev/sda: can't store
>> path info
>>
>> 2017-09-01 18:42:28WARNING   systemd-udevd[4791]:
>> Process '/sbin/kpartx -u -p -part /dev/dm-0' failed with exit code 1.
>>
>> 2017-09-01 18:42:28INFO  multipathd[1012]:  dm-3: remove map
>> (uevent)
>>
>> 2017-09-01 18:42:28WARNING   systemd-udevd[4773]:
>> Proce

[Openstack] Devstack installation failed - Newton - libvirt-python error

2017-09-06 Thread Silvia Fichera
Hi all,
since I need to use the networking-onos module, and since I know that Mitaka
is now EOL and it does not work with Ocata (which I've successfully
installed), I'm trying to downgrade to Newton. So I've unstacked, removed
the devstack Ocata folder, removed all the files related to the previous
installation (dist-packages, site-packages, /opt/stack, etc.), and I've
cloned the Newton branch, created the new local.conf file, and stacked.
Now I have this error related to libvirt:

Collecting libvirt-python===2.1.0 (from -c
/opt/stack/requirements/upper-constraints.txt (line 169))
  Using cached libvirt-python-2.1.0.tar.gz
Building wheels for collected packages: libvirt-python
  Running setup.py bdist_wheel for libvirt-python ... error
  Complete output from command /usr/bin/python -u -c "import setuptools,
tokenize;__file__='/tmp/pip-build-QIxCV3/libvirt-python/setup.py';f=getattr(tokenize,
'open', open)(__file__);code=f.read().replace('\r\n',
'\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d
/tmp/tmpy6bFGcpip-wheel- --python-tag cp27:
  running bdist_wheel
  running build
  /usr/bin/pkg-config --print-errors --atleast-version=0.9.11 libvirt
  /usr/bin/python generator.py libvirt
/usr/share/libvirt/api/libvirt-api.xml
  Found 415 functions in /usr/share/libvirt/api/libvirt-api.xml
  Found 0 functions in libvirt-override-api.xml
  Generated 344 wrapper functions
  Missing type converters:
  virConnectNodeDeviceEventGenericCallback:1
  ERROR: failed virConnectNodeDeviceEventRegisterAny
  error: command '/usr/bin/python' failed with exit status 1

  
  Failed building wheel for libvirt-python
  Running setup.py clean for libvirt-python
Failed to build libvirt-python
Installing collected packages: libvirt-python
  Found existing installation: libvirt-python 3.0.0
DEPRECATION: Uninstalling a distutils installed project
(libvirt-python) has been deprecated and will be removed in a future
version. This is due to the fact that uninstalling a distutils project will
only partially uninstall the project.
Uninstalling libvirt-python-3.0.0:
  Successfully uninstalled libvirt-python-3.0.0
  Running setup.py install for libvirt-python ... error
Complete output from command /usr/bin/python -u -c "import setuptools,
tokenize;__file__='/tmp/pip-build-QIxCV3/libvirt-python/setup.py';f=getattr(tokenize,
'open', open)(__file__);code=f.read().replace('\r\n',
'\n');f.close();exec(compile(code, __file__, 'exec'))" install --record
/tmp/pip-NPU6Mp-record/install-record.txt
--single-version-externally-managed --compile:
running install
running build
/usr/bin/pkg-config --print-errors --atleast-version=0.9.11 libvirt
/usr/bin/python generator.py libvirt
/usr/share/libvirt/api/libvirt-api.xml
Found 415 functions in /usr/share/libvirt/api/libvirt-api.xml
Found 0 functions in libvirt-override-api.xml
Generated 344 wrapper functions
Missing type converters:
virConnectNodeDeviceEventGenericCallback:1
ERROR: failed virConnectNodeDeviceEventRegisterAny
error: command '/usr/bin/python' failed with exit status 1


  Rolling back uninstall of libvirt-python
Command "/usr/bin/python -u -c "import setuptools,
tokenize;__file__='/tmp/pip-build-QIxCV3/libvirt-python/setup.py';f=getattr(tokenize,
'open', open)(__file__);code=f.read().replace('\r\n',
'\n');f.close();exec(compile(code, __file__, 'exec'))" install --record
/tmp/pip-NPU6Mp-record/install-record.txt
--single-version-externally-managed --compile" failed with error code 1 in
/tmp/pip-build-QIxCV3/libvirt-python/
+inc/python:pip_install:1  exit_trap
+./stack.sh:exit_trap:494  local r=1
++./stack.sh:exit_trap:495  jobs -p
+./stack.sh:exit_trap:495  jobs=
+./stack.sh:exit_trap:498  [[ -n '' ]]
+./stack.sh:exit_trap:504  kill_spinner
+./stack.sh:kill_spinner:390   '[' '!' -z '' ']'
+./stack.sh:exit_trap:506  [[ 1 -ne 0 ]]
+./stack.sh:exit_trap:507  echo 'Error on exit'
Error on exit
+./stack.sh:exit_trap:508  generate-subunit 1504715723 118
fail
+./stack.sh:exit_trap:509  [[ -z /opt/stack/logs ]]
+./stack.sh:exit_trap:512
/home/onossona/devstack/tools/worlddump.py -d /opt/stack/logs
World dumping... see /opt/stack/logs/worlddump-2017-09-06-163721.txt for
details
+./stack.sh:exit_trap:518  exit 1
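
The failure looks like the Newton upper-constraint (libvirt-python===2.1.0)
being built against a newer system libvirt (the log above shows libvirt-python
3.0.0 was already installed). A minimal sketch of one possible workaround I'm
considering, assuming that mismatch is the cause; the replacement version must
match the system libvirt, so verify it first:

pkg-config --modversion libvirt          # confirm the system libvirt version (e.g. 3.0.0)
sed -i 's/^libvirt-python===2.1.0$/libvirt-python===3.0.0/' \
    /opt/stack/requirements/upper-constraints.txt
./unstack.sh && ./stack.sh               # re-run devstack with the relaxed constraint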


Does anyone know how to solve this, or whether the sketch above is the right
approach? Also, when will networking-onos be updated?

Thanks a lot


-- 
Silvia Fichera
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [Glance] [Cinder] [RFC] Introducing a de-centralized registry for Glance Images and Cinder Volumes

2017-09-06 Thread Constantinos Venetsanopoulos
Hello all,

My name is Constantinos Venetsanopoulos and I am a Founder and
the CEO of Arrikto. I'm writing to share what we believe is some
very interesting news related to OpenStack, and would love to get
the community's feedback and comments.

At Arrikto, we are building the world's first peer-to-peer network
for syncing and sharing snapshots of VMs and Containers.

Although the technology and product were not initially integrated
with OpenStack, after talking with the people who have designed,
built, and now run some of the world's largest OpenStack
installations, we found out that the problem we are trying to solve
exists in OpenStack deployments, too. Thus, we decided to integrate
with OpenStack. And it worked.

So, what is the problem?

A lot of people in the ecosystem are running either multiple smaller
OpenStack installations or multi-region larger ones. In both cases,
Glance stores are separate and they have to copy images around.
Similarly for Cinder, volumes are locked in a specific backend,
and it is very difficult to move them to a different location.

Currently, administrators try to solve the problem in various ways;
they may sync images manually, use some Glance-specific image
syncing tools, or finally, even try to sync the whole store.

These approaches are workarounds: They do not allow end users to see
or control the process, assume a single administrator controls all
underlying resources, cannot work efficiently when synchronizing
often-changing images from one source location to multiple locations,
and none addresses the problem for Cinder volumes.
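
For context, the manual image syncing mentioned above typically looks roughly
like the following sketch, assuming two openrc files for the two deployments
or regions; image and file names are placeholders:

source region-one-openrc
openstack image save --file ubuntu-16.04.qcow2 ubuntu-16.04
source region-two-openrc
openstack image create --disk-format qcow2 --container-format bare \
  --file ubuntu-16.04.qcow2 ubuntu-16.04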

Looking at the architecture, we thought that this is the definition of
a problem where a P2P network fits perfectly. In this scenario, our
software appears as a Glance store driver, Cinder volume driver, and
libvirt volume driver for Nova on all participating OpenStack
deployments/regions. Then:

1. An end user snapshots one or more disks, even of a running VM on
OpenStack. They publish their content to the network's Registry service.
Publishing creates a reference on the Registry, i.e., a short, unique
URL.

2. Another user, on the same or a completely distinct OpenStack
installation subscribes to this link. Their installation establishes
links with other subscribers, to form a P2P swarm and synchronize the
snapshot(s).

3. Finally, they can present a snapshot as a Glance image,
Cinder volume, or Nova instance directly on the destination.

In this manner, the network's Registry acts as a de-centralized
registry for Glance Images and Cinder Volumes; It only holds
references to the content, and all data are replicated at the
participating entities.

To learn more and see some demo videos, feel free to take a look
here:

http://www.arrikto.com/howitworks

We are currently testing the product with initial, select customers
and we would love to hear your comments, feedback, and questions on
the problem and proposed solution. If you've bumped into it yourself,
we definitely want to hear your thoughts.

Looking forward to hearing from you,
Constantinos

-- 
Constantinos Venetsanopoulos
CEO, Arrikto Inc.
c...@arrikto.com


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [OpenStack] Can Mitaka RamFilter support free hugepages?

2017-09-06 Thread Jay Pipes

On 09/06/2017 01:21 AM, Weichih Lu wrote:

Thanks for your response.

Does this mean that if I want to create an instance with a 16G-memory flavor
(hw:mem_page_size=large), I need to reserve more than 16GB of memory?

This instance consumes hugepages resources.


You need to reserve fewer than 50 1GB huge pages if you want to launch a
16GB instance on a host with 64GB of RAM. Try reserving 32 1GB huge pages.


Best,
-jay

2017-09-06 1:47 GMT+08:00 Jay Pipes:


Please remember to add a topic [nova] marker to your subject line.
Answer below.

On 09/05/2017 04:45 AM, Weichih Lu wrote:

Dear all,

I have a compute node with 64GB RAM. And I set 50 hugepages with
1GB hugepage size. I used the command "free"; it shows free memory
is about 12GB. And free hugepages is 50.


Correct. By assigning hugepages, you use the memory allocated to the
hugepages.

Then I launch an instance with 16GB memory and set the flavor tag
hw:mem_page_size=large. It shows Error: No valid host was found.
There are not enough hosts available.


Right, because you have only 12G of RAM available after
creating/allocating 50G out of your 64G.

Huge pages are entirely separate from the normal memory that a
flavor consumes. The 16GB memory in your flavor is RAM consumed on
the host. The huge pages are individual things that are consumed by
the NUMA topology that your instance will take. RAM != huge pages.
Totally different things.

And I check the nova-scheduler log. My compute node is removed by RamFilter.
I can launch an instance with 8GB memory successfully, or I can launch an
instance with 16GB memory successfully by removing RamFilter.


That's because RamFilter doesn't deal with huge pages. Because huge
pages are a different resource than memory. The page itself is the
resource.

The NUMATopologyFilter is the scheduler filter that evaluates the
huge page resources on a compute host and determines if there
are enough *pages* available for the instance. Note that I say
*pages* because the unit of resource consumption for huge pages is
not MB of RAM. It's a single memory page.

Please read this excellent article by Steve Gordon for information
on what NUMA and huge pages are and how to use them in Nova:


http://redhatstackblog.redhat.com/2015/09/15/driving-in-the-fast-lane-huge-page-support-in-openstack-compute/



Best,
-jay

Does RamFilter only check free memory but not free hugepages?
How can I solve this problem?

I use openstack mitaka version.

thanks

WeiChih, Lu.

Best Regards.





___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] instance snapshot

2017-09-06 Thread Volodymyr Litovka

Hi,

in my installation I'm going to use volumes to boot instances, not
ephemeral disks. And I faced unexpected (for me, at least) behaviour
when trying to implement snapshotting: an image created with "openstack
server image create" can't be used to boot from a volume, i.e.


I create an image using "openstack server image create --name jsnap jex-n1"
and then:


- creating a server using an ephemeral disk:
* openstack server create jTest [ ... ] --image jsnap
is OK

- creating a server using a volume populated from the image:
* openstack volume create jVol --size 8 --image jsnap --bootable
* openstack server create jTest [ ... ] --volume jVol
FAILS with the following error: "Invalid image metadata. Error: A
list is required in field img_block_device_mapping, not a unicode (HTTP
400)".


- creating a server using a volume populated from the snapshot (which
corresponds to the image):
* openstack volume create jVol --size 8 --snapshot
f0ad0bf0-97f4-49df-b334-71b9eb639567 --bootable

* openstack server create jTest [ ... ] --volume jVol
is OK

Assuming this is correct (oh, really? I can't find this topic in the
documentation), I don't need the image as it's a senseless entity (I can't use
it for booting from volumes) and just need the snapshot. But I still like
https://blueprints.launchpad.net/nova/+spec/quiesced-image-snapshots-with-qemu-guest-agent
so there are two questions regarding this:


1) will the following sequence do exactly the same as "server image create"
does?

- "manually" freeze the VM filesystem (using the "guest-fsfreeze-freeze" command)
- create a snapshot using "openstack volume snapshot create --volume
 --force "

- "manually" unfreeze the VM filesystem (using the "guest-fsfreeze-thaw" command)

2) and, when using the Cinder API, is there a way to synchronously wait for
the end of snapshot creation? It's useful in order to thaw the filesystem
immediately after the snapshot is done - neither a few seconds after nor
before.
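
A minimal sketch of how I imagine the whole sequence, assuming the guest runs
qemu-guest-agent and polling the snapshot status as a stand-in for a truly
synchronous wait; the domain, volume, and snapshot names are placeholders:

virsh qemu-agent-command <domain> '{"execute":"guest-fsfreeze-freeze"}'
openstack volume snapshot create --volume <volume> --force <snap-name>
# poll until the snapshot leaves "creating", then thaw right away
until [ "$(openstack volume snapshot show <snap-name> -f value -c status)" != "creating" ]; do
  sleep 1
done
virsh qemu-agent-command <domain> '{"execute":"guest-fsfreeze-thaw"}'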


Thanks.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [Octavia] octavia manual install guide?

2017-09-06 Thread 한승진
Hi, I'm trying to install Octavia manually, however I'm now struggling with
a lot of error messages.

Is there any manual installation guide for Octavia?

Thanks.
John Haan
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack