Re: [Users] Can't create directory /sys/fs/cgroup/memory/machine.slice/ Cannot allocate memory

2021-02-11 Thread Kir Kolyshkin
On Thu, Feb 11, 2021 at 12:59 AM Сергей Мамонов  wrote:
>
> And after migrating all containers to another node it still shows 63745 cgroups -
>
> cat /proc/cgroups
> #subsys_name hierarchy num_cgroups enabled
> cpuset 7 2 1
> cpu 10 2 1
> cpuacct 10 2 1
> memory 2 63745 1

Looks like a leak (or a bug in memory accounting which prevents
cgroups from being released).
You can check the number of memory cgroups with something like

find /sys/fs/cgroup/memory -type d | wc -l

If you see a large number, go explore those cgroups (check
cgroup.procs, memory.usage_in_bytes).
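
For example, a quick sketch (paths assume the standard cgroup v1 memory
mount) that lists cgroups which hold no tasks but still pin memory -- the
usual suspects for such a leak:

for d in $(find /sys/fs/cgroup/memory -type d); do
    if [ ! -s "$d/cgroup.procs" ]; then
        usage=$(cat "$d/memory.usage_in_bytes" 2>/dev/null)
        [ "${usage:-0}" -gt 0 ] && echo "$usage $d"
    fi
done | sort -rn | head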

> devices 11 2 1
> freezer 17 2 1
> net_cls 12 2 1
> blkio 1 4 1
> perf_event 13 2 1
> hugetlb 14 2 1
> pids 3 68 1
> ve 6 1 1
> beancounter 4 3 1
> net_prio 12 2 1
>
> On Wed, 10 Feb 2021 at 18:47, Сергей Мамонов  wrote:
>>
>> And it is definitely it -
>> grep -E "memory|num_cgroups" /proc/cgroups
>> #subsys_name hierarchy num_cgroups enabled
>> memory 2 65534 1
>>
>> After migrating some of the containers to another node, num_cgroups went down to 
>> 65365 and the stopped container could be started without the
>> `Can't create directory /sys/fs/cgroup/memory/machine.slice/1000133882: 
>> Cannot allocate memory` error.
>>
>> But I still don't understand why num_cgroups for memory is so big.
>>
>> It is ~460 per container instead of 60 or fewer per container on other nodes 
>> (with the same kernel version too).
>>
>> On Wed, 10 Feb 2021 at 17:48, Сергей Мамонов  wrote:
>>>
>>> Hello!
>>>
>>> Looks like we reproduced this problem too.
>>>
>>> kernel - 3.10.0-1127.18.2.vz7.163.46
>>>
>>> Same error -
>>> Can't create directory /sys/fs/cgroup/memory/machine.slice/1000133882: 
>>> Cannot allocate memory
>>>
>>> Same ok output for
>>> /sys/fs/cgroup/memory/*limit_in_bytes
>>> /sys/fs/cgroup/memory/machine.slice/*limit_in_bytes
>>>
>>> Have a lot of free memory on node (per numa too).
>>>
>>> Only that looks really strange -
>>> grep -E "memory|num_cgroups" /proc/cgroups
>>> #subsys_name hierarchy num_cgroups enabled
>>> memory 2 65534 1
>>>
>>> huge num_cgroups only on this node
>>>
>>> cat /proc/cgroups
>>> #subsys_name hierarchy num_cgroups enabled
>>> cpuset 7 144 1
>>> cpu 10 263 1
>>> cpuacct 10 263 1
>>> memory 2 65534 1
>>> devices 11 1787 1
>>> freezer 17 144 1
>>> net_cls 12 144 1
>>> blkio 1 257 1
>>> perf_event 13 144 1
>>> hugetlb 14 144 1
>>> pids 3 2955 1
>>> ve 6 143 1
>>> beancounter 4 143 1
>>> net_prio 12 144 1
>>>
>>> On Thu, 28 Jan 2021 at 14:22, Konstantin Khorenko  
>>> wrote:

 Maybe you hit a memory shortage in a particular NUMA node only, for example.

 # numactl --hardware
 # numastat -m


 Or go the hard way - trace the kernel to find where exactly we get -ENOMEM:

 trace the kernel function cgroup_mkdir() using /sys/kernel/debug/tracing/
 with the function_graph tracer.


 https://lwn.net/Articles/370423/
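
 A minimal sketch of that ftrace approach (assuming debugfs is mounted at
 /sys/kernel/debug, which is the default):

 cd /sys/kernel/debug/tracing
 echo cgroup_mkdir > set_graph_function
 echo function_graph > current_tracer
 echo 1 > tracing_on
 # reproduce the failing "vzctl start" here
 echo 0 > tracing_on
 less trace   # look for the call path that returns -12 (ENOMEM)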

 --
 Best regards,

 Konstantin Khorenko,
 Virtuozzo Linux Kernel Team

 On 01/28/2021 12:43 PM, Joe Dougherty wrote:

 I checked that, doesn't appear to be the case.

 # pwd
 /sys/fs/cgroup/memory
 # cat *limit_in_bytes
 9223372036854771712
 9223372036854767616
 2251799813685247
 2251799813685247
 9223372036854771712
 9223372036854771712
 9223372036854771712
 # cat *failcnt
 0
 0
 0
 0
 0

 # pwd
 /sys/fs/cgroup/memory/machine.slice
 # cat *limit_in_bytes
 9223372036854771712
 9223372036854767616
 9223372036854771712
 9223372036854771712
 9223372036854771712
 9223372036854771712
 9223372036854771712
 # cat *failcnt
 0
 0
 0
 0
 0



 On Thu, Jan 28, 2021 at 2:47 AM Konstantin Khorenko 
  wrote:
>
> Hi Joe,
>
> I'd suggest checking the memory limits for the root and "machine.slice" memory 
> cgroups:
>
> /sys/fs/cgroup/memory/*limit_in_bytes
> /sys/fs/cgroup/memory/machine.slice/*limit_in_bytes
>
> All of them should be unlimited.
>
> If not, find out who limits them.
>
> --
> Best regards,
>
> Konstantin Khorenko,
> Virtuozzo Linux Kernel Team
>
> On 01/27/2021 10:28 PM, Joe Dougherty wrote:
>
> I'm running into an issue on only 1 of my OpenVZ 7 nodes where it's 
> unable to create a directory under /sys/fs/cgroup/memory/machine.slice due 
> to "Cannot allocate memory" whenever I try to start a new container or 
> restart an existing one. I've been trying to research this but I'm 
> unable to find any concrete info on what could cause this. It appears to 
> be memory related because sometimes if I issue "echo 1 > 
> /proc/sys/vm/drop_caches" it allows me to start a container (this only 
> works sometimes), but my RAM usage is extremely low with no swapping 
> (swappiness even set to 0 for testing). Thank you in advance for your 
> help.
>
>
> Example:
> # vzctl start 9499
>>>

Re: [Users] OpenVZ 7 action scripts

2017-02-06 Thread Kir Kolyshkin

premount and postumount are OpenVZ legacy specific; they never made
their way to VZ (and OpenVZ 7). If you wish to have those, file an issue 
in Jira.


Better yet, work on it and submit a merge request (at https://src.openvz.org/).

Kir


On 02/06/2017 10:00 AM, Devon wrote:

Re: https://openvz.org/Man/vzctl.8#ACTION_SCRIPTS

Action scripts seem to be missing from the OpenVZ 7 documentation.

I was able to find some documentation for Virtuozzo: 
http://docs.virtuozzo.com/virtuozzo_7_command_line_reference/managing-containers/prlctlct.html#action-scripts 
however, the ones I have used the most in OpenVZ for custom 
configuration (premount and postmount) seem to be missing.


Are all the action scripts available and not documented or are these 
action scripts (specifically premount and postmount) no longer 
available? I can run some tests if necessary but was wondering if 
someone has already been using them.


Regards,
Devon


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] CT live migration stages duration

2017-01-23 Thread Kir Kolyshkin

On 01/22/2017 11:56 PM, kna...@gmail.com wrote:

Hello all!

I'd like to do some research on how long each stage of OpenVZ CT live 
migration takes.
I wonder if there is any single source of such information with 
particular timestamps? As far as I understand, the "-v" flag of the vzmigrate 
command can't help me much with what I'd like to achieve. The "-t" flag 
shows some suspend statistics.


When doing live migration, a few timings might be interesting:
1. Frozen time (time for which the container was frozen)
2. Total migration time (with or without data migration)

From the availability point of view, frozen time is important and total 
migration time is not,
so we try to minimize the frozen time even at the expense of the total 
migration time. One example
of this strategy is using two rsyncs, before and after the freeze. If we 
wanted to minimize
the total migration time, we'd do only one rsync, after the freeze.

This is also the reason why vzmigrate -t shows various timings related 
to the freeze. Timings
of the preparation and the first rsync are not shown, as this is not something we 
optimize for.


Anyway, as pointed out by Scott, it is easy to augment the script to print 
more timestamps,
as it's a shell script.
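
For example (a sketch, not part of the stock script; vzmigrate is typically
/usr/sbin/vzmigrate), you could sprinkle lines like this around the stages
you want to measure:

echo "$(date +%s.%N) before first rsync" >> /tmp/vzmigrate.timings
echo "$(date +%s.%N) after first rsync"  >> /tmp/vzmigrate.timings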

Kir

The most relevant source is the server logs, on both the source and destination 
nodes. But it looks like I have to parse both of them to get the 
complete picture of the CT migration process.
I've tried different values for LOG_LEVEL in /etc/vz/vz.conf (1, 5, 10, 
with a 'service vz restart' after each) but it seems like the amount of 
detail appearing in the log does not change, despite what is written 
in 'man vz.conf':

LOG_LEVEL=number
  Set the logging level for the log file (does not affect 
console output).  The greater the number is, the more information will 
be logged to the LOGFILE. Default is 0, which means to log normal 
messages and errors. If set to -1, only errors will be logged.
From that description it's not clear what the maximum reasonable value 
for that parameter is.


Part of the /var/log/vzctl.log on source:
2017-01-23T10:28:23+0300 vzctl : CT 100 : Setting up checkpoint...
2017-01-23T10:28:23+0300 vzctl : CT 100 :   set CPU flags..
2017-01-23T02:28:23-0500 vzctl : CT 100 :   suspend...
2017-01-23T02:28:23-0500 vzctl : CT 100 :   get context...
2017-01-23T10:28:23+0300 vzctl : CT 100 : Checkpointing completed 
successfully

2017-01-23T10:28:23+0300 vzctl : CT 100 : Setting up checkpoint...
2017-01-23T10:28:23+0300 vzctl : CT 100 :   join context..
2017-01-23T02:28:23-0500 vzctl : CT 100 :   dump...
2017-01-23T10:28:23+0300 vzctl : CT 100 : Checkpointing completed 
successfully

2017-01-23T10:28:24+0300 vzctl : CT 100 : Killing...
2017-01-23T10:28:24+0300 vzctl : CT 100 :   put context
2017-01-23T10:28:24+0300 vzctl : CT 100 : The ploop library has been 
loaded successfully

2017-01-23T10:28:24+0300 : Unmounting file system at /vz/root/100
2017-01-23T10:28:24+0300 : Unmounting device /dev/ploop39934
2017-01-23T10:28:25+0300 vzctl : CT 100 : Container is unmounted
2017-01-23T10:28:25+0300 vzctl : CT 100 : Destroying container private 
area: /vz/private/100
2017-01-23T10:28:26+0300 vzctl : CT 100 : Container private area was 
destroyed
2017-01-23T10:28:45+0300 vzctl : CT 100 : stat(/vz/root/100): No such 
file or directory
2017-01-23T10:28:46+0300 vzctl : CT 100 : CT configuration saved to 
/etc/vz/conf/100.conf
2017-01-23T10:28:46+0300 vzctl : CT 100 : CT configuration saved to 
/etc/vz/conf/100.conf

2017-01-23T10:29:01+0300 vzctl : CT 100 : Restoring container ...
2017-01-23T10:29:01+0300 vzctl : CT 100 : The ploop library has been 
loaded successfully
2017-01-23T10:29:01+0300 : Opening delta 
/vz/private/100/root.hdd/root.hdd
2017-01-23T10:29:01+0300 : Adding delta dev=/dev/ploop39934 
img=/vz/private/100/root.hdd/root.hdd (rw)

2017-01-23T10:29:01+0300 : Running: fsck.ext4 -p /dev/ploop39934p1
2017-01-23T10:29:01+0300 : Mounting /dev/ploop39934p1 at /vz/root/100 
fstype=ext4 data='balloon_ino=12,'

2017-01-23T10:29:01+0300 vzctl : CT 100 : Container is mounted
2017-01-23T10:29:01+0300 vzctl : CT 100 : Running: 
/usr/libexec/vzctl/scripts/vps-prestart

2017-01-23T10:29:01+0300 vzctl : CT 100 : Setting CPU units: 1000
2017-01-23T10:29:01+0300 vzctl : CT 100 : Setting CPUs: 2
2017-01-23T10:29:01+0300 vzctl : CT 100 : Setting devices
2017-01-23T10:29:01+0300 vzctl : CT 100 : Container start in progress...
2017-01-23T10:29:01+0300 vzctl : CT 100 : Restoring completed 
successfully

2017-01-23T10:29:01+0300 vzctl : CT 100 : Resuming...
2017-01-23T10:29:01+0300 vzctl : CT 100 :   put context

A part of the /var/log/vzctl.log on destination (lines with ' 
Executing command: /bin/cat /proc/net/dev' and 'Executing command: 
free -k ' are removed):
2017-01-23T10:28:08+0300 vzctl : CT 100 : stat(/vz/root/100): No such 
file or directory
2017-01-23T10:28:09+0300 vzctl : CT 100 : CT configuration saved to 
/etc/vz/conf/100.conf
2017-01-23T10:28:09+0300 vzct

[Users] OpenVZ vz Virtuozzo

2016-11-22 Thread Kir Kolyshkin

Hi,

I'm writing this to clarify the distinction between OpenVZ and Virtuozzo,
and the relation between the two.

Fortunately, there is no need to tell you what OpenVZ is, so I'll happily
skip that part, except for one important thing. OpenVZ is a technology,
not a product, and was never intended to be a product.

The product, actually, a set of products, a platform, is Virtuozzo.
We build OpenVZ for the community and enthusiasts. We build Virtuozzo
on top of OpenVZ for companies whose business depends on it.

Virtuozzo is based on Linux, OpenVZ, and many other great technologies
(which we are contributing back to) and it provides everything that
OpenVZ does, plus extras such as:

* 24x7 Support
The Virtuozzo Support team is fantastic, and all they do is help customers
with Virtuozzo. I can't say enough good things about them -- they always
impress me with their professionalism (and I'm not one who is easily
impressed).

* Virtuozzo Storage
This is a software-defined storage solution, a distributed file system
that combines the local storage space on all servers into a big pool
and makes it available from every server (side benefit: no need to copy
the data during live migration). Virtuozzo Storage automatically
handles redundancy, optimizes performance, improves efficiency
and does other magical things.

* ReadyKernel
This is a kpatch-based mechanism that lets you update the kernel
without a reboot. In light of recent security holes, the possibility
to patch a vulnerability instantly, without waiting for scheduled
downtime, is truly priceless.

* More Tools, More Features
This includes, but is not limited to, backup, management web interface,
and monitoring tools.

There's more to it, but let's keep it simple.

To learn more, please see https://virtuozzo.com/openvz

Regards,
  Kir.

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Hostname issue on CentOS 7.x

2016-11-11 Thread Kir Kolyshkin

On 11/10/2016 08:33 PM, Jean-Pierre Abboud wrote:

Hello everyone,

We’re facing an issue on many CentOS 7.x containers running cPanel. Clients are 
getting emails saying that the hostnames are not valid, for example it will 
show server1   instead of the fully qualified domain name server1.domain.com

I have tried changing the hostname via cPanel or even manually but after 
rebooting it goes back to server1

The container’s openvz config does contain the appropriate HOSTNAME line
HOSTNAME=“server1.domain.com"

[root@server1 network-scripts]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost localhost4.localdomain4 localhost4
# Auto-generated hostname. Please do not remove this comment.
104.245.200.10 server1.domain.com localhost6 localhost6.localdomain6  server1 
localhost.localdomain
::1 localhost

[root@server1 network-scripts]# cat /etc/hostname
server1

[root@server1 network-scripts]# cat /proc/sys/kernel/hostname
server1

[root@server1 network-scripts]# cat /etc/sysconfig/network
NETWORKING="yes"
GATEWAYDEV="venet0"
NETWORKING_IPV6="yes"
IPV6_DEFAULTDEV="venet0"
HOSTNAME=server1.domain.com
DOMAINNAME=domain.com

[root@server1 network-scripts]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

Please note that I replaced the client’s domain and IP for privacy reasons.


It looks like correct behaviour to me, since the hostname(5)
man page says that the hostname should not contain dots,
i.e. not be an FQDN, so vzctl strips the domain name out of it.

What does "hostname -f" show? Note that the -f flag means FQDN
("fully qualified domain name", i.e. with the domain).

Kir
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ploop block size

2016-04-18 Thread Kir Kolyshkin

On 04/15/2016 06:11 PM, Nick Knutov wrote:

I think I saw it in the wiki but was unable to find now

How do I create a ploop CT with vzctl create using a smaller ploop block 
size than the default 1MB? Can I change it in some config file?




This functionality is not available from vzctl, but if you create a ploop 
image using "ploop init"
you can use the -b argument to specify the ploop block size. Here's an excerpt 
from the ploop(8) man page:

   Basic commands

   init
       Create and initialize a ploop image file and a corresponding
       DiskDescriptor.xml file.

       ploop init -s size [-f format] [-v version] [-t fstype] [-b blocksize]
                  [-B fsblocksize] [--nolazy] delta_file

       -b blocksize
              Device block size, in 512 byte sectors. Default block size
              is 2048 sectors, or 1 megabyte.

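For example, to create an image with a 512KB block size (1024 sectors of
512 bytes each) -- a sketch with a hypothetical image path:

ploop init -s 10g -b 1024 -t ext4 /vz/private/999/root.hdd/root.hdd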


So, you can experiment with various block sizes and share your findings 
with us.
I played with it a bit, but I was mostly interested in metadata overhead 
for small
(read empty) deltas, and found out that the block size of 512KB is a bit 
better
in that regard than the default 1M block size. Having said that, I 
haven't done

any performance comparisons -- obviously, block size should have a noticeable
performance impact.

Kir.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ploop mount question

2016-03-30 Thread Kir Kolyshkin

Hi Simon,

First, please use the users@ mailing list for further communication. The 
rationale behind this
is pretty simple; let me explain.

In most software projects, the number of users is way higher than 
the number of
developers. Similarly, the number of people who can ask questions is way 
higher than
the number of people who can answer (developers plus seasoned users). 
Therefore,
1:1 interaction between people in these two groups is not 
scalable; in other words,
the "answer" people cannot cover all the questions.

Tools such as mailing lists (and any other way of public communication, 
like wikis,
IRC (with logs), forums, social networks, question/answer sites like 
stackoverflow
and so on, together with search tools like Google) help to mitigate 
this problem.


That is why I am again ccing the users AT openvz.org list and again asking you 
to post
your questions to the list, so other people can benefit from my answers, 
not just you.
It's also a win for you, as other people can contribute answers, not 
just me, or

review and correct my answers.

Anyway, please subscribe to and use users@ mailing list
(https://lists.openvz.org/mailman/listinfo/users), and see my answers below.

On 03/29/2016 11:22 PM, Simon Choucroun wrote:

Hi Kir,

Thanks for the suggestions! I was able to mount with nosuid,noexec 
with your instructions.


I am contacting you again today because I think I may have found a 
small bug with ploop.


My script basically generates a ploop device with the ploop init 
command. It then keeps the ploop device id and I use that to remount 
at reboot. However I have noticed that my mounts are not working at 
reboot due to the fact that every time I use ploop mount, the ploop 
device id is randomly generated.


The assumption that the device name is persistent is incorrect. You 
should always obtain
a device name from the output of 'ploop mount' (or use other means to 
discover it).
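
For example (a sketch that relies on the "Adding delta dev=/dev/ploopNNNNN ..."
line ploop prints, as can be seen later in this message):

DEV=$(ploop mount /mounts/staging3/DiskDescriptor.xml 2>&1 |
      sed -n 's/.*dev=\(\/dev\/ploop[0-9]*\).*/\1/p' | head -n1)
echo "mounted as $DEV (partition ${DEV}p1)"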




I tried setting the name manually with ploop mount -d 
/dev/ploopstaging3 but it fails.


I believe this is no longer supported (and might need to be removed).



Another issue i discovered:

When creating a new ploop device with the ploop init command, it 
finishes by unmounting /dev/ploopID.
However, it does not seem to actually unmount it. Also, if I then do 
a ploop mount command,
it keeps the same ploop id and does not generate a new random ID.


This makes sense to me.



Please see commands below:

*ploop init -s 500g -t ext4 /mounts/staging3/staging3.hdd*

*Results*:

Creating delta /mounts/staging3/staging3.hdd bs=2048 size=1048576000 
sectors v2

Adding snapshot {5fbaabe3-6958-40ff-92a7-860e329aab41}
Storing /mounts/staging3/DiskDescriptor.xml
Opening delta /mounts/staging3/staging3.hdd
Adding delta dev=/dev/ploop48321 img=/mounts/staging3/staging3.hdd (rw)
Running: parted -s /dev/ploop48321 mklabel gpt mkpart primary 1048576b 
536869863423b
Running: mkfs -t ext4 -j -b4096 
-Eresize=4294967295,lazy_itable_init=1,lazy_journal_init=1 -Jsize=128 
-i16384 /dev/ploop48321p1
Running: mkfs -t ext4 -j -b4096 -Eresize=4294967295,lazy_itable_init=1 
-Jsize=128 -i16384 /dev/ploop48321p1

mke2fs 1.41.12 (17-May-2010)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1 blocks, Stripe width=0 blocks
32768000 inodes, 131071488 blocks
6553574 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
4000 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 
2654208,

4096000, 7962624, 11239424, 2048, 23887872, 71663616, 78675968,
10240

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Running: /sbin/tune2fs -ouser_xattr,acl -c0 -i0 -eremount-ro 
/dev/ploop48321p1

tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
Setting error behavior to 2
Setting interval between checks to 0 seconds
Creating balloon file .balloon-c3a5ae3d-ce7f-43c4-a1ea-c61e2b4504e8
Mounting /dev/ploop48321p1 at /mounts/staging3/staging3.hdd.mnt 
fstype=ext4 data=''

Unmounting device /dev/ploop48321

*ls /dev/ploop**
/dev/ploop48321  /dev/ploop48321p1

*ploop umount -d /dev/ploop48321*
Unmounting device /dev/ploop48321

*ls /dev/ploop**
/dev/ploop48321


I don't see any issue with this either. The device is in stopped state, 
and might be reused.


# cat /sys/block/ploop35205/pstate/running
0
# ls /sys/block/ploop35205/pdelta/
(nothing)
#




On 2016-03-22 12:52 AM, Kir Kolyshkin wrote:

On 03/21/2016 06:58 PM, Simon Choucroun wrote:

Hi Kir,

Sorry to e-mail you , I know that you must be really busy with VZ 
an

Re: [Users] ploop mount question

2016-03-21 Thread Kir Kolyshkin

On 03/21/2016 06:58 PM, Simon Choucroun wrote:

Hi Kir,

Sorry to e-mail you, I know that you must be really busy with VZ and 
CRIU these days but I am looking for a solution and have looked 
everywhere without any concrete answer; maybe you can help.


I am trying to create an internal product that is using ploop as the 
device image (much better than loop!).


The issue I am having is that I am trying to mount the ploop image 
with noexec,nosuid for enhanced security. When I pass it to the -o 
parameter, it errors out.


ploop mount -o nosuid,noexec -m /backup/staging 
/mounts/staging/DiskDescriptor.xml


I also checked the documentation for ploop but unfortunately there is 
no explanation or example for the -o flag.


Hi Simon,

The value of the -o option is passed directly to the mount() syscall, as 
the "data"
argument, and it might contain some fs-specific options. Here's an 
excerpt from the mount(2) man page:

   The data argument is interpreted by the different file systems.
   Typically it is a string of comma-separated options understood by
   this file system. See mount(8) for details of the options available
   for each filesystem type.

Now, options MS_NOEXEC and MS_NOSUID are not fs-specific but generic.
Unfortunately, currently there's no way to pass those to ploop command
(although it's relatively easy to add).

A workaround would be to mount the ploop as a device only, and then use the usual 
"mount"
command to actually mount the fs. Example:

[root@tpad-ovz1 root.hdd]# ploop mount DiskDescriptor.xml
Opening delta /vz/private/202/root.hdd/root.hdd
Adding delta dev=/dev/ploop32746 img=/vz/private/202/root.hdd/root.hdd (rw)

[root@tpad-ovz1 root.hdd]# mount -o noexec,nosuid /dev/ploop32746p1 mnt

As you can see, you need to figure out the ploop device (and add p1 to 
it for a partition).
You can figure it out by e.g. parsing the output of "ploop mount" or 
"ploop list".


Let me know if you have any more questions, and I am Ccing users@ list 
as there

might be some people who are also interested in that.

Kir.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] LXC in Debian 7 inside Debian 7 OVZ

2016-03-06 Thread Kir Kolyshkin



On 03/06/2016 09:00 AM, Narcis Garcia wrote:

I'm using Linux 2.6.32-openvz-042stab113.11-i386 in a Debian Wheezy
server with Debian Wheezy containers.

I've previously tried to install LXC on the hardware node, but it cannot create
anything such as /sys/fs/cgroup
(cannot create directory `/sys/fs/cgroup': No such file or directory)

Then lxc-checkconfig (at HN) notes 2 problems:
Cgroup namespace: CONFIG_CGROUP_NS missing
Cgroup memory controller: missing

Does anybody know if some of these guides are useful to use LXC inside
an OpenVZ container?
https://openvz.org/Docker_inside_CT
https://wiki.debian.org/LXC


You cannot use LXC inside an OpenVZ container. In other words, nested 
containers
are not supported, except for Docker containers.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Virtuozzo7 - ext4 and vzpkg

2016-02-12 Thread Kir Kolyshkin



On 02/12/2016 08:00 AM, Axton wrote:
Template creation fails if /var/tmp is xfs, which is the default file 
system on RHEL7.  The documentation for installation does not 
highlight this requirement for /var/tmp; it does for /vz though.  
Reference:

https://openvz.org/Quick_installation


Please file a bug stating that vzpkg caching requires /var/tmp to be ext4.



Here is the full output.  Resolved by formatting /var/tmp with ext4 
instead of xfs.


An easier way would be to symlink /var/tmp -> /vz/tmp
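
For example (a sketch; make sure nothing is actively using /var/tmp first):

mkdir -p /vz/tmp
mv /var/tmp /var/tmp.orig
ln -s /vz/tmp /var/tmp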


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ legacy on Skylake CPU / 2.6 EOL

2016-02-10 Thread Kir Kolyshkin

Better yet, check the OpenVZ plans.

From https://openvz.org/Download/kernel

> RHEL6
> ...
> EOL: Nov 2019

So you're good for quite some time.

On 02/10/2016 01:24 PM, CoolCold wrote:

Hello!
As it is based on RHEL kernels, you should really be checking RHEL's
plans for 2.6.32 support.

On Wed, Feb 10, 2016 at 11:02 PM, Volker Janzen  wrote:

Hi,

is somebody on this list running OpenVZ legacy kernel 2.6.32 on Intel Skylake 
CPUs?

As far as I know kernel 2.6.32 reaches EOL at the end of February; does this influence 
OpenVZ?


Regards
 Volker


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users





___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Quickly add/subtract space/inodes

2016-02-08 Thread Kir Kolyshkin



On 02/08/2016 12:56 PM, Scott Dowdle wrote:

Greetings,

- Original Message -

Is there an easy way to add/subtract  diskspace/inodes without
needing to know the current numbers?

For example, user is near or at their max numbers but cPanel just
released an update that requires immediate updating before they
release the full disclosure, so need to add 5G or 500k inodes to get
said upcp to run and then remove the extra space/inodes?

I did not see anything that would indicate it is possible via the
vzctl docs, but figured I would ask.

Are you talking about simfs?

You can sniff the current values out of the config and calculate the desired 
new value.  Not quite what you wanted... but still.

vzctl set {ctid} --diskinodes  {n}:{n} --save

 From the vzctl man page:

- - - - -
--diskinodes num[:num]
  sets soft and hard disk quota limits, in i-nodes. First parameter is soft 
limit, second is hard limit.

Note that this parameter is ignored for ploop layout.


...unless specified during container creation. For more details,
see https://openvz.org/Ploop/diskinodes


- - - - -

I seem to remember some form that allowed you to do math in the parameter 
value... but darn if I can find an example of that now.


Should be pretty trivial

DI_S=$(vzlist -H -o diskinodes.s $CTID)
DI_H=$(vzlist -H -o diskinodes.h $CTID)
NEW_DI_S=$((DI_S + 50))
NEW_DI_H=$((DI_H + 50))
echo "CTID $CTID: increasing diskinodes from $DI_S:$DI_H to 
$NEW_DI_S:$NEW_DI_H"

vzctl set $CTID --diskinodes $NEW_DI_S:$NEW_DI_H --save

Same for diskspace.
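
For example, to add 5 GB (a sketch, assuming vzlist exposes diskspace.s/.h
the same way it exposes diskinodes.s/.h; values are in 1K blocks):

DS_S=$(vzlist -H -o diskspace.s $CTID)
DS_H=$(vzlist -H -o diskspace.h $CTID)
vzctl set $CTID --diskspace $((DS_S + 5242880)):$((DS_H + 5242880)) --save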
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] unable to stop/suspend/kill CT

2016-02-04 Thread Kir Kolyshkin

OK, I filed a bug about this:

https://bugs.openvz.org/browse/OVZ-6678

As my workaround worked for you too, I believe you hit the same bug.
Can you check CT config for CPU-related parameters (CPUS, CPUUNITS,
and CPULIMIT)?

On 02/04/2016 01:21 PM, Bogdan-Stefan Rotariu wrote:

On 04 Feb 2016, at 21:02, Kir Kolyshkin  wrote:

Hi Bogdan,

Hi!


This looks very much like a cpu scheduler lockup, as many of the processes
belonging to the container are in R state but not running.

Can you try resetting the cpulimit for the container in question, something like

vzctl set $CTID --cpulimit 0

Hah, this was the fix; I did try all the possibilities I knew.
The CT did shut down correctly after this!


and see if anything changes?

Also, take a look at cpu.stat for some of the processes that are in such a state.
Say, this one:
root  107398  0.0  0.0  25460   396 ?Rs   12:19   0:00 vzctl exec 
111 ps

cat /proc/vz/fairsched/107398/cpu.stat

If throttled_time is big, it means my hypothesis makes sense.

I am also ccing Vladimir, who knows a thing or two about our fair cpu scheduler.

Sorry, forgot to retrieve the info before setting cpulimit

Thank you Sir, you saved me from a reboot!


Kir.

On 02/04/2016 05:48 AM, Bogdan-Stefan Rotariu wrote:

Hi there,

We are having issues with one container that cannot be stopped/suspended or 
killed; all commands remain in Sleep or Running Sleep.
Any idea how to stop this container without rebooting the main machine?
We did try to kill all processes, but they do not die.

  CTID  NPROC STATUSIP_ADDR HOSTNAME
   111100 running   a.b.c.d server.name


[3839648.976835] CPT ERR: 8803dd109000,111 :foreign process 
14243/9892(bash) inside CT (e.g. vzctl enter or vzctl exec).
[3839648.976842] CPT ERR: 8803dd109000,111 :suspend is impossible now.
[3839649.977756] CPT ERR: 8803dd109000,111 :foreign process 
14243/9892(bash) inside CT (e.g. vzctl enter or vzctl exec).
[3839649.977764] CPT ERR: 8803dd109000,111 :suspend is impossible now.
[3839650.978718] CPT ERR: 8803dd109000,111 :foreign process 
14243/9892(bash) inside CT (e.g. vzctl enter or vzctl exec).
[3839650.978726] CPT ERR: 8803dd109000,111 :suspend is impossible now.
[3839665.639557] CPT ERR: 880839216000,111 :foreign process 
14243/9892(bash) inside CT (e.g. vzctl enter or vzctl exec).
[3839665.639564] CPT ERR: 880839216000,111 :suspend is impossible now.
[3839666.640019] CPT ERR: 880839216000,111 :foreign process 
14243/9892(bash) inside CT (e.g. vzctl enter or vzctl exec).

root   19890  0.0  0.0  25460   376 ?Rs   03:34   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root   39626  0.0  0.0  25460   376 ?Rs   03:44   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root   65503  0.0  0.0  27560   412 ?Rs   11:59   0:00 vzctl enter 
111
root   65508  0.0  0.0  27560   416 ?Rs   11:59   0:00 vzctl enter 
111
root   65522  0.0  0.0  27560   416 ?Rs   11:59   0:00 vzctl enter 
111
root   73329  0.0  0.0  25460   372 ?Rs   12:00   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root   73371  0.0  0.0  25460   380 ?Rs   12:00   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root   74865  0.0  0.0  25464   408 ?Rs   12:00   0:00 vzctl stop 
111
root   75864  0.0  0.0  25464   412 ?Rs   12:04   0:00 vzctl stop 
111
root   85384  0.0  0.0  25464   404 ?Rs   12:08   0:00 vzctl stop 
111
root   96674  0.0  0.0  25464   412 ?Rs   12:12   0:00 vzctl stop 
111
root   96787  0.0  0.0  25464   408 ?Rs   12:13   0:00 vzctl stop 
111 --fast
root  107300  0.0  0.0  27560   412 ?Rs   12:18   0:00 vzctl enter 
111
root  107398  0.0  0.0  25460   396 ?Rs   12:19   0:00 vzctl exec 
111 ps
root  116638  0.0  0.0 108168  1368 ?S12:21   0:00 sh -c 
/usr/sbin/vzctl exec 111 cat /proc/meminfo | grep --max-count=1 'MemFree' | awk 
'{print $2}'
root  116639  0.0  0.0  25460  1024 ?S12:21   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  116642  0.0  0.0  25460   364 ?S12:21   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  116643  0.0  0.0  25460   384 ?Rs   12:21   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  116650  0.0  0.0  25460   384 ?Rs   12:21   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  117653  0.0  0.0  25460   380 ?Rs   12:22   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  117746  0.0  0.0 108168  1368 ?S12:22   0:00 sh -c 
/usr/sbin/vzctl exec 111 cat /proc/meminfo | grep --max-count=1 'MemFree' | awk 
'{print $2}'
root  117747  0.0  0.0 108168  1368 ?S12:22   0:00 sh -c 
/usr/sbin/vzctl exec 111 cat /proc/meminfo | grep --max-count=1 'MemFree' | awk 
'{print $2}'
root  117748 

Re: [Users] unable to stop/suspend/kill CT

2016-02-04 Thread Kir Kolyshkin

Hi Bogdan,

This looks very much like a cpu scheduler lockup, as many of the processes
belonging to the container are in R state but not running.

Can you try resetting the cpulimit for the container in question, 
something like


vzctl set $CTID --cpulimit 0

and see if anything changes?

Also, take a look at cpu.stat for some of the processes that are in such 
a state.

Say, this one:
root  107398  0.0  0.0  25460   396 ?Rs   12:19   0:00 vzctl 
exec 111 ps


cat /proc/vz/fairsched/107398/cpu.stat

If throttled_time is big, it means my hypothesis makes sense.
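
A quick way to check them all (a sketch, built around the stuck vzctl
processes listed below):

for pid in $(pgrep -f 'vzctl (exec|enter|stop) 111'); do
    echo "== $pid =="
    grep throttled_time /proc/vz/fairsched/$pid/cpu.stat 2>/dev/null
done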

I am also ccing Vladimir, who knows a thing or two about our fair cpu 
scheduler.


Kir.

On 02/04/2016 05:48 AM, Bogdan-Stefan Rotariu wrote:

Hi there,

We are having issues with one container that cannot be 
stopped/suspended or killed; all commands remain in Sleep or Running 
Sleep.

Any idea how to stop this container without rebooting the main machine?
We did try to kill all processes, but they do not die.

  CTID  NPROC STATUSIP_ADDR HOSTNAME
   111100 running   a.b.c.d server.name


[3839648.976835] CPT ERR: 8803dd109000,111 :foreign process 
14243/9892(bash) inside CT (e.g. vzctl enter or vzctl exec).
[3839648.976842] CPT ERR: 8803dd109000,111 :suspend is impossible 
now.
[3839649.977756] CPT ERR: 8803dd109000,111 :foreign process 
14243/9892(bash) inside CT (e.g. vzctl enter or vzctl exec).
[3839649.977764] CPT ERR: 8803dd109000,111 :suspend is impossible 
now.
[3839650.978718] CPT ERR: 8803dd109000,111 :foreign process 
14243/9892(bash) inside CT (e.g. vzctl enter or vzctl exec).
[3839650.978726] CPT ERR: 8803dd109000,111 :suspend is impossible 
now.
[3839665.639557] CPT ERR: 880839216000,111 :foreign process 
14243/9892(bash) inside CT (e.g. vzctl enter or vzctl exec).
[3839665.639564] CPT ERR: 880839216000,111 :suspend is impossible 
now.
[3839666.640019] CPT ERR: 880839216000,111 :foreign process 
14243/9892(bash) inside CT (e.g. vzctl enter or vzctl exec).


root   19890  0.0  0.0  25460   376 ?Rs   03:34   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root   39626  0.0  0.0  25460   376 ?Rs   03:44   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root   65503  0.0  0.0  27560   412 ?Rs   11:59   0:00 
vzctl enter 111
root   65508  0.0  0.0  27560   416 ?Rs   11:59   0:00 
vzctl enter 111
root   65522  0.0  0.0  27560   416 ?Rs   11:59   0:00 
vzctl enter 111
root   73329  0.0  0.0  25460   372 ?Rs   12:00   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root   73371  0.0  0.0  25460   380 ?Rs   12:00   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root   74865  0.0  0.0  25464   408 ?Rs   12:00   0:00 
vzctl stop 111
root   75864  0.0  0.0  25464   412 ?Rs   12:04   0:00 
vzctl stop 111
root   85384  0.0  0.0  25464   404 ?Rs   12:08   0:00 
vzctl stop 111
root   96674  0.0  0.0  25464   412 ?Rs   12:12   0:00 
vzctl stop 111
root   96787  0.0  0.0  25464   408 ?Rs   12:13   0:00 
vzctl stop 111 --fast
root  107300  0.0  0.0  27560   412 ?Rs   12:18   0:00 
vzctl enter 111
root  107398  0.0  0.0  25460   396 ?Rs   12:19   0:00 
vzctl exec 111 ps
root  116638  0.0  0.0 108168  1368 ?S12:21   0:00 sh 
-c /usr/sbin/vzctl exec 111 cat /proc/meminfo | grep --max-count=1 
'MemFree' | awk '{print $2}'
root  116639  0.0  0.0  25460  1024 ?S12:21   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  116642  0.0  0.0  25460   364 ?S12:21   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  116643  0.0  0.0  25460   384 ?Rs   12:21   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  116650  0.0  0.0  25460   384 ?Rs   12:21   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  117653  0.0  0.0  25460   380 ?Rs   12:22   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  117746  0.0  0.0 108168  1368 ?S12:22   0:00 sh 
-c /usr/sbin/vzctl exec 111 cat /proc/meminfo | grep --max-count=1 
'MemFree' | awk '{print $2}'
root  117747  0.0  0.0 108168  1368 ?S12:22   0:00 sh 
-c /usr/sbin/vzctl exec 111 cat /proc/meminfo | grep --max-count=1 
'MemFree' | awk '{print $2}'
root  117748  0.0  0.0  25460  1016 ?S12:22   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  117749  0.0  0.0  25460  1020 ?S12:22   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  117754  0.0  0.0  25460   360 ?S12:22   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  117755  0.0  0.0  25460   356 ?S12:22   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  117756  0.0  0.0  25460   380 ?Rs   12:22   0:00 
/usr/sbin/vzctl exec 111 cat /proc/meminfo
root  117757  0.0  0.0  25460   376 ?Rs   12:

Re: [Users] Got V7 NAT for containers figured out... sorta

2016-01-28 Thread Kir Kolyshkin



On 01/27/2016 05:17 PM, Scott Dowdle wrote:

Greetings,

So following this wiki page:
https://wiki.openvz.org/Using_NAT_for_container_with_private_IPs

I noticed that /etc/modprobe.d/parallels.conf needed to be edited to change 
ip_conntrack_disable_ve0=1 to ip_conntrack_disable_ve0=0.

Then my SNAT rule worked:
/usr/sbin/iptables -t nat -A POSTROUTING -s 192.168.0.1/24 -o br0 -j SNAT --to 
{host-ip-address}

I put that rule in /etc/rc.local and rebooted... but it doesn't seem to take 
effect unless manually run post boot.


I think systemd no longer runs /etc/rc.local.

Most probably you need to figure out the firewalld configuration, as 
firewalld is used by RHEL/CentOS 7 by default.

Once you figure this out, you are very welcome to share the knowledge 
on the wiki!
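
An untested sketch using firewalld's direct rules (the default firewall on
RHEL/CentOS 7), mirroring the iptables rule quoted above:

firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 \
    -s 192.168.0.1/24 -o br0 -j SNAT --to-source {host-ip-address}
firewall-cmd --reload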


Kir.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Taking Virtuozzo 7 for a spin

2016-01-27 Thread Kir Kolyshkin



On 01/27/2016 04:37 PM, Scott Dowdle wrote:

Greetings,

Following the V7 development info today, I decided to give the Beta 3 build a 
try... and did a fresh install.

The install went great and I gave my V7 host a public IP address.  I don't have 
any other public IP addresses to play with at the moment so I decided to make a 
container and give it a 192.168.0.x address.  It is not routing to the outside 
world yet and I'm not sure what the problem is.

Looking here I see:

# cat /proc/sys/net/ipv4/ip_forward
1

I tried doing this:
# iptables -t nat -A POSTROUTING -s 192.168.0.1/24 -o br0 -j SNAT --to 
{host-ip-address}


As per https://openvz.org/NAT, you need to enable NAT for the host 
system, i.e.:


1. grep ip_conntrack_disable_ve0 /etc/modprobe.d/*

2. Make sure it is set to 0

3. Reboot (or unload netfilter/iptables modules and load them again).

It is probably disabled by default as not everyone is using it,
and it slows down the host networking (not too much, but enough
to notice for gigabit Ethernet or faster speeds).
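
A sketch of steps 1-3 above (the file name is the one reported earlier in
this thread):

grep -r ip_conntrack_disable_ve0 /etc/modprobe.d/
sed -i 's/ip_conntrack_disable_ve0=1/ip_conntrack_disable_ve0=0/' /etc/modprobe.d/parallels.conf
reboot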



But that yields:
iptables v1.4.21: can't initialize iptables table `nat': Table does not exist 
(do you need to insmod?)

Looking at lsmod's output:
# lsmod | grep nat
iptable_nat12875  0
nf_nat_ipv414115  1 iptable_nat
nf_nat 26146  1 nf_nat_ipv4
nf_conntrack  105843  4 
nf_nat,nf_nat_ipv4,xt_conntrack,nf_conntrack_ipv4
ip_tables  27239  3 iptable_filter,iptable_mangle,iptable_nat

I'm not sure what I'm doing wrong... and looking in the sizable documentation 
(http://docs.openvz.org/) has not been fruitful.

Anyone have a clue what I need to do to make it NAT my private container?

TYL,


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] What's up with the missing numpy in V7?

2016-01-21 Thread Kir Kolyshkin

On 01/21/2016 08:29 AM, Scott Dowdle wrote:

Greetings,

- Original Message -

Thank you for highlighting this issue, answered in the bug.
In short - we base on top of our own Virtuozzo Linux repository and
forgot that users install our packages somewhere else.
We'll think what can be done there.

What are you talking about?  The system I'm having the issue on is a pure 
Virtuozzo 7 Beta system using the repos provided by Virtuozzo 7.  I may have 
added EPEL but that would make more packages available, not less.  Perhaps I 
misunderstood... but you made it sound like I was using V7 packages on say... a 
stock CentOS install and I am not.

TYL,


Scott,

I think the problem is with the CloudLinux repo (which we no longer use) 
replacing the CentOS repo.
I found this on one machine running an early VZ7 beta, and had to do the 
following to rectify it:


wget 
http://mirrors.kernel.org/centos/7/os/x86_64/Packages/centos-release-7-2.1511.el7.centos.2.10.x86_64.rpm

rpm -e --nodeps cloudlinux-release
rpm -ihv centos-release-7-2.1511.el7.centos.2.10.x86_64.rpm
yum update

An alternative, of course, is to use VzLinux, which is sort of another RHEL 
clone (or rebuild).

From your later email I deduce this is what you did.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Warnings treated as errors

2016-01-06 Thread Kir Kolyshkin



On 01/06/2016 02:00 AM, dptr...@arcor.de wrote:

Hello everyone

Could you please tell me which compiler and which version you use to build ploop, 
vzctl, and vzquota? I am using GCC 5.3 and get a lot of warnings (inline, ...) 
which are treated as errors due to -Werror in CFLAGS.


I just tried to build git versions of ploop, vzctl, and vzquota (from 
legacy OpenVZ)
using gcc-5.3.1 from Fedora 23, and I see zero warnings or other such 
issues.


So, please give us some more details.

Kir.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OpenStack on Virtuozzo 7, missing file centos7-exe.hds.tar.gz

2015-12-10 Thread Kir Kolyshkin



On 12/10/2015 09:23 AM, Björn Lindgren wrote:

Hi,

I'm in the process of installing OpenStack on top of Virtuozzo 7.

Following the guide at https://openvz.org/Setup_OpenStack_with_Virtuozzo_7

The link to download file centos7-exe.hds.tar.gz does not work. The 
web server returns 404.


I just checked and the link works for me:

[kir@kir-tpad ~]$ wget 
http://updates.virtuozzo.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz
--2015-12-10 11:23:52-- 
http://updates.virtuozzo.com/server/virtuozzo/en_us/odin/7/techpreview-ct/centos7-exe.hds.tar.gz

Resolving updates.virtuozzo.com (updates.virtuozzo.com)... 72.21.81.253
Connecting to updates.virtuozzo.com 
(updates.virtuozzo.com)|72.21.81.253|:80... connected.

HTTP request sent, awaiting response... 200 OK
Length: 215449059 (205M) [application/octet-stream]
Saving to: ‘centos7-exe.hds.tar.gz’

centos7-exe.hds.tar   3%[  ]   7.91M 
9.88MB/s ^C


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ipv6 connectivity to http://ftp.openvz.org

2015-11-18 Thread Kir Kolyshkin

It seems it's working now. If it's not working for you, please provide
a traceroute6 output.

Kir

On 11/13/2015 07:03 AM, CoolCold wrote:

Hello!
Looks like ftp.openvz.org resolves via IPv6 too, but it is not working well:

root@mu2 /home/coolcold # wget -O -
'http://ftp.openvz.org/debian/archive.key'|apt-key add -
--2015-11-13 17:59:40--  http://ftp.openvz.org/debian/archive.key
Resolving ftp.openvz.org (ftp.openvz.org)... 2620:e6::104:11, 199.115.104.11
Connecting to ftp.openvz.org (ftp.openvz.org)|2620:e6::104:11|:80...
failed: Connection timed out.
Connecting to ftp.openvz.org (ftp.openvz.org)|199.115.104.11|:80... connected.


checking with telnet:
root@mu2 /home/coolcold # telnet -6 ftp.openvz.org 80
Trying 2620:e6::104:11...
telnet: Unable to connect to remote host: Connection timed out


and yandex is ok:
coolcold@mu2:~/gits/flashcache$ telnet -6 www.ya.ru 80
Trying 2a02:6b8::3...
Connected to ya.ru.
Escape character is '^]'.
^]
telnet> quit
Connection closed.


should you disable ipv6 resolve?



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ipv6 connectivity to http://ftp.openvz.org

2015-11-13 Thread Kir Kolyshkin

On 11/13/2015 07:03 AM, CoolCold wrote:

Hello!
Looks like ftp.openvz.org resolves via IPv6 too, but it is not working well:


Hi,

We are aware of the problem and have let the admins know; we are still waiting for 
them to reply.


Anyway, thanks for reporting

Kir.



root@mu2 /home/coolcold # wget -O -
'http://ftp.openvz.org/debian/archive.key'|apt-key add -
--2015-11-13 17:59:40--  http://ftp.openvz.org/debian/archive.key
Resolving ftp.openvz.org (ftp.openvz.org)... 2620:e6::104:11, 199.115.104.11
Connecting to ftp.openvz.org (ftp.openvz.org)|2620:e6::104:11|:80...
failed: Connection timed out.
Connecting to ftp.openvz.org (ftp.openvz.org)|199.115.104.11|:80... connected.


checking with telnet:
root@mu2 /home/coolcold # telnet -6 ftp.openvz.org 80
Trying 2620:e6::104:11...
telnet: Unable to connect to remote host: Connection timed out


and yandex is ok:
coolcold@mu2:~/gits/flashcache$ telnet -6 www.ya.ru 80
Trying 2a02:6b8::3...
Connected to ya.ru.
Escape character is '^]'.
^]
telnet> quit
Connection closed.


should you disable ipv6 resolve?



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OVZ 7 beta - ploop backup fully functional?

2015-11-04 Thread Kir Kolyshkin



On 11/04/2015 10:16 AM, jjs - mainphrame wrote:

Greetings,

I'm still running OVZ 7 here on 2 machines, one running beta and the 
other running factory. vzctl snapshot hasn't worked on either one, but 
I test at intervals to check the status. The factory machine got some 
updates today, and so I tried again, and things seemed to get a bit 
farther before it failed. An excerpt from near the end of the dump log 
contained some clues:


(00.159708) Dumping ghost file for fd 26 id 0x7c
(00.159712) Error (files-reg.c:422): Can't dump ghost file 
/usr/lib64/libnss3.so;56314442 of 1220096 size, increase limit


For this particular thing there might be a couple of issues filed and 
being worked on, please

search Jira.

The ghost file limit is set in CRIU by

  --ghost-limit size    specify maximum size of deleted file contents
                        to be carried inside an image file


Although I'm not aware how to pass this flag from vzctl.

Also I saw some on-going work about migrating such deleted files outside 
of the image (to improve the frozen time during migration).

Again, it's either on Jira or on devel@ list (or perhaps both).

(00.159718) Error (cr-dump.c:1314): Dump mappings (pid: 14025) failed 
with -1


Are there any environment variables or command line switches I could 
apply which could affect this "limit" hinted at above?


BTW - criu doesn't seem entirely happy with the ovz 7 kernel:

[root@annie ~]# uname -a
Linux annie.mainphrame.net  
3.10.0-229.7.2.vz7.9.1 #1 SMP Wed Oct 21 17:55:13 MSK 2015 x86_64 
x86_64 x86_64 GNU/Linux

[root@annie ~]# criu check
Error (cr-check.c:602): Kernel doesn't support PTRACE_O_SUSPEND_SECCOMP


This shouldn't bother you as far as I understand as seccomp is not being 
used.



Error (cr-check.c:719): fdinfo doesn't contain the lock field
Error (cr-check.c:749): CLONE_PARENT | CLONE_NEWPID don't work together


For this I hope CRIU guys can shed more light (ccing CRIU list)


[root@annie ~]#

Am I tilting at windmills here, or is there some expectation that any 
of this should be working?


Regards,

Joe







On Wed, Oct 28, 2015 at 2:31 PM, jjs - mainphrame  wrote:


Greetings,

I've been running some test containers on OVZ7 beta, and, while I
realize not all functionality is in place yet, they have been so
dependable that I'm starting to depend on them.

So, understandably, I was looking at backups, and my first tries
with vzctl snapshot failed. I don't want to spend a lot of time
troubleshooting something that's not yet known to be working, but
in case it is, has anyone else had better luck?


Here is the result of my attempt to make a backup:

[root@hachi vz]# vzctl snapshot 1001
Creating snapshot {57c8aa0d-1c04-4469-9d1e-c95fab04bd01}
Storing /local/vz/private/1001/Snapshots.xml.tmp
Setting up checkpoint...
Failed to checkpoint the Container
All dump files and logs were saved to
/local/vz/private/1001/dump/{57c8aa0d-1c04-4469-9d1e-c95fab04bd01}.fail
Checkpointing failed
Failed to create snapshot
[root@hachi vz]#


The last few lines of the dump log contain these clues:

(00.570576) Error (proc_parse.c:404): Can't handle non-regular
mapping on 8366's map 7f2164d63000
(00.570582) Error (cr-dump.c:1487): Collect mappings (pid: 8366)
failed with -1
(00.570746) Unlock network
(00.570751) Running network-unlock scripts


There is no pid 8366 in the CT but on the host that pid
corresponds to the mysql instance running in the container that
I'm trying to snapshot:

[root@hachi vz]# ps -ef|grep 8366
27  83667098  0 14:03 ?00:00:00
/usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql
--plugin-dir=/usr/lib64/mysql/plugin
--log-error=/var/log/mariadb/mariadb.log
--pid-file=/var/run/mariadb/mariadb.pid
--socket=/var/lib/mysql/mysql.sock

So, is this something that ought to work?

Regards,

Joseph




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OVZ 7 beta - ploop backup fully functional?

2015-10-29 Thread Kir Kolyshkin



On 10/28/2015 06:07 PM, jjs - mainphrame wrote:

Hi Kir,

Are you sure that's the right list? I don't see any users there, it's 
all hard core dev, posting patches.


Definitely. Unlike with OpenVZ, where we have users@ and devel@, there's 
only
one mailing list for CRIU, and among other things it is also used to 
get support.




Before I take the plunge and raise my hand there, I was curious to 
know if any OVZ 7 user had actually gotten vzctl snapshot to work.


Joe

On Wed, Oct 28, 2015 at 4:17 PM, Kir Kolyshkin  wrote:


On 10/28/2015 02:31 PM, jjs - mainphrame wrote:

Greetings,

I've been running some test containers on OVZ7 beta, and, while I
realize not all functionality is in place yet, they have been so
dependable that I'm starting to depend on them.

So, understandably, I was looking at backups, and my first tries
with vzctl snapshot failed. I don't want to spend a lot of time
troubleshooting something that's not yet known to be working, but
in case it is, has anyone else had better luck?


Here is the result of my attempt to make a backup:

[root@hachi vz]# vzctl snapshot 1001
Creating snapshot {57c8aa0d-1c04-4469-9d1e-c95fab04bd01}
Storing /local/vz/private/1001/Snapshots.xml.tmp
Setting up checkpoint...
Failed to checkpoint the Container
All dump files and logs were saved to
/local/vz/private/1001/dump/{57c8aa0d-1c04-4469-9d1e-c95fab04bd01}.fail
Checkpointing failed
Failed to create snapshot
[root@hachi vz]#


The last few lines of the dump log contain these clues:

(00.570576) Error (proc_parse.c:404): Can't handle non-regular
mapping on 8366's map 7f2164d63000
(00.570582) Error (cr-dump.c:1487): Collect mappings (pid: 8366)
failed with -1
(00.570746) Unlock network
(00.570751) Running network-unlock scripts


There is no pid 8366 in the CT but on the host that pid
corresponds to the mysql instance running in the container that
I'm trying to snapshot:

[root@hachi vz]# ps -ef|grep 8366
27  83667098  0 14:03 ?  00:00:00 /usr/libexec/mysqld
--basedir=/usr --datadir=/var/lib/mysql
--plugin-dir=/usr/lib64/mysql/plugin
--log-error=/var/log/mariadb/mariadb.log
--pid-file=/var/run/mariadb/mariadb.pid
--socket=/var/lib/mysql/mysql.sock

So, is this something that ought to work?



Hi Joe,

You might want to try writing to criu list with this question,
they might have a clue.
See https://lists.openvz.org/mailman/listinfo/criu

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] migrate fails when fuse mounted

2015-10-28 Thread Kir Kolyshkin



On 10/28/2015 11:56 AM, Nick Knutov wrote:

Yes, this way works, but requires manual actions.

That is bad in my case - I'm trying to migrate CTs across nodes for 
transparent load balancing.


It's just a shot in the dark but you might try VZ7 beta with the same setup.
It uses CRIU for checkpointing and live migration and it might support 
fuse as well.


Kir.



On 28.10.2015 14:48, Сергей Мамонов wrote:

Hello.

Suspending containers does not work correctly with fuse.
You can try unmounting sshfs and trying the migration again.

2015-10-28 12:22 GMT+03:00 Nick Knutov >:


Hello all,

I have CT with sshfs mounted. When I tried to migrate this CT I got:

Starting live migration of CT ... to ...
OpenVZ is running...
 Checking for CPT version compatibility
 Checking for CPU flags compatibility
Error: Unsupported filesystem fuse
Insufficient CPU capabilities: can't migrate
Error: CPU capabilities check failed!
Error: Destination node CPU is not compatible
Error: Can't continue live migration

Should it be so? What can be done about this?

Thanks

-- 
Best Regards,

Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130


___
Users mailing list
Users@openvz.org 
https://lists.openvz.org/mailman/listinfo/users




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OVZ 7 beta - ploop backup fully functional?

2015-10-28 Thread Kir Kolyshkin

On 10/28/2015 02:31 PM, jjs - mainphrame wrote:

Greetings,

I've been running some test containers on OVZ7 beta, and, while I 
realize not all functionality is in place yet, they have been so 
dependable that I'm starting to depend on them.


So, understandably, I was looking at backups, and my first tries with 
vzctl snapshot failed. I don't want to spend a lot of time 
troubleshooting something that's not yet known to be working, but in 
case it is, has anyone else had better luck?



Here is the result of my attempt to make a backup:

[root@hachi vz]# vzctl snapshot 1001
Creating snapshot {57c8aa0d-1c04-4469-9d1e-c95fab04bd01}
Storing /local/vz/private/1001/Snapshots.xml.tmp
Setting up checkpoint...
Failed to checkpoint the Container
All dump files and logs were saved to 
/local/vz/private/1001/dump/{57c8aa0d-1c04-4469-9d1e-c95fab04bd01}.fail

Checkpointing failed
Failed to create snapshot
[root@hachi vz]#


The last few lines of the dump log contain these clues:

(00.570576) Error (proc_parse.c:404): Can't handle non-regular mapping 
on 8366's map 7f2164d63000
(00.570582) Error (cr-dump.c:1487): Collect mappings (pid: 8366) 
failed with -1

(00.570746) Unlock network
(00.570751) Running network-unlock scripts


There is no pid 8366 in the CT but on the host that pid corresponds to 
the mysql instance running in the container that I'm trying to snapshot:


[root@hachi vz]# ps -ef|grep 8366
27  83667098  0 14:03 ?00:00:00 
/usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql 
--plugin-dir=/usr/lib64/mysql/plugin 
--log-error=/var/log/mariadb/mariadb.log 
--pid-file=/var/run/mariadb/mariadb.pid --socket=/var/lib/mysql/mysql.sock


So, is this something that ought to work?



Hi Joe,

You might want to try writing to criu list with this question, they 
might have a clue.

See https://lists.openvz.org/mailman/listinfo/criu
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] migrate fails when fuse mounted

2015-10-28 Thread Kir Kolyshkin

It's just a shot in the dark but you might try VZ7 beta with the same setup.
It uses CRIU for checkpointing and live migration and it might support 
fuse as well.


Kir.

On 10/28/2015 11:56 AM, Nick Knutov wrote:

Yes, this way works, but requires manual actions.

That is bad in my case - I'm trying to migrate CTs across nodes for 
transparent load balancing.



On 28.10.2015 14:48, Сергей Мамонов wrote:

Hello.

Suspending containers does not work correctly with fuse.
You can try unmounting sshfs and trying the migration again.

2015-10-28 12:22 GMT+03:00 Nick Knutov >:


Hello all,

I have CT with sshfs mounted. When I tried to migrate this CT I got:

Starting live migration of CT ... to ...
OpenVZ is running...
 Checking for CPT version compatibility
 Checking for CPU flags compatibility
Error: Unsupported filesystem fuse
Insufficient CPU capabilities: can't migrate
Error: CPU capabilities check failed!
Error: Destination node CPU is not compatible
Error: Can't continue live migration

Should it be so? What can be done about this?

Thanks

-- 
Best Regards,

Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130


___
Users mailing list
Users@openvz.org 
https://lists.openvz.org/mailman/listinfo/users






--
Best Regards,
Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users




Re: [Users] split traffic between two venet

2015-10-27 Thread Kir Kolyshkin



On 10/27/2015 12:26 AM, kna...@gmail.com wrote:

Hello!

I wonder if it is possible to implement the following scenario using
a venet device rather than a veth device.
There is a server with two network interfaces, eth0 and eth1. eth0 is
connected to a public network, eth1 - to a private one. There is also a
venet0 interface on that host. A CT running on that host has two venet
interfaces - venet0:0 and venet0:1.
I need to route all traffic from/to the first venet interface inside the CT
(i.e. venet0:0) to eth0, and from the second one (venet0:1) - to
eth1, i.e. to completely split public and private traffic.
Maybe there is a way to add one more venetX device on the physical
server (in addition to the already existing venet0) and link/map
them as below:

eth0 <-> venet0 <-> venet0:0
eth1 <-> venet1 <-> venet1:0

Or maybe it's possible to somehow do the same with just a single venet0?


Traffic is routed according to routing tables. For example, with the 
following setup


On the host:
eth0 112.3.4.5/24
eth1 10.1.2.3/8
default route via eth0

And a container with two venet IPs:
112.3.4.22/24
10.1.3.22/8

Then the traffic to 10.0.0.0/8 will go via eth1, and the rest will go 
via eth0.


In other words, you don't have to do anything special about it, just make
sure you specify the network masks when assigning IPs.
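
For the setup above that would be something like this (a sketch; assuming your
vzctl version accepts the /prefix netmask suffix on --ipadd, as implied above):

vzctl set $CTID --ipadd 112.3.4.22/24 --save
vzctl set $CTID --ipadd 10.1.3.22/8 --save
# verify on the host which interface a given destination will use
ip route get 10.1.2.100    # should show dev eth1 for the private network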
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] recalculate quota

2015-10-08 Thread Kir Kolyshkin



On 10/08/2015 11:18 AM, Nick Knutov wrote:
In my case destination host is usually the same, but ve_private is 
different.


It doesn't matter.


It seems there is no way to turn on a second quota in that case.


Why? Check the vzquota command. You need to create a new simfs mount
and enable vzquota for it (under a different quota ID, of course).


Maybe a hack with changing the CT ID for the copy could help,
but it must be changed back again, and that can bring problems.


You don't need to change the CTID, you just need another quota ID.



In a more general form: I want to recalculate quotas in case some
files in ve_private were removed from the node side.


You are trying to fix a problem you could have prevented by NOT removing
any files from /vz/private while a container is running.


In a less general form: I want to move a CT from one ve_private to another,
skipping some files. I want to do it like a live migration, with near-zero
downtime.


As described above, enable quota before rsync.
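
A rough sketch of that, with placeholder limits (the exact vzquota options
should be checked against the vzquota man page; $QID is a quota ID not used by
any other container):

# on the destination, before copying the data
vzquota init $QID -p $NEW_VE_PRIVATE -b $BLOCK_SOFT -B $BLOCK_HARD \
    -i $INODE_SOFT -I $INODE_HARD
vzquota on $QID
# now copy; usage is accounted as the files arrive
rsync -a --exclude 'cache/' $OLD_VE_PRIVATE/ $NEW_VE_PRIVATE/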




08.10.2015 20:29, Kir Kolyshkin wrote:

Case from real life:

vzmigrate (or vzmove, which I plan to release soon) with an exclude filter
for rsync, to exclude hundreds of gigabytes of cache files.


This case is different from what you asked about.

You can turn on quota on destination host before running rsync,
and as you copy the files quota is calculated.

Kir.




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] recalculate quota

2015-10-08 Thread Kir Kolyshkin



On 10/08/2015 06:00 AM, Nick Knutov wrote:

Is it possible to recalculate quota without stopping vds and vzquota drop?


No




Case from real life:

vzmigrate (or vzmove, which I plan to release soon) with an exclude filter
for rsync, to exclude hundreds of gigabytes of cache files.


This case is different from what you asked about.

You can turn on quota on destination host before running rsync,
and as you copy the files quota is calculated.

Kir.

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Website trackers (openvz.org)

2015-09-29 Thread Kir Kolyshkin



On 09/29/2015 05:39 AM, Sergey Bronnikov wrote:

Hi, Narcis

'Like' buttons are useful for those who want to post a link to a page
on social networks. But in my experience only a small number
of people use them.
We decided to remove the 'Like' buttons from OpenVZ wiki pages.
They are kept only on pages for old vzquota/ploop/vzctl releases -
https://openvz.org/Special:WhatLinksHere/Template:Like


In fact I don't see "Like" buttons on pages in this list.

I googled around and found that running the refreshLinks script [1]
might help. Indeed it helped, although it took about 5 minutes to run!

After this, I only saw vzquota and vzstats download pages using Like,
so I edited appropriate templates and removed those, too.

As of now, no pages have this template used, and I removed the
template itself as well as the relevant mediawiki plugin (which btw
was installed exactly 4 years ago).

Note we are still using Google Analytics though.

[1] https://www.mediawiki.org/wiki/Manual:RefreshLinks.php



Sergey

On 16:36 Sat 26 Sep , Narcis Garcia wrote:

Sometimes it's time to think about web services, not only about my individual
needs. It's time to think in terms of the Free Software (FOSS) model, not only
free beer - a model that respects users.
I know those HTTP client-side tools; they are very useful for expert web users.

I hope this debate produces something more constructive than
obstructive, to improve things and set a good example publicly.
If I didn't appreciate this project, I wouldn't explain all this to
help enhance openvz.org


On 26/09/15 at 16:06, spameden wrote:

there are a bunch of plugins for Firefox to mitigate this:

AdBlock Plus, Ghostery, NoScript.

Check 'em out.

2015-09-26 16:11 GMT+03:00 Narcis Garcia <informat...@actiu.net>:

 Are you "internet"?
 Or do you have only your own cookies?

 Welcome to the 21st-century internet: we are used as a product by
 companies unrelated to the websites we visit.


 On 25/09/15 at 19:30, Todd Mueller wrote:
 > Welcome to the Internet, we have cookies.
 >
 > On Fri, Sep 25, 2015 at 10:09 AM, Narcis Garcia <informat...@actiu.net> wrote:
 >> Off topic:
 >> This wiki is served with, at least, 4 spyware elements:
 >> - Facebook connect
 >> - Google analytics
 >> - Google+ platform
 >> - Twitter button
 >>
 >>
 >>> On 25/09/15 at 13:54, Sergey Bronnikov wrote:
 >>> Hello, everyone
 >>>
 >>> some of you asked about the feature set of Virtuozzo 7 and the difference
 >>> between the free
 >>> and commercial versions.  We have prepared a table with a feature
 >>> comparison for
 >>> OpenVZ -stable, Virtuozzo 7 and other virtualization solutions -
 >>> https://openvz.org/Comparison
 >>>
 >>> Note that it is not the final feature set; some features can
 >>> be added in
 >>> the future before the final release.
 >>>
 >>>
 >>> --
 >>> sergeyb@

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users




Re: [Users] criu check doesn't like the openvz kernel

2015-09-12 Thread Kir Kolyshkin

It's better to ask this on the CRIU list (adding it).

On 09/12/2015 04:59 PM, jjs - mainphrame wrote:

Greetings,

I gather there are still some fixes to be applied before migration 
will work properly. Just out of curiosity, do any or all of the issues 
below constitute a show stopper?


[root@hachi ~]# criu check
prctl: PR_SET_MM_MAP is not supported, which is required for restoring 
user namespaces

Error (cr-check.c:602): Kernel doesn't support PTRACE_O_SUSPEND_SECCOMP
Error (cr-check.c:703): AIO remap doesn't work properly
Error (cr-check.c:719): fdinfo doesn't contain the lock field
Error (cr-check.c:749): CLONE_PARENT | CLONE_NEWPID don't work together
[root@hachi ~]# uname -a
Linux hachi.mainphrame.net  
3.10.0-123.1.2.vz7.5.29 #1 SMP Fri Jul 24 18:45:12 MSK 2015 x86_64 
x86_64 x86_64 GNU/Linux

[root@hachi ~]#


Joe


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users




Re: [Users] [Announce] Virtuozzo 7 unstable branch

2015-09-12 Thread Kir Kolyshkin



On 09/11/2015 05:16 PM, jjs - mainphrame wrote:
Meh, I decided to remove the old factory repo file completely and 
force install again. That made it happy.


I guess in such cases you need to run "yum clean metadata" or "yum clean 
all".




Now carrying on as before

Joe

On Fri, Sep 11, 2015 at 4:57 PM, jjs - mainphrame <j...@mainphrame.com> wrote:


Hi Kir,

The problem remains -

[root@hachi ~]# for f in /etc/yum.repos.d/*repo; do printf '%s\t'
$f; rpm -qf $f; done
/etc/yum.repos.d/cloudlinux.repo
 cloudlinux-release-7.1-0.1.el7.x86_64
/etc/yum.repos.d/cloudlinux-virtuozzo.repo
 virtuozzo-release-7.0.0-13.vz7.x86_64
/etc/yum.repos.d/factory.repo virtuozzo-release-7.0.0-13.vz7.x86_64
/etc/yum.repos.d/virtuozzo.repo virtuozzo-release-7.0.0-13.vz7.x86_64
[root@hachi ~]#


... and ...


http://kojistorage.eng.sw.ru/mash/latest/latest/x86_64/os/repodata/repomd.xml:
[Errno 14] curl#6 - "Could not resolve host: kojistorage.eng.sw.ru
<http://kojistorage.eng.sw.ru>; Name or service not known"
Trying other mirror.
Loading mirror speeds from cached hostfile
 * virtuozzo-os: download.openvz.org <http://download.openvz.org>
 * virtuozzo-updates: download.openvz.org <http://download.openvz.org>
297 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package rsync.x86_64 0:3.0.9-15.el7 will be updated
---> Package rsync.x86_64 0:3.0.9-15.vz7.7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved



 PackageArchVersionRepositorySize


Updating:
 rsync  x86_64  3.0.9-15.vz7.7 factory
 360 k


Transaction Summary


Upgrade  1 Package

Total download size: 360 k
Downloading packages:
No Presto metadata available for factory
rsync-3.0.9-15.vz7.7.x86_64.rp FAILED

http://kojistorage.eng.sw.ru/mash/latest/latest/x86_64/os/Packages/r/rsync-3.0.9-15.vz7.7.x86_64.rpm:
[Errno 14] curl#6 - "Could not resolve host: kojistorage.eng.sw.ru
<http://kojistorage.eng.sw.ru>; Name or service not known"
Trying other mirror.


Error downloading packages:
  rsync-3.0.9-15.vz7.7.x86_64: [Errno 256] No more mirrors to try.

- it was installed before, but yum distro-sync had removed it.

Either way, the problem remains: kojistorage.eng.sw.ru
<http://kojistorage.eng.sw.ru> is unresolvable, and is the only
mirror listed:


[root@hachi ~]# cat /etc/yum.repos.d/factory.repo
# These repositories are for internal use by developers only
# Enable them on your own risk!!!

[factory]
name=Build Factory packages for Containers
baseurl=http://kojistorage.eng.sw.ru/mash/latest/latest/x86_64/os/
enabled=1
gpgcheck=0

[factory-debuginfo]
name=Debug packages for Containers from Build Factory
baseurl=http://kojistorage.eng.sw.ru/debug/latest/
enabled=0
gpgcheck=0
[root@hachi ~]#

Joe

On Fri, Sep 11, 2015 at 4:30 PM, Kir Kolyshkin <k...@openvz.org> wrote:



On 09/11/2015 03:43 PM, jjs - mainphrame wrote:

Hi Kirill -

I've only ever gotten any of the packages from
http://download.openvz.org; I've installed the package
"virtuozzo-release-7.0.0-13.vz7.x86_64.rpm" (linked in the
announcement at the head of this thread) on an existing OVZ 7
pre-release.


Looking at the output below, it seems you still have the older
virtuozzo-release-7.0.0-11.vz7.x86_64 installed, not 7.0.0-13.
Please upgrade virtuozzo-release to the one from
http://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/
and retry.




Here is the command output:

[root@hachi ~]# for f in /etc/yum.repos.d/*repo; do printf
'%s\t' $f; rpm -qf $f; done
/etc/yum.repos.d/cloudlinux.repo
 cloudlinux-release-7.1-0.1.el7.x86_64
/etc/yum.repos.d/cloudlinux-virtuozzo.repo
 virtuozzo-release-7.0.0-11.vz7.x86_64
/etc/yum.repos.d/factory.repo
virtuozzo-release-7.0.0-11.vz7.x86_64
/etc/yum.repos.d/virtuozzo.repo
virtuozzo-release-7.0.0-11.vz7.x86_64
[root@hachi ~]#






Joe

On Fri, Sep 11, 2015 at 3:18 PM, Kirill Kolyshkin <kolysh...@gmail.com> wrote:

Where did you get it from? Please provide the output of

for f in /etc/yum.repos.d/*repo; do printf 

Re: [Users] [Announce] Virtuozzo 7 unstable branch

2015-09-11 Thread Kir Kolyshkin



On 09/11/2015 03:43 PM, jjs - mainphrame wrote:

Hi Kirill -

I've only ever gotten any of the packages from 
http://download.openvz.org; I've installed the package 
"virtuozzo-release-7.0.0-13.vz7.x86_64.rpm" (linked in the 
announcement at the head of this thread) on an existing OVZ 7 pre-release.


Looking at the output below, it seems you still have the older
virtuozzo-release-7.0.0-11.vz7.x86_64 installed, not 7.0.0-13.
Please upgrade virtuozzo-release to the one from
http://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/
and retry.



Here is the command output:

[root@hachi ~]# for f in /etc/yum.repos.d/*repo; do printf '%s\t' $f; 
rpm -qf $f; done

/etc/yum.repos.d/cloudlinux.repo  cloudlinux-release-7.1-0.1.el7.x86_64
/etc/yum.repos.d/cloudlinux-virtuozzo.repo 
 virtuozzo-release-7.0.0-11.vz7.x86_64

/etc/yum.repos.d/factory.repo virtuozzo-release-7.0.0-11.vz7.x86_64
/etc/yum.repos.d/virtuozzo.repo virtuozzo-release-7.0.0-11.vz7.x86_64
[root@hachi ~]#






Joe

On Fri, Sep 11, 2015 at 3:18 PM, Kirill Kolyshkin <kolysh...@gmail.com> wrote:


Where did you get it from? Please provide the output of

for f in /etc/yum.repos.d/*repo; do printf '%s\t' $f; rpm -qf $f; done

On 11 September 2015 at 14:09, jjs - mainphrame <j...@mainphrame.com> wrote:

Could someone perhaps volunteer the IP address of
"kojistorage.eng.sw.ru <http://kojistorage.eng.sw.ru>" to
allow working around the breakage?

...
Downloading packages:
No Presto metadata available for factory
rsync-3.0.9-15.vz7.7.x86_64.rp FAILED

http://kojistorage.eng.sw.ru/mash/latest/latest/x86_64/os/Packages/r/rsync-3.0.9-15.vz7.7.x86_64.rpm:
[Errno 14] curl#6 - "Could not resolve host:
kojistorage.eng.sw.ru <http://kojistorage.eng.sw.ru>; Name or
service not known"
Trying other mirror.


Error downloading packages:
  rsync-3.0.9-15.vz7.7.x86_64: [Errno 256] No more mirrors to try.


On Fri, Sep 11, 2015 at 9:39 AM, Kir Kolyshkin <k...@openvz.org> wrote:

My interpretation of the below is:

1. There is a "factory" yum repo now available for VZ7.
The meaning of factory
is the same as "rawhide" for Fedora or "sid" for Debian,
in other words this is the
bleeding edge, latest untested packages.

2. In order to use it, you have to do the following:

* Upgrade virtuozzo-release package to the one available in
http://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/

* Enable the factory repo by running "yum-config-manager
--enable factory".

* Run "yum update".

3. If you want to switch back to "latest stable" version:

* Disable the factory repo by running "yum-config-manager
--disable factory".

* Run "yum distro-sync".

4. If yum-config-manager is not available, run "yum
install yum-utils".


Sergey, please let us know if there's anything I
misinterpreted or missed.

Kir.


On 09/11/2015 06:06 AM, Sergey Bronnikov wrote:

Hello, everyone

in July 2015 we have released Virtuozzo 7 Technical
Preview [1]
when we considered branch 7.0-Beta1 as stable.

Beside Beta1 we continue Virtuozzo development in
another branch - 7.0-Beta2.
The main goal for next milestone is working virtual
machines based on KVM
hypervisor. It is highly recommended to test this
branch on some of your
servers as build from this branch will be stable some
time later.

To bring Beta2 packages to installed Virtuozzo 7
instance you should install
package virtuozzo-release on your server and run 'yum
update'.  Take a link to
package 'virtuozzo-release' in factory repository [2].
Don't forget to enable
factory YUM repository before update via yum.
Now installation of package with new release failed
due to conflicts [3]
so probably you will need to use option "--force" on
installation.

YUM repositories


Each Virtuozzo 7 installation instance has amount of
YUM repositories:
- links to stable branch are in file virtuozzo.repo;
- links to unstable branch are i

Re: [Users] [Announce] Virtuozzo 7 unstable branch

2015-09-11 Thread Kir Kolyshkin

My interpretation of the below is:

1. There is a "factory" yum repo now available for VZ7. The meaning of 
factory
is the same as "rawhide" for Fedora or "sid" for Debian, in other words 
this is the

bleeding edge, latest untested packages.

2. In order to use it, you have to do the following:

* Upgrade virtuozzo-release package to the one available in
http://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/

* Enable the factory repo by running "yum-config-manager --enable factory".

* Run "yum update".

3. If you want to switch back to "latest stable" version:

* Disable the factory repo by running "yum-config-manager --disable 
factory".


* Run "yum distro-sync".

4. If yum-config-manager is not available, run "yum install yum-utils".
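
Put together, switching to factory and back looks roughly like this (a sketch;
the virtuozzo-release file name is a placeholder - take the actual one from the
URL in step 2):

# switch to the bleeding edge
yum install -y yum-utils      # only if yum-config-manager is missing (step 4)
rpm -Uvh http://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/virtuozzo-release-<version>.rpm
yum-config-manager --enable factory
yum update

# switch back to the latest stable
yum-config-manager --disable factory
yum distro-sync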


Sergey, please let us know if there's anything I misinterpreted or missed.

Kir.

On 09/11/2015 06:06 AM, Sergey Bronnikov wrote:

Hello, everyone

in July 2015 we released the Virtuozzo 7 Technical Preview [1],
at which point we considered the 7.0-Beta1 branch stable.

Besides Beta1, we continue Virtuozzo development in another branch - 7.0-Beta2.
The main goal for the next milestone is working virtual machines based on the KVM
hypervisor. It is highly recommended to test this branch on some of your
servers, as builds from this branch will become stable some time later.

To bring Beta2 packages to an installed Virtuozzo 7 instance you should install
the virtuozzo-release package on your server and run 'yum update'.  Take the link to
the 'virtuozzo-release' package from the factory repository [2]. Don't forget to enable
the factory YUM repository before updating via yum.
Currently installation of the package with the new release fails due to conflicts [3],
so you will probably need to use the "--force" option during installation.

YUM repositories
================

Each Virtuozzo 7 installation instance has a number of YUM repositories:
- links to the stable branch are in the file virtuozzo.repo;
- links to the unstable branch are in the file factory.repo and disabled by default;

Links
=====
1. http://lists.openvz.org/pipermail/announce/2015-July/000617.html
2. http://download.openvz.org/virtuozzo/factory/x86_64/os/Packages/v/
3. https://bugs.openvz.org/browse/OVZ-6532
4. https://openvz.org/Roadmap

--
sergeyb@
___
Announce mailing list
annou...@openvz.org
https://lists.openvz.org/mailman/listinfo/announce


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Server is without network after install

2015-09-08 Thread Kir Kolyshkin



On 09/08/2015 12:47 PM, Volker Janzen wrote:

Hi,

is it possible to view this ticket publicly? Jira is prompting for a login.


It's working for me even when I'm logged out; the problem
was probably the wrong link. Here's the correct one:

https://bugs.openvz.org/browse/OVZ-6454




Regards,
 Volker



On 08.09.2015 at 18:42, Сергей Мамонов wrote:



Hello!
It looks like this - https://bugs.openvz.org/browse/OVZ-6454 .


2015-09-08 18:43 GMT+03:00 Volker Janzen:


Hi,

I used the quick install https://openvz.org/Quick_installation to
install on a CentOS 7.1:

yum localinstall

http://download.openvz.org/virtuozzo/releases/7.0/x86_64/os/Packages/v/virtuozzo-release-7.0.0-10.vz7.x86_64.rpm
yum install -y prlctl prl-disp-service vzkernel
reboot

The kernel 3.10.0-123.1.2.vz7.5.29 is started, but the server has
no active networking and cannot be reached.

Where can I start searching for the problem? The server cannot be
accessed physically.


Regards,
  Volker

___
Users mailing list
Users@openvz.org 
https://lists.openvz.org/mailman/listinfo/users





___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users




Re: [Users] live migration inside one physical node

2015-09-08 Thread Kir Kolyshkin

On 09/08/2015 06:12 AM, Nick Knutov wrote:

  This way takes time. It's definitely not _live_ migration (


You mean, ploop copy is not used? Yes I think you can use ploop copy
for moving the top ploop delta to another location. Here's how:

0. figure out $VE_ROOT and $VE_PRIVATE (use vzlist -H -o root,private 
$CTID).


1. figure out ploop device (/dev/ploopXXX) and top delta (relative to 
VE_PRIVATE)

for the given CT -- reuse get_ploop_info() function from vzmigrate script.

2. copy everything but top delta to new location (in vzmigrate we do
rsync --exclude $VE_PRIVATE/$TOP_DELTA, guess you can do something
similar with cp).

3. run ploop copy; for the local case it will be as easy as

  ploop copy -s $PLOOP_DEV -d $NEW_VE_PRIVATE/$TOP_DELTA -F "vzctl 
suspend $CTID --suspend"


4. move the CT mount to the new place:

 mount --move $VE_PRIVATE $NEW_VE_PRIVATE

5. change VE_PRIVATE:

 vzctl set $CTID --private $NEW_VE_PRIVATE --save

6. restore container

 vzctl suspend $CTID --resume

With these details, you'll be able to write the proper script in no time,
and I'll be happy to review it for you.
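
Put together, the steps above look roughly like this (a sketch only:
$PLOOP_DEV and $TOP_DELTA have to be filled in as described in step 1,
and error handling is omitted):

CTID=123
NEW_VE_PRIVATE=/vz2/private/$CTID
VE_PRIVATE=$(vzlist -H -o private $CTID)
PLOOP_DEV=/dev/ploopXXXXX                # from step 1 (get_ploop_info)
TOP_DELTA=root.hdd/root.hdd.XXXX         # relative to $VE_PRIVATE, from step 1

mkdir -p $NEW_VE_PRIVATE
rsync -a --exclude $TOP_DELTA $VE_PRIVATE/ $NEW_VE_PRIVATE/     # step 2
ploop copy -s $PLOOP_DEV -d $NEW_VE_PRIVATE/$TOP_DELTA \
    -F "vzctl suspend $CTID --suspend"                          # step 3
mount --move $VE_PRIVATE $NEW_VE_PRIVATE                        # step 4
vzctl set $CTID --private $NEW_VE_PRIVATE --save                # step 5
vzctl suspend $CTID --resume                                    # step 6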

Kir.




08.09.2015 14:00, Kevin Holly [Fusl] wrote:
> Hi!
>
> You can just do "vzctl suspend CTID", move the container to another place
> and then restore it there with "vzctl restore CTID" after you changed the
> configuration file.
>
> On 09/08/2015 07:14 AM, Nick Knutov wrote:
> > Is it possible to do live migration between physical disks inside one
> > physical node?
> >
> > I suppose the answer is still no, so the question is what is possible to
> > do for this?


- -- 
Best Regards,

Nick Knutov
http://knutov.com
ICQ: 272873706
Voice: +7-904-84-23-130
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.22 (MingW32)

iQEcBAEBAgAGBQJV7t7RAAoJELne4KEUgITtOcEH/0R1ACFQgJp6rcJ+2l17JvUp
OdvzmBhRoWiNSHRvxlQTtGatfKY3lCFiFUlDlNu+mdkfQ5HKszuG4LM5n4zQ5brj
MKtbGy7nQX0+9e/lujtLuHPF0tjLgTgevlWibJncvfDnvErvy2cvNyuVoztH9wS1
vXcfexBhRR5pGkJTSNUqBPe/mfN0AkzkmOXGyuAfRPc2r6tx7AgMV90mPyHSaA7s
04ouKfDATOG/ReUbxILabCVttAMlyj1tZvQSOU7S9MrXQ/R5PqN8HS60AcJ6wlFs
Jzij4B53jUnyNrFMX4P8uSbF4rlxF+g2qwv4bdPRRUedgosm2PUvUs1UUVBRgHU=
=xNm3
-END PGP SIGNATURE-



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users




[Users] No (was Re: bugs.openvz.org down?)

2015-09-04 Thread Kir Kolyshkin

On 09/04/2015 02:54 PM, Kir Kolyshkin wrote:

Is it just me, or is bugs.openvz.org down?


The site is working now, sorry for the noise.



I wanted to use it to report that I'm not able to cache the debian-8 template:


Filed https://bugs.openvz.org/browse/OVZ-6525

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] bugs.openvz.org down?

2015-09-04 Thread Kir Kolyshkin

Is it just me, or is bugs.openvz.org down?

I wanted to use it to report that I'm not able to cache the debian-8 template:

TMPL=debian-8.0-x86_64 # Template to use
vzpkg create cache $TMPL
...
...
Setting up python3.4-minimal (3.4.2-1) ...
Fatal Python error: Failed to open /dev/urandom
Aborted
dpkg: error processing package python3.4-minimal (--install):
 subprocess installed post-installation script returned error exit 
status 134

Processing triggers for libc-bin (2.19-18) ...
Errors were encountered while processing:
 python3.4-minimal
Error: /usr/bin/dpkg failed, exitcode=1
Stopping the Container ...
Forcibly stop the Container...
Command failed: Entity is not registered (errcode 3)
Cannot rmdir 
/sys/fs/cgroup/devices/31f7bb0e-1d22-4c5a-bb7b-61af4606f918/system.slice: Device 
or resource busy
Cannot rmdir 
/sys/fs/cgroup/devices/31f7bb0e-1d22-4c5a-bb7b-61af4606f918/: Device or 
resource busy

Container was stopped
Unmount image: /var/tmp//vzpkg.NenVEN/cache-private/root.hdd
Unmounting file system at /var/tmp/vzpkg.NenVEN/cache-root
Unmounting device /dev/ploop50741
Container is unmounted
[root@kir-vz7b-1 ~]# rpm -qa debian\*
debian-8.0-x86_64-ez-7.0.0-2.vz7.noarch

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Show stopper while trying out Openvz 7 beta 1

2015-08-03 Thread Kir Kolyshkin



On 08/03/2015 02:34 AM, Sergey Bronnikov wrote:

Hi

yes, because Virtuozzo doesn't use precreated OpenVZ templates.
Proper way is:
- install required template via yum (CentOS 6 x86_64 installed by default)
- run something like: prlctl create 100 --ostemplate centos-6-x86_64 --vmtype=ct
prlctl will create cache and then create container


As far as I know, vzctl create (as well as prlctl create) should trigger
template metadata installation and "vzpkg create cache". It's not happening,
so it looks like a bug.

Just in case -- if I'm wrong and vzctl is not able to trigger template metadata
installation and cache creation, the error message should be clearer.



We will publish template management guide on http://docs.openvz.org/
but right now you can use the same doc from Virtuozzo 6 page -
http://www.odin.com/products/virtuozzo/documentation/

On 11:16 Fri 24 Jul , jjs - mainphrame wrote:

Hello,

Following up, it appears that my only problem was that one now must create
an OS template cache, as in PVC, and the documentation on the wiki did not
yet reflect that fact - but once the template cache is created, everything
falls into place. The old precreated templates are apparently not used.

Regards,

J



On Thu, Jul 23, 2015 at 5:13 PM, jjs - mainphrame 
wrote:


Hello,

it appears the bugzilla is not accepting bug reports for openvz 7 yet - at
least there was no option to select rhel/centos 7 as the platform.

At any rate, when I googled the error, I found several people asking about
this show stopper, but nobody answering. Any pointers as to what I can try
would be appreciated.

[root@annie template]# vzctl create 101 --ostemplate centos-7-x86_64
Unable to get appcache tarball name for ve-basic.conf-sample with
ostemplate centos-7-x86_64
Destroying Container private area: /vz/private/101
Warning: Container is in old data format, unregistration skipped
Container private area was destroyed
Creation of Container private area failed
[root@annie template]# 

Regards,

J


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users






Re: [Users] Kernel 3.10 packages availability

2015-07-27 Thread Kir Kolyshkin

On 07/27/2015 06:15 AM, spameden wrote:
It would be nice to somehow get the packaging scripts for linux-image-openvz-amd64,
to try to build the 3.10 kernel from the source provided via the git
repository.


I will publish my scripts some time later this week.



2015-07-25 5:58 GMT+03:00 spameden <mailto:spame...@gmail.com>>:




2015-07-25 5:07 GMT+03:00 Kir Kolyshkin <k...@openvz.org>:



On 07/24/2015 06:38 PM, spameden wrote:

I see there is a 3.10 kernel branch here:
https://src.openvz.org/projects/OVZ/repos/vzkernel/browse

Is it considered stable?


Looks like you missed all the hype about VZ7:
https://lists.openvz.org/pipermail/announce/2015-April/000579.html


Weird, seems I really did miss this announce for some reason.



In short -- this is beta, and RPM packages are available:
https://lists.openvz.org/pipermail/announce/2015-July/000606.html


Alright, i'll give it a shot, thanks.

I think I have some scripts lying around somewhere to convert RPM->DEB, though
I'm not sure if the kernel will work on Debian 7 without issues.



As far as I know, currently no DEB packages are planned, but as
we are
the community we can still make it happen!


Second that, maybe Ola Lundqvist will help with debian scripts
used to provide 2.6.32 kernels and other packages automatically
built for debian?


PS please consider subscribing to announce@:
https://lists.openvz.org/mailman/listinfo/announce


Yeah, I'm subscribed already, but thanks anyways!



___
Users mailing list
Users@openvz.org <mailto:Users@openvz.org>
https://lists.openvz.org/mailman/listinfo/users







___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Kernel 3.10 packages availability

2015-07-24 Thread Kir Kolyshkin



On 07/24/2015 06:38 PM, spameden wrote:
I see there is a 3.10 kernel branch here: 
https://src.openvz.org/projects/OVZ/repos/vzkernel/browse


Is it considered stable?


Looks like you missed all the hype about VZ7:
https://lists.openvz.org/pipermail/announce/2015-April/000579.html

In short -- this is beta, and RPM packages are available:
https://lists.openvz.org/pipermail/announce/2015-July/000606.html

As far as I know, currently no DEB packages are planned, but as we are
the community we can still make it happen!

PS please consider subscribing to announce@:
https://lists.openvz.org/mailman/listinfo/announce
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ZFS vs ploop

2015-07-24 Thread Kir Kolyshkin

On 07/24/2015 05:57 PM, Gena Makhomed wrote:
On 25.07.2015 1:06, Kir Kolyshkin wrote: what I am doing wrong, and 
how I can decrease ploop overhead here?


Most probably it's because of filesystem defragmentation (my item #2
above).
We are currently working on that. For example, see this report:

  https://lwn.net/Articles/637428/


Can this defragmentation tool be used, in the "XFS over ploop over XFS" case,
for defragmenting both filesystems - inside the ploop container and on the HN?


There is no need to defragment the "outer" fs.



Or will defragmentation be used only inside ploop,
to align the internal filesystem to ploop's 1MiB chunks?



This tool is to be used for the inner ploop ext4. As a result, the data will
be less sparse,
and there will be more empty blocks for ploop to discard.

I encourage you to experiment with e4defrag2 and post your results here.
Usage is something like this (assuming default ploop cluster size of 1M, and
you have /dev/ploop12345p1 mounted on /vz/root/123):

e4defrag2 -v -d 255 -m -s 8 -q 999 \
 -a $((64*1024)) \
 -c $((1024*1024 * 1)) \
 -t $((60*10)) \
  /dev/ploop12345p1 /vz/root/123

Try running vzctl compact before and after, and check if defrag helps. You
might want to drop the -a option,
or increase the argument of the -t option. Note that I am not the utility's
author and can hardly help much.


Also, you might try to play with the ploop cluster block size. The default is
1M; maybe you'll
have better luck with a smaller block size (although it was never tested
with blocks of less
than 1M). The block size (in sectors, default is 2048, i.e. 2048 * 512 = 1M)
can be specified
with the ploop init -b option.
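
For example, something like this when creating a new image (a sketch; the
exact ploop init invocation may differ between versions, check the ploop
usage output):

# 1024 sectors * 512 bytes = 512K cluster block (the default -b is 2048, i.e. 1M)
ploop init -s 10g -b 1024 /vz/private/$CTID/root.hdd/root.hdd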

What else is there in ploop? As far as I know, a partition table, and an 
ext4 journal with
a fixed size of 128MB (its overhead is only major if you create pretty 
small ploop images).

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ZFS vs ploop

2015-07-24 Thread Kir Kolyshkin

On 07/24/2015 05:41 AM, Gena Makhomed wrote:



To anyone reading this, there are a few things here worth noting.

a. Such overhead is caused by three things:
1. creating then removing data (vzctl compact takes care of that)
2. filesystem fragmentation (we have some experimental patches to ext4
 plus an ext4 defragmenter to solve it, but currently it's still in
research stage)
3. initial filesystem layout (which depends on initial ext4 fs size,
including inode requirement)

So, #1 is solved, #2 is solvable, and #3 is a limitation of the used
file system and can be mitigated
by properly choosing the initial size of a newly created ploop.


this container is compacted every night; during the working day
only new static files are added to the container, so this container does
not involve many "creating then removing data" operations.

current state:

on hardware node:

# du -b /vz/private/155/root.hdd
203547480857/vz/private/155/root.hdd

inside container:

# df -B1
Filesystem   1B-blocks  UsedAvailable Use% 
Mounted on

/dev/ploop55410p1 270426705920  163581190144  94476423168  64% /


used space, bytes: 163581190144

image size, bytes: 203547480857

overhead: ~ 37 GiB, ~ 19.6%

container was compacted at 03:00
by command /usr/sbin/vzctl compact 155

run container compacting right now:
9443 clusters have been relocated

result:

used space, bytes: 163604983808

image size, bytes: 193740149529

overhead: ~ 28 GiB, ~ 15.5%

I think it is not a good idea to run ploop compaction more frequently
than once per day at night - so we need to take into account
not the minimal value of the overhead but the maximal one, after 24 hours
of the container working in normal mode - when planning disk space
on the hardware node for all ploop images.

So the real overhead of ploop can be assessed only
after at least 24h of the container being in the running state.
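
For reference, the overhead numbers above can be re-checked at any time with
the same two commands (a sketch for CT 155):

# image size on the hardware node
du -b /vz/private/155/root.hdd
# space actually used inside the container
vzctl exec 155 df -B1 /
# overhead = image size minus the "Used" value reported by df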


An example of the #3 effect is this: if you create a very large filesystem
initially (say, 16TB) and then downsize it (say, to 1TB), filesystem
metadata overhead will be quite big. Same thing happens if you ask
for lots of inodes (here "lots" means more than a default value which
is 1 inode per 16K of disk space). This happens because ext4
filesystem is not designed to shrink. Therefore, to have lowest
possible overhead you have to choose the initial filesystem size
carefully. Yes, this is not a solution but a workaround.


as you can see by inodes:

# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/ploop55410p1 16777216 1198297 15578919 8% /

initial filesystem size was 256 GiB:

(16777216 * 16 * 1024) / 1024.0 / 1024.0 / 1024.0 == 256 GiB.

current filesystem size is also 256 GiB:

# cat /etc/vz/conf/155.conf | grep DISKSPACE
DISKSPACE="268435456:268435456"

so there is no extra "filesystem metadata overhead".


Agree, this looks correct.



what I am doing wrong, and how I can decrease ploop overhead here?


Most probably it's because of filesystem defragmentation (my item #2 above).
We are currently working on that. For example, see this report:

 https://lwn.net/Articles/637428/



I found only one way: migrate to ZFS with lz4 compression turned on.


Also note, that ploop was not designed with any specific filesystem in
mind, it is universal, so #3 can be solved by moving to a different 
fs in the future.


XFS currently does not support filesystem shrinking at all:
http://xfs.org/index.php/Shrinking_Support


Actually ext4 doesn't support shrinking either; in ploop we worked around it
using a hidden balloon file. It appears to work pretty well, with the only
downside being that if you initially create a very large ploop and then shrink it
considerably, the ext4 metadata overhead will be larger.



BTRFS is not production-ready, and no other variants
except ext4 will be available for use with ploop in the near future.


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ZFS vs ploop

2015-07-23 Thread Kir Kolyshkin



On 07/23/2015 06:22 AM, Сергей Мамонов wrote:
And many added to bugzilla. And many already fixed from you and other 
guys from OpenVZ team.
But the all picture, unfortunately, it has not changed cardinally, 
yet. Some people afraid use it, yet.


PS: And container suspend has failed without iptables-save since 2007 )
https://bugzilla.openvz.org/show_bug.cgi?id=3154
Whereas with a missing ip6tables-save it works correctly.


All of the official templates have the iptables-save binary. If you create
your own templates, please make sure they have that binary
(even if it's just a symlink to /bin/true).

There is no other way to save/restore iptables and the kernel is just 
being strict.

It's a feature, not a bug.

Similar thing, a container without /sbin/init can not be started, and 
it's not a bug.
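
For custom templates, a minimal way to satisfy this (a sketch; the exact path
inside the CT depends on the distro):

# inside the container (or chroot'ed into the template area),
# if the real iptables tools are not wanted:
ln -s /bin/true /sbin/iptables-save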


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ZFS vs ploop

2015-07-23 Thread Kir Kolyshkin
On 22 July 2015 at 19:44, Kir Kolyshkin  wrote:

>
> Next thing, you can actually use shared base deltas for containers, and
> although it is not
> enabled by default, but quite possible and works in practice. The key is
> to create a base delta
> and use it for multiple containers (via hardlinks).
>
> Here is a quick and dirty example:
>
> SRCID=50 # "Donor" container ID
> vztmpl-dl centos-7-x86_64 # to make sure we use the latest
> vzctl create $SRCID --ostemplate centos-7-x86_64
>


Addition to this place:

 vzctl mount $SRCID
 sleep 20
 vzctl umount $SRCID


This is required because ploop uses lazy inode table initialization when
formatting ext4,
and it leads to inode table init (lots of writes) being done later, when
the container is first
mounted. If we don't mount it here and wait till it does its job, the code
below will multiply
the data written by a factor of 1000 which we don't want.

Also, Instead of "sleep 20" we could use something like

 while killall -0 ext4lazyinit; do sleep 1; done

to wait for real until lazy init finishes.



> vzctl snapshot $SRCID
> for CT in $(seq 1000 2000); do \
>   mkdir -p /vz/private/$CT/root.hdd /vz/root/$CT; \
>   ln /vz/private/$SRCID/root.hdd/root.hdd
> /vz/private/$CT/root.hdd/root.hdd; \
>   cp -nr /vz/private/$SRCID/root.hdd /vz/private/$CT/; \
>   cp /etc/vz/conf/$SRCID.conf /etc/vz/conf/$CT.conf; \
>done
> vzctl set $SRCID --disabled yes --save # make sure we don't use it
>
> This will create 1000 containers (so make sure your host have enough RAM),
> each having about 650MB files, so 650GB in total. Host disk space used
> will be
> about 650 + 1000*1 MB before start (i.e. about 2GB) , or about 650 +
> 1000*30 MB
> after start (i.e. about 32GB). So:
>
> real data used inside containers near 650 GB
> real space used on hard disk is near 32 GB
>
> So, 20x disk space savings, and this result is reproducible. Surely it
> will get worse
> over time etc., and this way of using plooop is neither official nor
> supported/recommended,
> but it's not the point here. The points are:
>  - this is a demonstration of what you could do with ploop
>  - this shows why you shouldn't trust any numbers
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ZFS vs ploop

2015-07-23 Thread Kir Kolyshkin

On 07/22/2015 11:59 PM, Сергей Мамонов wrote:

>1. creating then removing data (vzctl compact takes care of that)
>So, #1 is solved

Only partially, in fact.
1. Compact "eats" a lot of resources, because of the heavy use of the disk.
2. You need to compact your ploop very, very regularly.

On our nodes, where we run compact every day with a 3-5T /vz/, the daily
delta is about 4-20% of the space!

Every day it must clean 300-500+ GB.

And it does not clean everything; as an example -

[root@evo12 ~]# vzctl compact 75685
Trying to find free extents bigger than 0 bytes
Waiting
Call FITRIM, for minlen=33554432
Call FITRIM, for minlen=16777216
Call FITRIM, for minlen=8388608
Call FITRIM, for minlen=4194304
Call FITRIM, for minlen=2097152
Call FITRIM, for minlen=1048576
0 clusters have been relocated
[root@evo12 ~]# ls -lhat /vz/private/75685/root.hdd/root.hdd
-rw------- 1 root root 43G Jul 20 20:45
/vz/private/75685/root.hdd/root.hdd

[root@evo12 ~]# vzctl exec 75685 df -h /
Filesystem Size  Used Avail Use% Mounted on
/dev/ploop32178p1   50G   26G   21G  56% /
[root@evo12 ~]# vzctl --version
vzctl version 4.9.2


This is either #2 or #3 from my list, or both.



>My point was, the feature works fine for many people despite this bug.

Not fine, but we need it very much, for migration and not only that. So we
use it anyway; we have no alternative, in fact.
And it is one of the bugs. Live migration regularly fails, because vzctl
cannot restore the container correctly after suspend.


You really need to file bugs in case you want fixes.


CPT is a pain, in fact. But I want to believe that CRIU will fix everything =)

And ext4-only with ploop is not a good case, and not a modern case either.
For example, on some big nodes we have several /vz/ partitions, because the
RAID controller cannot put all the disks into one RAID10 logical device. And
several /vz/ partitions are not comfortable.

And it is less flexible than one zpool, for example.



2015-07-23 5:44 GMT+03:00 Kir Kolyshkin <k...@openvz.org>:




On 07/22/2015 10:08 AM, Gena Makhomed wrote:

    On 22.07.2015 8:39, Kir Kolyshkin wrote:

1) currently even suspend/resume not work reliable:
https://bugzilla.openvz.org/show_bug.cgi?id=2470
- I can't suspend and resume containers without bugs.
and as result - I also can't use it for live migration.


Valid point, we need to figure it out. What I don't understand
is how lots of users are enjoying live migration despite
this bug.
Me, personally, I never came across this.


Nevertheless, steps to 100% reproduce bug provided in bugreport.


I was not saying anything about the bug report being bad/incomplete.
My point was, the feature works fine for many people despite this bug.


2) I see in google many bugreports about this feature:
"openvz live migration kernel panic" - so I prefer make
planned downtime of containers at the night instead
of unexpected and very painful kernel panics and
complete reboots in the middle of the working day.
(with data lost, data corruption and other "amenities")


Unlike the previous item, which is valid, this is pure FUD.


Compare two situations:

1) Live migration not used at all

2) Live migration used and containers migrated between HN

In which situation possibility to obtain kernel panic is higher?

If you say "possibility are equals" this means
what OpenVZ live migration code has no errors at all.

Is it feasible? Especially if you see OpenVZ live migration
code volume, code complexity and grandiosity if this task.

If you say "for (1) possibility is lower and for (2)
possibility is higher" - this is the same what I think.

I don't use live migration because I don't want kernel panics.


Following your logic, if you don't want kernel panics, you might want
to not use advanced filesystems such as ZFS, not use containers,
cgroups, namespaces, etc. The ultimate solution here, of course,
is to not use the kernel at all -- this will totally guarantee no
kernel
panics at all, ever.

On a serious note, I find your logic flawed.


And you say what "this is pure FUD" ? Why?


Because it is not based on your experience or correct statistics,
but rather on something you saw on Google followed by some
flawed logic.




4) from technical point of view - it is possible
to do live migration using ZFS, so "live migration"
currently is only one advantage of ploop over ZFS


I wouldn't say so. If you have some real world comparison
of zfs vs ploop, feel free to share. Like density or

Re: [Users] ZFS vs ploop

2015-07-22 Thread Kir Kolyshkin



On 07/22/2015 10:08 AM, Gena Makhomed wrote:

On 22.07.2015 8:39, Kir Kolyshkin wrote:


1) currently even suspend/resume not work reliable:
https://bugzilla.openvz.org/show_bug.cgi?id=2470
- I can't suspend and resume containers without bugs.
and as result - I also can't use it for live migration.


Valid point, we need to figure it out. What I don't understand
is how lots of users are enjoying live migration despite this bug.
Me, personally, I never came across this.


Nevertheless, steps to 100% reproduce bug provided in bugreport.


I was not saying anything about the bug report being bad/incomplete.
My point was, the feature works fine for many people despite this bug.




2) I see in google many bugreports about this feature:
"openvz live migration kernel panic" - so I prefer make
planned downtime of containers at the night instead
of unexpected and very painful kernel panics and
complete reboots in the middle of the working day.
(with data lost, data corruption and other "amenities")


Unlike the previous item, which is valid, this is pure FUD.


Compare two situations:

1) Live migration not used at all

2) Live migration used and containers migrated between HN

In which situation is the possibility of getting a kernel panic higher?

If you say "the possibilities are equal", this means
that the OpenVZ live migration code has no errors at all.

Is that plausible? Especially if you look at the OpenVZ live migration
code volume, code complexity and the grandiosity of this task.

If you say "for (1) the possibility is lower and for (2)
the possibility is higher" - that is the same as what I think.

I don't use live migration because I don't want kernel panics.


Following your logic, if you don't want kernel panics, you might want
to not use advanced filesystems such as ZFS, not use containers,
cgroups, namespaces, etc. The ultimate solution here, of course,
is to not use the kernel at all -- this will totally guarantee no kernel
panics at all, ever.

On a serious note, I find your logic flawed.



And you say that "this is pure FUD"? Why?


Because it is not based on your experience or correct statistics,
but rather on something you saw on Google followed by some
flawed logic.





4) from technical point of view - it is possible
to do live migration using ZFS, so "live migration"
currently is only one advantage of ploop over ZFS


I wouldn't say so. If you have some real world comparison
of zfs vs ploop, feel free to share. Like density or performance
measurements, done in a controlled environment.


Ok.

My experience with ploop:

DISKSPACE was limited to 256 GiB; the real data used inside the container
was near 40-50% of the 256 GiB limit, but the ploop image was a lot bigger -
it used nearly 256 GiB of space on the hardware node. Overhead ~ 50-60%.

I found a workaround for this: run "/usr/sbin/vzctl compact $CT"
via cron every night, and now the ploop image has less overhead.

current state:

on hardware node:

# du -b /vz/private/155/root.hdd
205963399961/vz/private/155/root.hdd

inside container:

# df -B1
Filesystem   1B-blocks  UsedAvailable Use% 
Mounted on

/dev/ploop38149p1 270426705920  163129053184  94928560128  64% /



used space, bytes: 163129053184

image size, bytes: 205963399961

The "ext4 over ploop over ext4" solution's disk space overhead is near 26%,
or about 40 GiB if you look at this overhead in absolute numbers.

This is the main disadvantage of ploop.

And this disadvantage can't be avoided - it is "by design".


To anyone reading this, there are a few things here worth noting.

a. Such overhead is caused by three things:
1. creating then removing data (vzctl compact takes care of that)
2. filesystem fragmentation (we have some experimental patches to ext4
plus an ext4 defragmenter to solve it, but currently it's still in 
research stage)
3. initial filesystem layout (which depends on initial ext4 fs size, 
including inode requirement)


So, #1 is solved, #2 is solvable, and #3 is a limitation of the used 
file system and can be mitigated
by properly choosing the initial size of a newly created ploop.

An example of the #3 effect is this: if you create a very large filesystem
initially (say, 16TB) and then
downsize it (say, to 1TB), filesystem metadata overhead will be quite 
big. Same thing happens
if you ask for lots of inodes (here "lots" means more than a default 
value which is 1 inode
per 16K of disk space). This happens because ext4 filesystem is not 
designed to shrink.
Therefore, to have lowest possible overhead you have to choose the 
initial filesystem size

carefully. Yes, this is not a solution but a workaround.

Also note, that ploop was not designed with any specific filesystem in 
mind, it is

universal, so #3 can be solved by moving to a different fs in the future.

Next thing, you can actually use shared base deltas for con

Re: [Users] SIMFS users

2015-07-22 Thread Kir Kolyshkin

On 07/22/2015 11:31 AM, Gena Makhomed wrote:

my point is there will always be bugs... but to point at a bug report
and give up saying that it isn't stable because of bug report x... or
that some people have had panics at some point in history... well,
that isn't very reflective of the overall picture. ! Nothing
personal.  We just disagree on a few topics.  We probably agree on
way more things though.


Yes, you are right, this is not very reflective,
but as a first approximation you can easily evaluate
the complexity of code by past bug reports; also, evaluating
code quality by the count of vulnerabilities is common practice -
for example, postfix is scored as a high-code-quality mail server
and sendmail/exim as low-code-quality mail servers
based only on the history of vulnerabilities in the past.


Sure, you can compare similar software, say sendmail vs exim vs postfix,
or Chrome vs Firefox. Even then, though, you can fall into a statistics 
trap --

the number of bugs found also depends on number of users, diversity of
use cases. It depends greatly on how many security experts are looking
into code.

For example, at a conference I heard that KVM appears to be
less secure than Xen, and this conclusion was based on number of 
vulnerabilities
discovered during last 3 years or so. A guy from the audience raised his 
hand,
and said he's a security team leader at Google, and that for the last 
few years
they were looking into KVM, trying to make it more secure, so the higher 
amount

of KVM vulnerabilities is basically the result of his team's work.

So, to conclude, you can compare similar products by number of bugs, but
without compensating for many other factors the results of your comparison
can very well be misleading.

Now, what do you want to compare OpenVZ with? LXC? Linux-VServer?
Upstream kernel? RHEL kernel? Ubuntu kernel?

Speaking of live migration in particular, there are basically only two 
complete

implementations of it, both coming from OpenVZ, so you can only compare
OpenVZ in-kernel checkpoint/restore with OpenVZ Checkpoint/Restore In 
Userspace

aka CRIU. Believe me, we do it on a daily basis :)

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] SIMFS users

2015-07-22 Thread Kir Kolyshkin

On 07/22/2015 11:31 AM, Gena Makhomed wrote:

Regarding OpenVZ checkpoint / restore and live migration... it has

worked well for me since it was originally released in 2007 (or was
it 2008?).  While I've had a few kernel panics in the almost 10 years
I've been using OpenVZ (starting with the EL4-based kernel)... I
can't remember what year I had my last one.


But live migration code from OpenVZ kernel is not included into
mainline kernel. And OpenVZ developers need instead create CRIU.

This means what something is wrong with current live migration code?
Or how you can explain, - why live migration not in mainline kernel?



Thanks for asking.

Have you tried merging some of your kernel patches upstream? If you did
you probably
know already that the difficulty of making it happen correlates very
much with patch
complexity, and in-kernel checkpointing is very complex code, making it
very hard to merge.


Besides, it touches many kernel subsystems, excluding maybe drivers, and 
makes the code
more complicated, sometimes in unexpected places, which is bad from the 
POV of subsystem

maintainers.

We are not alone here -- a few other guys tried to merge their 
checkpoint/restore code, one
notable example is Oren Laadan who spent a few years trying to achieve 
it. But long story

short, it's impossible for the reasons specified earlier.

This led us to re-do the whole thing in userspace, with just a minimal 
intervention from the kernel.
That "minimal" is currently defined as about 150 patches that we merged 
upstream for CRIU to work;

you can see the complete list at http://criu.org/Upstream_kernel_commits

So, the answer to "is your checkpointing code in the kernel" is "yes,
it is, it's just that we minimized the
kernel code impact by moving most of the stuff to userspace". Yes, we 
upstreamed our checkpoint/restore
implementation (by rewriting it), and now thanks to CRIU/OpenVZ others 
(like Docker, CoreOS, LXC, LXD)
can use checkpoint/restore, its availability is no longer limited to 
OpenVZ users.


Kir.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] SIMFS users

2015-07-22 Thread Kir Kolyshkin



On 07/22/2015 01:21 AM, Pavel Gashev wrote:

On 22/07/15 00:11, "users-boun...@openvz.org on behalf of Kir Kolyshkin"
 wrote:

Other "why not simfs" considerations are listed at
http://openvz.org/Ploop/Why#Before_ploop

That's good, but there is an issue with using ploops. If you want to
live-migrate a huge container to another node, it might take hours (or
even days). Live migration might fail at the CPT stage. It's not an issue with
simfs since you can use the --keep-dst option, and continue the migration in
offline mode. Using ploops you have to start the migration from the beginning.


No, in fact you don't.

I am not quite sure about Virtuozzo tools, but with my tools (i.e. vzmigrate
from what is now called "legacy OpenVZ") you can use the same "-r no 
--keep-dst"
flags as for simfs, with a very similar effect -- only the top delta will
need to be re-copied
again; for all other deltas rsync will be used, just as in the case of simfs.

Oh, and if the top delta is large, just create a snapshot before trying 
live migration,
or use --snapshot switch to vzmigrate (currently, though, you'll need to 
delete

the snapshot manually after migration -- this is something in my TODO list
I can't find time to implement).
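
So a restartable migration of a big ploop container looks roughly like this
(a sketch using the flags mentioned above; the snapshot UUID is the one that
vzctl snapshot printed):

vzctl snapshot $CTID                      # optional: push most data into a lower delta
vzmigrate -r no --keep-dst $DEST $CTID    # if this fails at the CPT stage...
vzmigrate -r no --keep-dst $DEST $CTID    # ...the retry only re-copies the top delta
vzctl snapshot-delete $CTID --id <UUID>   # manual cleanup after migration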


In other words, a "showstopper" with ploops is that you can't continue
stopped/failed migrations.


Thanks for the detailed explanation, it makes sense, but definitely not for
OpenVZ -- I am pretty sure it works the way I described above.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] SIMFS users

2015-07-21 Thread Kir Kolyshkin



On 07/21/2015 07:56 PM, Scott Dowdle wrote:

Greetings,

- Original Message -

ZFS is really "The Last Word in File Systems",
and now you can just use it for free,
without reinventing the wheel.

OpenVZ + ZFS or Virtuozzo + ZFS == atom bomb,
killer feature with horrible devastation power.

Or - you just forcing users to migrate from OpenVZ
to CentOS+KVM over ZFS and/or CentOS+Docker over ZFS.

Whatever.  So many ZFS users seem to be such fanatics they will abandon 
anything that gets in its way... while none of the top 10 Linux distros will 
ship it.  Folks like Jesse Smith from Distrowatch say there is nothing wrong 
with distros shipping ZFS... as long as it continues to be packaged separately 
from the kernel (a module rather than compiled in)... but still... no one ships 
it.  I believe Debian is working on changing that and I wish them luck.

I've tried it.  I've read the recipes.  Some say you have to dedicate 1GB of 
RAM for every TB of storage.  To build a high performance ZFS-based fileserver 
you really want to custom design the thing with the right combination of read 
cache disks, write cache disks, etc.  It has compression, encryption, dedup 
(not sure if that is in the Linux version yet), etc.  I'm guessing if you just 
want to ZFS for local stuff (VMs, containers, server applications, etc) you 
don't have to worry as much getting an optimal setup as you would for a 
dedicated fileserver.


I second that. ZFS seems to be pretty hungry for RAM, and the requirements
are even higher if you want the deduplication feature to work. That means either
lower container density (you can run fewer CTs than if you use ploop), or
higher memory
requirements (you need more RAM to run the same number of CTs on ZFS).


I haven't really had a reason to use it.  ZFS + OpenVZ = atomic bomb?  Whatever.

I'd prefer to see BTRFS mature... and once that is in every Linux distro by 
default... and widely deployed... I don't think ZFS will be that relevant 
except among the fanatics.  Now having said that I realize it could take years 
before BTRFS is considered good enough by most folks.  I certain hope it 
doesn't take that long but who knows?  No need to tell me how much BTRFS sucks 
and ZFS rocks.

Please don't provide me with why ZFS is the god of filesystems.  I've heard it 
all before.  If you use and like all of those features and ZFS works great for 
you... go for it... more power to you.

Regarding OpenVZ checkpoint / restore and live migration... it has worked well 
for me since it was originally released in 2007 (or was it 2008?).  While I've 
had a few kernel panics in the almost 10 years I've been using OpenVZ (starting 
with the EL4-based kernel)... I can't remember what year I had my last one.

I see people come into the #openvz IRC channel with bugs all of the time.  The 
vast majority of the time it turns out they are way behind the current stable 
versions of the vzkernel and vzctl.  They really do fix bugs in every release 
so why people seem to think it is ok to ignore updates for months or years is 
beyond me.  I have no idea if you do or not... but hopefully you can feel my 
pain.  Are there bugs in the bug reporting system?  Sure.  People say Debian is 
the most stable Linux distro around (I'm not a Debian user) but if you look in 
their bug reporting system I'm sure there are thousands (or more likely, tens 
of thousands) of bug reports.  I guess one should expect that with tens of 
thousands of packages... but my point is there will always be bugs... but to 
point at a bug report and give up saying that it isn't stable because of bug 
report x... or that some people have had panics at some point in history... 
well, that isn't very reflective of the overall picture!


   Nothing personal.  We just disagree on a few topics.  We probably agree on 
way more things though.

I don't use checkpoint and restore directly... but do with vzmigrate.  Every 
time I upgrade vzkernel I use live migration... so I can have as much container 
uptime as possible.  I've had a few times when live migration didn't work but 
in every case it failed safely and I was able to do an offline migration 
instead.  The times it didn't work were when my CPUs differed greatly between 
hosts... or the vzkernel I was running differed too much (I run testing kernels 
on a host or two).  If you don't have a use for offline nor online migration... 
ok... but lots of people do use it... and if ZFS means it can't be used... that 
is just another reason (for me) not to use ZFS.

TYL,




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] live migration (Re: SIMFS users)

2015-07-21 Thread Kir Kolyshkin



On 07/21/2015 06:41 PM, Gena Makhomed wrote:

For one thing, I wonder how you use live migration with zfs,
can you please tell us?


I don't use live migration at all.

several reasons:

1) currently even suspend/resume does not work reliably:
https://bugzilla.openvz.org/show_bug.cgi?id=2470
- I can't suspend and resume containers without bugs,
and as a result I also can't use it for live migration.


Valid point, we need to figure it out. What I don't understand
is how lots of users are enjoying live migration despite this bug.
Me, personally, I never came across this.



2) I see in Google many bug reports about this feature
("openvz live migration kernel panic"), so I prefer to take
planned downtime of containers at night instead of
unexpected and very painful kernel panics and
complete reboots in the middle of the working day
(with data loss, data corruption and other "amenities").


Unlike the previous item, which is valid, this is pure FUD.



3) my current hosting provider doesn't allow migrating
an IP between different servers in its datacenter.

4) from a technical point of view it is possible
to do live migration using ZFS, so "live migration"
is currently the only advantage of ploop over ZFS


I wouldn't say so. If you have some real world comparison
of zfs vs ploop, feel free to share. Like density or performance
measurements, done in a controlled environment.


- but this is not a "killer feature",
only a temporary situation. In the near future this
imbalance can be fixed, if anyone really needs
live migration and ZFS with OpenVZ
(using ZFS snapshots, for example).

And, as a result, in the near future:

"simfs over ZFS" will always be better (for app hosting)
than "ext4 over ploop over ext4"
- so there will be no reason at all to use ploop.

Also, ploop is not in the mainline kernel
and probably never will be. (?)

Docker already contains ZFS storage driver.


I am working on a ploop graph driver for Docker.



ZFS already can be used with mainline kernel.


But OpenVZ can't.



With each new version - ZFS is better and better.


Do you imply that ploop is worse and worse with each new version?


ZFS is really "The Last Word in File Systems",
and now you can just use it for free,
without reinventing the wheel.

OpenVZ + ZFS or Virtuozzo + ZFS == atom bomb,
killer feature with horrible devastation power.

Or you are just forcing users to migrate from OpenVZ
to CentOS+KVM over ZFS and/or CentOS+Docker over ZFS.



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] SIMFS users

2015-07-21 Thread Kir Kolyshkin



On 07/21/2015 04:24 PM, Gena Makhomed wrote:

On 22.07.2015 0:11, Kir Kolyshkin wrote:


The biggest problem with simfs appears to be security. We have recently
found a few bugs (not in simfs per se, but in the kernel in general,
i.e. these
are not our bugs for the most part) that can be exploited to escape
the simfs and let a container access the host file system. One single bug
like that should make everyone who is at least slightly concerned about
security move to ploop. And there were a few :(


simfs is needed for using OpenVZ with ZFS


Other "why not simfs" considerations are listed at
http://openvz.org/Ploop/Why#Before_ploop


there are three levels:

1. before ploop: simfs over ext4
2.   with ploop: ext4 over ploop over ext4
3.  after ploop: simfs over ZFS

comparing (1) and (2), ploop wins,
but comparing (2) with (3), simfs wins!


For one thing, I wonder how you use live migration with zfs,
can you please tell us?
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ioacct weirdness?

2015-07-21 Thread Kir Kolyshkin
Just one thing -- ioacct accounts at the VFS level, not at the actual disk
I/O level. Also, ioacct accounts all reads and writes, including tmpfs,
/proc, /sys, etc.


Say, if you rewrite a file (a real file on disk) a thousand times real 
quick, it does not
mean the data will be written to a disk a thousand times -- more 
probably just once.


Maybe you want to look into iostat rather than ioacct?

https://openvz.org/IO_statistics
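
For illustration, here is a rough way to see this in action (the CT ID and the
test file path below are just placeholders): rewrite the same 1 MB file a
thousand times inside the CT and compare the VFS-level "write" counter before
and after. The counter grows by roughly 1 GB even though the disk most likely
sees only a small fraction of that as real I/O.

CTID=101
grep write /proc/bc/$CTID/ioacct
vzctl exec $CTID 'for i in $(seq 1000); do dd if=/dev/zero of=/tmp/ioacct-test bs=1M count=1 conv=notrunc 2>/dev/null; done'
grep write /proc/bc/$CTID/ioacct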


On 07/21/2015 12:39 PM, Mark Johanson wrote:

So as it was explained to me before, the vm-specific ioacct file resets when 
the vm is restarted. Which seemed to work and make sense until today. I have a 
vm that according to that file has made 403G of writes since it was restarted 
yesterday.

(1 day) VE  403 GB in average daily writes and 403 GB in total since it was 
last rebooted
(51 days) VE  9 GB in average daily writes and 487 GB in total since it was 
last rebooted
(11 days) VE  5 GB in average daily writes and 60 GB in total since it was 
last rebooted
(51 days) VE  2 GB in average daily writes and 132 GB in total since it was 
last rebooted

*above day listings are from me just checking on their uptimes.

My Script runs as:

for veid in `vzlist -Hoveid`; do #run in all running vms
   a=`vzctl exec $veid uptime | awk '{print $3}'` #grab uptime from vm
   b=`grep write /proc/bc/$veid/ioacct | awk '{print $2}'` #grab write 
information from ioacct file for vm
   c=`echo "$b/$a" | bc` #divide writes by days up
   d=`echo "$c/1073741824" | bc` #daily writes in GB
   e=`echo "$b/1073741824" | bc` #total writes since uptime in GB
   echo "VE $veid $d GB in average daily writes and $e GB in total since it was last 
rebooted" #spit out results per vm
done

The ioacct file for this vm contains:

   read  121361661952
   write 433471070208
   dirty2439431475200
   cancel   2005960404992
   missed   0
   syncs_total  5
   fsyncs_total179645
   fdatasyncs_total 48312
   range_syncs_total0
   syncs_active 0
   fsyncs_active0
   fdatasyncs_active0
   range_syncs_active   0
   io_pbs   0
   fuse_requests0
   fuse_bytes   0
   
   So based on the write numbers (I do understand that dirty is unwritten and canceled is data not written during a flush) it comes out to 403G in writes since its reboot Mon Jul 20 08:46.
   
   Am I completely off on my math? Seemed to be working just fine until this vm spit out a 403G number. Am I missing something else?



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] SIMFS users

2015-07-21 Thread Kir Kolyshkin



On 07/21/2015 08:51 AM, Michael Stauber wrote:

Hi Scott,


Ummm, you can still access the files inside of ploop-based container
when it isn't running... simply by mounting it.  Is there an issue
with that?

Granted: It's probably more of a psychological or philosophical issue
than a technical one. Filesystem on a filesystem. That adds a level of
complexity and another potential point of failure. Yes, it has benefits.
But ease of access to anything while using simfs often seems to
outweigh that. Even if people don't use it for that reason: It adds
some peace of mind that they could - if they wanted to. Potential quota
and inode issues alone won't deter them from using simfs that way.


The biggest problem with simfs appears to be security. We have recently
found a few bugs (not in simfs per se, but in the kernel in general, 
i.e. these

are not our bugs for the most part) that can be exploited to escape
the simfs and let a container access the host file system. One single bug
like that should make everyone who is at least slightly concerned about
security move to ploop. And there were a few :(

Other "why not simfs" considerations are listed at 
http://openvz.org/Ploop/Why#Before_ploop
Of those, the biggest issue is the common filesystem journal. A single
container can issue a lot of operations (like file truncates) leading to
lots of journal updates, essentially blocking the rest of the containers
from doing anything that involves the journal. This is bad as it breaks
inter-container isolation to some degree.


Think of existing architectures where someone fiddled in some custom
backup scripts to do partial or complete VPS backups.


Well, this is just plain wrong. The correct way to access a VPS filesystem
(it doesn't matter if this is ploop or simfs or something else) is:

vzctl mount $CTID
cd /vz/root/$CTID # to prevent unmount say if a container is stopped
# work with data at /vz/root/$CTID
vzctl umount $CTID
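
As a concrete (and purely illustrative) example of that pattern -- a simple
tar backup of a stopped container; the CT ID and the /backup destination are
made up:

CTID=101
vzctl mount $CTID
cd /vz/root/$CTID        # hold the mount while we work
tar czf /backup/ct-$CTID-$(date +%F).tar.gz .
cd /                     # leave the mount point before unmounting
vzctl umount $CTID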


Sure, it's
possible with ploop, too. But it would require tampering with something
that ain't broken to begin with.


I would say it is broken even for simfs. For example, if you change something
in /vz/private/$CTID, such as adding, deleting, or modifying any files, these
changes will not be reflected in vzquota, leading to wrong disk space usage,
and the limits won't work as they should.

You could argue that accessing /vz/private/$CTID read-only is OK from the
perspective of vzquota -- I'd say it is just a wrong assumption that container
files can be accessed from /vz/private/$CTID -- it just happens to work with
the current simfs and OpenVZ.



Don't get me wrong: I like ploop and have no issue with it. Other
virtualization solutions (containerized or not) also usually have their
own filesystem for VMs or containers and direct access to that from the
HN also needs some crutches or workarounds. Still: I'd be fighting an
uphill battle if I were to introduce it as the default for my users.
Somehow ploop doesn't provide a "killer-feature" that would help me in
convincing those that either don't know ploop yet, or haven't used it
yet. But maybe someone has some talking points that would help me to win
some hearts and minds there?


http://openvz.org/Ploop/Why

In addition to everything I said earlier, worth noting are ploop snapshots
(and easy consistent backups!), NFS support, and efficient live migration
using ploop copy.
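
For example, a consistent online backup via a ploop snapshot can be as simple
as the sketch below (CT ID, snapshot ID and destination are illustrative; see
the Ploop wiki pages for the exact set of delta files worth copying):

CTID=101
SNAP=$(uuidgen)
vzctl snapshot $CTID --id $SNAP --skip-suspend --skip-config
rsync -a /vz/private/$CTID/root.hdd/ /backup/ct-$CTID/root.hdd/
vzctl snapshot-delete $CTID --id $SNAP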



To get back to the original question that Sergey asked: He maybe asked
because they're considering to eventually drop simfs support. Because
that's how I'd test the waters if I were to retire some legacy features
from my own projects. To that I humbly say: Please don't. We like that
ugly duckling and would like to keep it. Alternatively: Give us a really
good reason (or a "killer-feature") that makes it a "must have" item.



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] IPsec

2015-07-14 Thread Kir Kolyshkin

You need to use modprobe, not insmod, as the latter
requires the path to a .ko file, not the module name.

[root@tpad-ovz1 ~]# lsmod | grep af_key
[root@tpad-ovz1 ~]# modprobe af_key
[root@tpad-ovz1 ~]# lsmod | grep af_key
af_key 30067  0
[root@tpad-ovz1 ~]# uname -a
Linux tpad-ovz1 2.6.32-042stab109.4 #1 SMP Fri May 8 15:31:07 MSK 2015 
x86_64 x86_64 x86_64 GNU/Linux

[root@tpad-ovz1 ~]#
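
To load the whole set the wiki page lists in one go (module names are taken
from https://openvz.org/IPsec; making them load on boot is distro-specific
and not shown here):

for m in af_key esp4 esp6 xfrm4_mode_tunnel xfrm6_mode_tunnel; do
    modprobe $m
done
lsmod | egrep 'af_key|esp|xfrm'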

On 07/14/2015 03:24 PM, Rene C. wrote:


https://openvz.org/IPsec suggest

  * Kernel 042stab084.8 or later
  * The following kernel modules must be loaded before container start:

|af_key esp4 esp6 xfrm4_mode_tunnel xfrm6_mode_tunnel|


but this throws an error:

insmod: can't read 'af_key': No such file or directory

Kernel is 2.6.32-042stab108.2

What gives?




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Beancounter/Accounting file question

2015-07-08 Thread Kir Kolyshkin



On 07/08/2015 10:11 AM, Mark Johanson wrote:

I was curious: when is the beancounter/accounting ( /proc/bc ) information reset?
I.e. daily, weekly, monthly, only after a reboot, since vm creation, etc.?


It is reset once a container is down and usage is zero.


Have one user who appears to have about 532G in writes, which I am pretty sure 
is not a daily, or a weekly. Perhaps monthly?


Not sure what you mean by "writes".


So I am trying to get some tracking on some of our heavy i/o writing users
(as well as some other items) and can't find information on how often that 
information is reset.



Oh, you probably mean ioacct or iostat. Both are described on the wiki:
 https://openvz.org/IO_statistics
 https://openvz.org/IO_accounting

Same as for beancounters, these are reset once CT is stopped.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Startup problem

2015-07-07 Thread Kir Kolyshkin

On 07/07/2015 02:08 AM, Francois Martin wrote:

I just installed Kolab 3.4 on a VM OpenVZ (CentOS 7.1.1503).
Everything works fine apart from the amavisd service.
It can not start.
Result, emails are blocked in the postfix queue.
If I disable the use of amavis-new in postfix, emails can circulate.

The error code when running the service:
code = exited, status = 227 / NO_NEW_PRIVILEGES

What can I do to fix this ?



I tried to reproduce the problem you have and was only able to do so
when running a very old (about 1 year old) OpenVZ kernel, 042stab092.3.

When using a newer OpenVZ kernel, 042stab109.4, the problem is gone.

I am not sure what kernel you run, but I guess you just need to upgrade it
(or ask your service provider to do so). If that doesn't help, try yum update
inside your container.

Thanks,
  Kir.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] distribute template packages with yum

2015-06-22 Thread Kir Kolyshkin

The situation is going to be improved as the package management tools
(earlier known as vzpkg) will be a part of Virtuozzo 7, in fact the sources
are already there:
   https://src.openvz.org/projects/OVZ/repos/vztt/browse

as well as some templates metadata
  https://src.openvz.org/projects/OVZT

Once all this is ready, metadata will be distributed as rpm packages
and the caches will be created locally (vzpkg cache).

Please don't hold your breath, it's still not ready to be used,
but soon it will be.

On 06/19/2015 01:57 PM, Scott Dowdle wrote:

Greetings,

- Original Message -

for creating rpm-files you need rpmbuild and spec-file.

for example:

$ cat vz-template-centos-7-x86_64-minimal.spec

Name: vz-template-centos-7-x86_64-minimal
Version: 2015.06.16
Release: 1%{?dist}
License: TBD
Summary: TBD
Source0:
http://download.openvz.org/template/precreated/centos-7-x86_64-minimal.tar.gz
%description
TBD
%install
rm -rf $RPM_BUILD_ROOT
install -p -D -m0644 %{SOURCE0}
$RPM_BUILD_ROOT/vz/template/cache/centos-7-x86_64-minimal.tar.gz

%files
%defattr(-,root,root,-)
/vz/template/cache/centos-7-x86_64-minimal.tar.gz

$ rpmbuild -ba vz-template-centos-7-x86_64-minimal.spec

as a result you will get

vz-template-centos-7-x86_64-minimal-2015.06.16-1.el7.openvz.x86_64.rpm

and the file /vz/template/cache/centos-7-x86_64-minimal.tar.gz
can now be installed, upgraded, removed, ... via rpm and yum.

# yum install vz-template-centos-7-x86_64-minimal

# yum update

When the package is just a big .tar.gz|xz file... and you have to build all new 
rpm files every time there are OS template updates... sounds like a lot of 
overhead.  Feel free to do it for yourself.  No one is going to stop you.


vztmpl-dl is a package manager without dependency information provided.
This is not fine.

Also, updating the system to the current state requires
running two commands instead of only one ("yum update").
This also is not fine.

Umm, we really do NOT want to encourage people running older kernels and 
versions of the vz utils.  Everyone is recommended to run the latest stable 
releases.

I just use rsync to grab the OS Templates myself.

TYL,


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Panic booting 2.6.32-042stab108.2 on Centos 6.

2015-06-08 Thread Kir Kolyshkin



On 06/07/2015 11:58 PM, Rene C. wrote:

Ok, done.

I noticed in grub.conf that unlike all the previous kernel 
entries, stab108.2 doesn't have any initrd line - is that normal?


No it is not. Maybe you want to file a separate bug for that.

In general there's no need to do full kernel reinstall, as you can add 
the missing line
manually -- if initrd file for the kernel is there, just add the 
appropriate line.


If initrd is not there, you need to generate it first (use dracut), and 
and then add the line.
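
For example, something along these lines should work on a CentOS 6 host (the
kernel version string is assumed from this thread -- check rpm -q vzkernel
for the exact one):

KVER=2.6.32-042stab108.2
dracut -f /boot/initramfs-$KVER.img $KVER
grep -A3 042stab108.2 /boot/grub/grub.conf   # verify/add the initrd line
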
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Docker inside Openvz CT: equivalent to systemd params under sysv init?

2015-05-26 Thread Kir Kolyshkin



On 05/24/2015 03:47 AM, Benjamin Henrion wrote:

Hi,

I am trying to make this work:

https://openvz.org/Docker_inside_CT

It mentions:


"Configure custom cgroups in systemd:"

systemd reads /proc/cgroups and mounts all cgroups enabled there,
though it doesn't know there's a restriction that only freezer,devices
and cpuacct,cpu,cpuset can be mounted in container, but not freezer,
cpu etc. separately
vzctl mount $veid
echo "JoinControllers=cpu,cpuacct,cpuset freezer,devices" >>
/vz/root/$veid/etc/systemd/system.conf


I am trying to make it work with a Debian Wheezy CT without systemd,
any idea how to do the same with the old sysvinit?



Something like this should do:

# mount -t tmpfs tmpfs /sys/fs/cgroup
# mkdir /sys/fs/cgroup/freezer,devices
# mount -t cgroup cgroup /sys/fs/cgroup/freezer,devices -o freezer,devices
# mkdir /sys/fs/cgroup/cpu,cpuacct,cpuset
# mount -t cgroup cgroup /sys/fs/cgroup/cpu,cpuacct,cpuset/ -o 
cpu,cpuacct,cpuset


(Surely, this can be put into /etc/fstab to make it permanent)

Note, some other cgroup mounts can be needed, too (memory, blkio).
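
For reference, the same thing in loop form, e.g. for a small boot script that
runs before the Docker init script (the cgroup set below is just the minimum
from above -- extend it with memory, blkio, etc. as needed):

mount -t tmpfs tmpfs /sys/fs/cgroup
for grp in freezer,devices cpu,cpuacct,cpuset; do
    mkdir -p /sys/fs/cgroup/$grp
    mount -t cgroup cgroup /sys/fs/cgroup/$grp -o $grp
done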



Otherwise I am hitting that error in the docker daemon log:


ERRO[0040] Handler for POST /containers/{name:.*}/start returned
error: Cannot start container
197ffe32571575340ec3f399349d62c42818bb9c2a45656bfaafc22c68a55055: [8]
System error: mountpoint for cpu not found
ERRO[0040] HTTP Error: statusCode=500 Cannot start container
197ffe32571575340ec3f399349d62c42818bb9c2a45656bfaafc22c68a55055: [8]
System error: mountpoint for cpu not found


Best,

--
Benjamin Henrion 
FFII Brussels - +32-484-566109 - +32-2-4148403
"In July 2005, after several failed attempts to legalise software
patents in Europe, the patent establishment changed its strategy.
Instead of explicitly seeking to sanction the patentability of
software, they are now seeking to create a central European patent
court, which would establish and enforce patentability rules in their
favor, without any possibility of correction by competing courts or
democratically elected legislators."


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] gcc Version

2015-05-20 Thread Kir Kolyshkin



On 05/19/2015 04:56 PM, Gena Makhomed wrote:

On 19.05.2015 2:31, Kir Kolyshkin wrote:


could you please explain the problem in more details?



is there a special reason why you use a different gcc Version for the
kernels you provide
cat /proc/version
Linux version 2.6.32-042stab106.4 (root@kbuild-rh6-x64) (gcc version
4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) ) #1 SMP Fri Mar 27 15:19:28
MSK 2015
that either CentOS6 or RHEL6 ship with:
gcc -v
gcc-Version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)


problem is difference between gcc 4.4.6 20120305 and gcc 4.4.7-11


Can you please be more specific of what the exact problem is
(unfortunately I forgot my crystal ball at home today)?


gcc 4.4.6-4 contains many bugs which are already fixed in 4.4.7-11,

and these gcc bugs can introduce bugs into the OpenVZ kernel
compiled with this very old and buggy gcc version.


Theoretically, you are right -- an updated gcc version should be better,
less buggy, etc.

Practically, we are not aware of a single issue with gcc 4.4.6 and the
RHEL6-based kernel -- neither in our kernel, nor in upstream (i.e. RHEL6).
We saw some bugs in the past that were caused by gcc miscompiling the kernel
source code, but it was always a version of gcc not from RHEL.



more specific:

https://gcc.gnu.org/bugzilla/buglist.cgi?bug_status=RESOLVED&resolution=FIXED&target_milestone=4.4.7 



- list of problem reports (PRs) from GCC's bug tracking system that 
are known to be fixed in the 4.4.7 release. This list might not be 
complete (that is, it is possible that some PRs that have been fixed 
are not listed here)


also see output of # rpm -q --changelog gcc | less
to see Red Hat backports and fixes over gcc version 4.4.7


If you could point out any fixes made specifically to fix kernel-related
problems, we'd be very grateful. Otherwise, this becomes a "would be nice
to use updated gcc" rather than an "it's a must to use updated gcc" sort
of thing.

Having said that, we'll probably update our build environments anyway.

Kir.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] OpenVZ templates

2015-05-20 Thread Kir Kolyshkin



On 05/19/2015 04:46 PM, Gena Makhomed wrote:

On 19.05.2015 4:50, Kir Kolyshkin wrote:


In the CentOS 7 OpenVZ template the default target is also not
multi-user, and it has to be manually switched via the command line:

# systemctl set-default multi-user.target

But why default target in OpenVZ templates is not multi-user.target ?


Please file a bug.


ok, done: https://bugzilla.openvz.org/show_bug.cgi?id=3243

build scripts for creating OpenVZ templates are now closed source.


It's not "build scripts", it's the whole vzpkg infrastructure. It's not
closed source either; the situation is more complicated. Initially vzpkg
appeared in OpenVZ and it was forked in Virtuozzo. Later it went into an
abandoned/non-maintained state in OpenVZ because of a lack of
resources/contributors to keep it alive.


do any plans exist to make these build scripts open source?


Yes, as the Virtuozzo userspace will be published, vzpkg will be published
as well.

This should happen later this year, as per our previous announcements.




mainly for audit all changes made in OpenVZ templates,
second reason - for creating own OpenVZ templates.

this feature already exists and open source in Docker,
and it is very useful for Docker users.



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] systemctl set-default multi-user.target

2015-05-19 Thread Kir Kolyshkin



On 05/18/2015 05:22 PM, Gena Makhomed wrote:

On 19.05.2015 2:46, Kir Kolyshkin wrote:


Also you probably want to set multi-user as a default systemd target
(if it is not set that way already):

# Set default target as multi-user target
rm -f lib/systemd/system/default.target
ln -s multi-user.target lib/systemd/system/default.target
mkdir -p etc/systemd/system/default.target.wants


In the CentOS 7 OpenVZ template the default target is also not
multi-user, and it has to be manually switched via the command line:

# systemctl set-default multi-user.target

But why default target in OpenVZ templates is not multi-user.target ?



Please file a bug.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] gcc Version

2015-05-18 Thread Kir Kolyshkin



On 05/18/2015 04:05 PM, Gena Makhomed wrote:

On 18.05.2015 12:08, Vasily Averin wrote:


could you please explain the problem in more details?



is there a special reason why you use a different gcc Version for the
kernels you provide
cat /proc/version
Linux version 2.6.32-042stab106.4 (root@kbuild-rh6-x64) (gcc version
4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) ) #1 SMP Fri Mar 27 15:19:28 
MSK 2015

that either CentOS6 or RHEL6 ship with:
gcc -v
gcc-Version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)


problem is difference between gcc 4.4.6 20120305 and gcc 4.4.7-11:


Can you please be more specific of what the exact problem is
(unfortunately I forgot my crystal ball at home today)?



# rpm -q --changelog gcc | less

* Wed Aug 27 2014 Marek Polacek  4.4.7-11
- support passing non-POD through ... (#1087806)

* Wed Aug 06 2014 Jakub Jelinek  4.4.7-10
- backport two further OpenMP 4.0 libgomp tasking fixes (#1099549)
- fix scheduler wrong-code with DEBUG_INSNs containing volatile 
ASM_OPERANDS

  (#1127118, PR rtl-optimization/61801)

* Fri Jul 04 2014 Jakub Jelinek  4.4.7-9
- make sure conversion between aggregates with different
  sizes is never considered useless (#1113878)
- fix gfortran ICE on old-style initialization of derived type
  components (#1113793)

* Mon Jun 23 2014 Marek Polacek  4.4.7-8
- fix up last change

* Thu Jun 19 2014 Marek Polacek  4.4.7-7
- limit warning about passing objects of non-POD type (#1087806)

* Wed May 21 2014 Jakub Jelinek  4.4.7-6
- backport OpenMP 4.0 support to libgomp (library only; #1099549,
  PR libgomp/58691)

* Tue Apr 15 2014 Marek Polacek  4.4.7-5
- fix std::uncaught_exception (#1085442, PR c++/59224)
- don't nuke gcov files if any is missing (#1008798)
- don't ICE with missing template-header (#858351, PR c++/54653)
- don't ICE when creating an epilog for reduction (#875472)
- declare type_info (#1061435, PR libstdc++/56468)
- check elements for error_mark_node (#1027003, PR c++/43028)

* Thu Jul 18 2013 Jakub Jelinek  4.4.7-4
- fix diagnostics reporting with digraphs (#906234)
- handle %{, %| and %} in inline asm on multiple asm dialect
  targets as {, | and } rather than asm dialect separators (#908025)
- fix ICE in build_zero_init_1 with va_list data member in a class
  (#921758, PR c++/56403)
- ignore out of range registers in the unwinder (#959564, PR 
target/49146)

- fix ICE in fold_stmt on debug stmts (#967003)

* Thu Oct 18 2012 Jakub Jelinek  4.4.7-3
- don't emit srak instruction on s390{,x} if not compiling for
  z196 or later CPU (#867878)

* Tue Oct 09 2012 Jakub Jelinek  4.4.7-2
- update from gcc-4_4-branch
  - GCC 4.4.7 release
  - fix up -freorder-blocks-and-partition EH handling
(#821901, PR middle-end/45566)
- add -f{,no-}strict-enums option, make -fno-strict-enums the default 
for C++

  (#826882, PR c++/43680)
- lazily declare destructor when needed (#750545)
- fix graphite ICE with VIEW_CONVERT_EXPR (#820281)
- fix power6 ICE with bswap (#819100, PR target/53199)
- handle multilib options in gnatmake (#801144)
- fix up std::__uninitialized_copy (#808590)
- fix ICE in cp_tree_equal (#831832, PR c++/54858)
- use F_SETLKW locking for *.gcda not just in libgcov.a, but also
  in the compiler when reading it (PR gcov-profile/54487)

* Mon Mar 05 2012 Jakub Jelinek  4.4.6-4

=== 





___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Debian Jessie and 'vzctl start $CTID --wait'

2015-05-18 Thread Kir Kolyshkin

On 02/09/2015 02:06 AM, Roman Haefeli wrote:

Hi all

I'm doing some testing with Debian Jessie (8.0) Containers. Debian
Jessie comes with systemd as its default init system. I wonder what is
the correct way to tell vzctl when a container is done starting up. What
was a simple line in /etc/inittab for sysv-init, seems more complex in
systemd. Creating a systemd service unit that touches /.vzfifo is
actually not that difficult. However, I have troubles figuring out how
to make sure vz-init-done.service is started after all other services
have been started.

Has anyone experience with systemd-enabled OpenVZ containers? Are you
using --wait at all?


This is already implemented in official Debian 8 template. This is how 
it's done

(assuming you are in CT root directory):

# Create vzfifo service
cat >> lib/systemd/system/vzfifo.service << EOF
#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.

[Unit]
Description=Tell that Container is started
ConditionPathExists=/proc/vz
ConditionPathExists=!/proc/bc
After=multi-user.target quotaon.service quotacheck.service

[Service]
Type=forking
ExecStart=/bin/touch /.vzfifo
TimeoutSec=0
RemainAfterExit=no
SysVStartPriority=99

[Install]
WantedBy=multi-user.target
EOF

# Enable services
for service in vzfifo; do
systemctl enable $service > /dev/null 2>&1
done

Also you probably want to set multi-user as a default systemd target
(if it is not set that way already):

# Set default target as multi-user target
rm -f lib/systemd/system/default.target
ln -s multi-user.target lib/systemd/system/default.target
mkdir -p etc/systemd/system/default.target.wants
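
With that in place, "vzctl start $CTID --wait" should return only after the
container has actually finished booting; a quick (illustrative) check:

vzctl start 101 --wait && echo "CT 101 finished booting"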

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] gcc Version

2015-05-18 Thread Kir Kolyshkin

On 05/18/2015 06:50 AM, Benjamin Henrion wrote:

On Tue, Apr 14, 2015 at 6:33 PM, Joseph  wrote:

Hi OpenVZ Team,

is there a special reason why you use a different gcc Version for the
kernels you provide
cat /proc/version
Linux version 2.6.32-042stab106.4 (root@kbuild-rh6-x64) (gcc version
4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) ) #1 SMP Fri Mar 27 15:19:28 MSK 2015
that either CentOS6 or RHEL6 ship with:
gcc -v
gcc-Version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)

4.4 is super old.

I have hit some openvz bugs in the past because the kernel was
compiled with a more recent compiler.



4.4 is the gcc version used in Red Hat Enterprise Linux 6 (to compile their
kernel, among other things). As our kernel is based on the RHEL6 kernel,
we use the same gcc version they use to eliminate the possibility of
bugs caused by a different version of gcc.

In general, this is the way RHEL works -- they branch off some version
of software and only use that version (which naturally becomes "old" over
time), backporting bug fixes (and sometimes features) from newer versions,
in the hope that this way they can eliminate most of the bugs appearing in
newer versions of that software. Therefore we have kernel 2.6.32 (which has
in fact gone quite far from vanilla 2.6.32, with the only thing left being
the version number) and gcc 4.4.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] [Devel] vzctl PRE_CREATE hook

2015-05-18 Thread Kir Kolyshkin

Resending with users@ in Cc.

On 05/15/2015 03:24 PM, Kir Kolyshkin wrote:



On 05/12/2015 02:36 AM, Nikolay Tenev wrote:

Hello devs,

In my project I wanted to make every OpenVZ container use a separate
block device (HDD partition, LVM volume, NFS share, etc.) for its private
directory (VE_PRIVATE). Using a wrapper script over vzctl was one option,
but a PRE_CREATE hook, in which to create, mkfs and mount an LVM volume,
would be even better.


So, I'm not a developer, but using code from the POST_CREATE hook I was
able to create the PRE_CREATE one, which can be used like the other
hooks, e.g.

add in /etc/vz/dists/default
PRE_CREATE = precreate.sh

and during
vzctl --create ...

it will call /etc/vz/dists/scripts/precreate.sh with VEID as argument


Nope. These scripts are per-distribution scripts, i.e. they are 
targeted for

various distro-specific things, such as setting IP addresses etc.

What you need is a global script, not dependent on CT distro. I suggest
a precreate.sh script similar to prestart.sh one (for details, see commit
https://github.com/kolyshkin/vzctl/commit/0807ef4)



Currently I have a patch to vzctl master branch which implements this 
PRE_CREATE hook and I'm ready to share it.


So my questions are:
- Do you find this for interesting and/or useful?
- If 'yes', what is the right way to send this patch: here, by email; 
or to create pull request in git repo?


The best way would be to redo it as advised above and send a patch to the
devel@ list.


Thanks,
  Kir.



Best regards!

Nikolay Tenev




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Solar Designer Audit 2005

2015-05-13 Thread Kir Kolyshkin

On 05/12/2015 02:04 AM, a...@keemail.me wrote:

Hello!
I'm interested in the security audit performed by Solar Designer in 
2005, which is mentioned in the "Security" section of the openvz website.


Is there a reason why it's still not publicly available?


It was never meant to be released to the general public, it was an 
internal audit.


Having said that, I can share some details I do remember. It was an OpenVZ 
2.6.8-based kernel,
and Solar used a few different techniques, both advanced (like fuzzy 
syscall testing) and
simple (good ol' source code reading). He was able to find one bug 
specific to OpenVZ,
which was immediately fixed, and three security vulnerabilities that 
were not
OpenVZ-specific and came from the upstream kernel -- those were also 
reported,

fixed in upstream and backported to our kernel. That's pretty much it.

Note Solar also uses OpenVZ kernels in Openwall GNU/*/Linux distro 
(http://www.openwall.com/Owl/).


Kir.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] directory /.cpt_hardlink_dir_a920e4ddc233afddc9fb53d26c392319

2015-05-12 Thread Kir Kolyshkin



On 05/12/2015 03:47 AM, Gena Makhomed wrote:

Hello, All!

an empty directory like /.cpt_hardlink_dir_a920e4ddc233afddc9fb53d26c392319
appears inside each container - is this a bug or a feature?

if this is a bug - will it be fixed in new releases?

if this is a feature - how can I use it and how can I disable it?



The answer can easily be obtained by looking into git or bugzilla or 
source code,

but let me save you the hassle.

Yes, this is the correct behavior, explained in the following commit:

https://github.com/kolyshkin/vzctl/commit/09e974fa3ac9c4a1

Currently the only way to disable it, I guess, is to recompile vzctl. Note
that you will then lose the ability to checkpoint/suspend a container
that has no free disk space.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Debian 8 packages for OpenVZ as a HN?

2015-05-12 Thread Kir Kolyshkin

On 05/12/2015 02:00 AM, Jonathan Ballet wrote:



Hi,

On 05/10/2015 05:05 PM, Pavel Odintsov wrote:

Well, well. But why did my 2.6.32 kernel become broken after changing
Wheezy's init system to systemd? The standard 3.2 kernel from Debian works
perfectly with systemd.

Even if this problem is not related to kdbus, it still breaks the
ability to run the OpenVZ kernel on modern distros.

If somebody fixes this issue I will be very pleased.


I tried booting one of the latest OpenVZ release on Debian Jessie with 
systemd, and I got the error:


  Failed to mount /sys/fs/cgroup: No such file or directory.

This is related to this bug :

  https://bugzilla.redhat.com/show_bug.cgi?id=628004

Basically (AFAIU), the OpenVZ kernel lacks support for mounting 
cgroups at the required place, which makes systemd choke when starting 
up in the early phase.
As mentioned in the bug report above, applying the patch from [1] 
might be enough to "fix" the problem, but I haven't tested this one 
and I'm not sure about the consequences.


I'm afraid you are wrong here; at least it's not the case for me --
with 2.6.32-042stab108.1 kernel and CentOS 7 OS template.

See:

[root@tpad-ovz1 ~]# vzctl enter 101
entered into CT 101
[root@efgh /]# ls -l /sys/fs
total 0
drwxr-xr-x 10 root root 200 Apr 28 01:04 cgroup
[root@efgh /]# ls -l /sys/fs/cgroup/
total 0
drwxr-xr-x 2 root root  0 Apr 28 01:04 blkio
drwxr-xr-x 2 root root 40 Apr 28 01:04 cpu,cpuacct
drwxr-xr-x 2 root root 40 Apr 28 01:04 cpuset
drwxr-xr-x 2 root root 40 Apr 28 01:04 freezer
drwxr-xr-x 2 root root  0 Apr 28 01:04 memory
drwxr-xr-x 2 root root 40 Apr 28 01:04 net_cls,net_prio
drwxr-xr-x 2 root root 40 Apr 28 01:04 perf_event
drwxr-xr-x 4 root root  0 Apr 28 01:04 systemd

[root@efgh /]# mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup 
(rw,nosuid,nodev,noexec,relatime,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/blkio type cgroup 
(rw,nosuid,nodev,noexec,relatime,blkio,name=beancounter)
cgroup on /sys/fs/cgroup/memory type cgroup 
(rw,nosuid,nodev,noexec,relatime,memory)


[root@efgh /]# logout
exited from CT 101

[root@tpad-ovz1 ~]# vzlist -o ostemplate 101
OSTEMPLATE
centos-7-x86_64-minimal

[root@tpad-ovz1 ~]# uname -a
Linux tpad-ovz1 2.6.32-042stab108.1 #1 SMP Thu Apr 23 19:17:11 MSK 2015 
x86_64 x86_64 x86_64 GNU/Linux





In the meanwhile, I removed systemd from my Jessie installations and 
switched back to sysvinit-core instead. It works, although that would 
be cool to be able to use systemd.


 Jonathan

[1] 
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=676db4af043014e852f67ba0349dae0071bd11f3


On Sun, May 10, 2015 at 2:40 PM, Marco d'Itri 
 wrote:
On May 10, Pavel Odintsov 
 wrote:



Unfortunately, we can't run the OpenVZ 2.6.32 kernel on top of a
systemd-aware system because it lacks the kdbus subsystem. But if you changed init

All upstream kernels lack kdbus, and systemd does not depend on it.

--
ciao,
Marco





___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] target VE_PRIVATE for vzmigrate

2015-05-07 Thread Kir Kolyshkin



On 05/07/2015 12:17 AM, Nick Knutov wrote:

Hello all,

I see it's possible now to use selected target VE_PRIVATE for vzmigrate
via changing /etc/vz/vz.conf on destination node -
https://bugzilla.openvz.org/show_bug.cgi?id=2523 (and it works - I checked)

But I'd like to specify destination VE_PRIVATE as a parameter to
`vzmigrate`. Is it possible?
(I know I can edit source, just want to check is it already implemented
while I can't find it)


Should be pretty easy to implement. I suggest the options be named
--remote-root and --remote-private; in case they are set, their
values are to be used as VE_ROOT_REMOTE and VE_PRIVATE_REMOTE,
and they should also be saved to the remote ve.conf.

Feel free to send patches.

Kir.
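
Just to make the proposal concrete, a purely hypothetical invocation (these
options do not exist yet) could look like:

vzmigrate --remote-private /vz2/private/101 --remote-root /vz2/root/101 dest-node 101
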
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] disable getty start for Debian 8 with systemd

2015-05-04 Thread Kir Kolyshkin



On 05/03/2015 09:14 AM, CoolCold wrote:

Hello!
Just installed Debian 8 "Jessie" in VE. Used almost the same template 
creation script as for Squeeze and started cleaning some things up.


One particular thing I've noted is agetty processes being started for 
tty{1..6} via systemd. Some lurking and asking on #systemd@freenode 
revealed several possible solutions:
1) completely disable (note that you should use "mask" command) the 
service, like:

systemctl mask getty-static.service
2) use conditional in Unit definition
/lib/systemd/system/getty-static.service
ConditionVirtualization=!openvz

(i'd suggest to use ConditionVirtualization=!lxc for LXC too)

So, knowledge is shared, now the question part.
1) to OpenVZ devs - are there any chances ttys (except console) may be 
needed?


Yes, you can use vzctl console $CTID $TTYNUM to get a console for those.
In our recent templates we have getty on tty1 and tty2 configured
(just in case /dev/tty1 is screwed you can have it on /dev/tty2).

We were disabling it before because there was no /dev/tty virtualization
and getty processes kept respawning, leading to wasted CPU time and some
nasty log messages ("getty respawning too fast, disabling for 5 minutes"
or so, if I remember correctly). Nowadays there's no need to disable it.


2) to Ola Lundqvist, what do you think, is it worth filing a bug
against Debian's systemd package?




--
Best regards,
[COOLCOLD-RIPN]




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Ploop Incremental backups strategy

2015-04-01 Thread Kir Kolyshkin



On 04/01/2015 05:22 AM, Simon Barrett wrote:

Hi all,

Is there any reason why I should not create a snapshot of a ploop-backed 
container each day (or hour, for that matter), then merge all 
outstanding snapshots at the end of the week (vzctl snapshot-delete) 
and compact it?  This would allow me to do efficient incremental 
backups (using bacula, in this case), and I would imagine 
it's a core use case.


Are there any potential risks to be aware of (performance issues, 
storage usage, corruption exposure)?


I am not aware of any, except a limitation of no more than 126 deltas
mounted (i.e. the distance from the top to the base delta should be < 127).
If you hit the limit, the snapshot won't be created. So, hourly snapshots
with a weekly delete won't work, as 24*7 > 127.
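
So a daily "vzctl snapshot $CTID" plus a weekly merge-and-compact step along
the lines of the sketch below stays far below that limit (CT ID is
illustrative):

CTID=101
for id in $(vzctl snapshot-list $CTID -H -o uuid); do
    vzctl snapshot-delete $CTID --id "$id"
done
vzctl compact $CTID
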
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] !!!WARNING!!! network not accessible after updating CentOS 7.0 inside container to version CentOS 7.1

2015-03-31 Thread Kir Kolyshkin



On 03/31/2015 02:53 PM, Gena Makhomed wrote:


!!!WARNING!!! the network is not accessible
after updating CentOS 7.0 inside a container to CentOS 7.1
via # yum update ; reboot

cause:

# tail /var/log/messages:

network: Bringing up loopback interface:  [  OK  ]
network: Bringing up interface venet0:  arping: Device venet0 not 
available.
network: Determining if ip address 172.22.22.108 is already in use for 
device venet0...

network: arping: Device venet0 not available.
network: ERROR: [/etc/sysconfig/network-scripts/ifup-aliases] 
Error, some other host already uses address 172.22.22.108


/etc/sysconfig/network-scripts/ifup-aliases: Error, some other host 
already uses address 172.22.22.108.




manual workaround: (after each container reboot)

[root@hardware-node]# vzctl enter 108
entered into CT 108
[root@container-108]# ip addr add 172.22.22.108 dev venet0

we need a better workaround or a complete solution for this regression


A better workaround is to apply this patch:
http://git.openvz.org/?p=vzctl;a=commitdiff;h=24a0a40277542fba5b81
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Docker inside an OpenVZ container

2015-03-24 Thread Kir Kolyshkin

On 03/24/2015 06:37 AM, Pavel Snajdr wrote:

I have two additional questions, I'm not sure whether you're bound by
NDA so that you can/can't answer them, but it would mean a great deal to
me if you could:

- do you guys at Parallels have access to separated-out patches for RHEL
kernels / to their git?
Because if you don't, then you guys are real heros for doing your work
on top of RHEL kernel. And if you do, that would mean that backporting
the shrinker patches from 3.12 shouldn't be too hard - and I would like
to help, if possible.


We don't have access to internal red hat git repo or their split-out 
patches.

Last time I heard from our kernel guys it's not a big deal.

We have a plan to publish our kernel git repo soon, and move the kernel dev
discussions to the public mailing list (I guess devel at openvz.org 
initially).

Once it happens it will be easier for you to join the development.


- do you have an ETA for releasing any RHEL7 vz kernel preview? I would
be interested in testing as soon as you have anything, even if it's the
most unstable thing in the world.


Can't tell any ETAs but it will definitely happen this year.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Automated vzctl compact triggers a kernel panic.

2015-03-23 Thread Kir Kolyshkin

On 03/23/2015 04:45 PM, Jean-Pierre Abboud wrote:

We had an issue this weekend where one of our nodes crashed (kernel
panic) unfortunately due to some issues connecting to DRAC console I was
not able to capture the error.

The timing of the crash coincides with a cron we've setup to run every 4
hours to compact all containers. I know we had an older kernel (328 days
uptime) so I'm wondering if you recommend running the compact command
automatically and at what frequency.

To add to my last email, we do have kernelcare installed so the kernel
was patched.


Currently we are not aware of any open issues similar to yours. It doesn't
matter how often you run compact; it should work without any problems.
Making it less frequent may hide the problem, but it won't be solved, so
it's not the way to go. Anyway, without seeing the kernel oops I can't say
much.

I see two options for you here:

1. Install the latest stable kernel and do a proper reboot (you can live
migrate containers to other servers before doing it, and migrate them back
after), try to reproduce the bug, this time catching the kernel messages,
and file a bug in bugzilla.

2. Escalate it to kernelcare team -- perhaps they haven't applied or
misapplied some of our patches (this is just my speculation). As you
are paying customer with them they should take you seriously, and
I heard their attitude is very good.

Kir.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Docker inside an OpenVZ container

2015-03-23 Thread Kir Kolyshkin



On 03/23/2015 03:12 AM, Benjamin Henrion wrote:

On Mon, Mar 23, 2015 at 10:55 AM, Narcis Garcia  wrote:

As I read from Ubuntu/Debian package (version 0.9.1):

Docker complements kernel namespacing with a high-level API which
operates at the process level. It runs unix processes with strong
guarantees of isolation and repeatability across servers.

Docker is a great building block for automating distributed systems:
large-scale web deployments, database clusters, continuous deployment
systems, private PaaS, service-oriented architectures, etc.

This package contains the daemon and client. *Using docker.io on
non-amd64 hosts is not supported at this time*. Please be careful when
using it on anything besides amd64.

Also, note that *kernel version 3.8 or above is required* for proper
operation of the daemon process, and that any lower versions may have
subtle and/or glaring issues.

Redhat backported a lot of LXC features to 2.6.32, so that's one of
the reasons you can run docker/lxc on top of the openvz kernel.


In addition to that, we did a significant amount of kernel work
to allow running Docker inside our containers.

In general, OpenVZ kernel version (which is 2.6.32) has very little
to do with vanilla 2.6.32, so this number doesn't really mean anything
except that Red Hat kernel team branched their kernel off this
version when they started working on RHEL6.

Currently this is 2.6.32 plus tons of patches from Red Hat plus
a pretty big patchset from OpenVZ. In particular, we make sure all the
recent distros work inside containers, so sometimes we have to backport
some new syscall or other feature from recent kernels.

From time to time I see people saying the OpenVZ kernel is very old and
obsolete. It happens because they judge by the label, and the label starts
with 2.6.32. Indeed, 2.6.32 is a very old kernel, but as I explained above,
our kernel has very little to do with 2.6.32.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] Can't start CT (was: Re: [Announce] Kernel RHEL6 testing 042stab105.13)

2015-03-17 Thread Kir Kolyshkin



On 03/17/2015 11:52 AM, Kir Kolyshkin wrote:



On 03/17/2015 05:57 AM, linuxthefish wrote:

Hello,

Please set OpenVZ to use simfs not buggy ploop, there are no end of
issues with ploop...


Hello Edmund,

I am very sorry to hear you have issues with OpenVZ. I hope you will 
find the following

information helpful.

1. There are two ways to "set OpenVZ to use simfs".
1.1 The default layout (simfs or ploop) for future CTs is set by the
VE_LAYOUT variable in /etc/vz/vz.conf.

1.2 For vzctl create, you can specify the --layout option.
For more info on this, please see vzctl(8) and vz.conf(5) man pages.

2. The error you see is not related to ploop in any way, it's pretty clear.
It could be something wrong either with the files in the container (like
init can't find its configuration and dies) or with the CT config (say,
resource limits are way too tight).
You can try using vzctl console to see the boot process, and check the 
in-CT files

for obvious problems (I'd run rpm -Va in a chroot, for example).

3. Let me assure you that ploop is pretty stable and is used by lots of
users on many installations, big and small, without any major problems. It
was not set as the default until proven stable. As you reported you are
having no end of issues with ploop, let me suggest you use
https://bugzilla.openvz.org/ to report those (or, if you already did,
please kindly let me know the IDs of the bugs you filed).

4. The obvious mistake is that you ran fsck on the whole device rather
than on a partition; this is why it can't find the superblock. Also, fsck
is run during mount, so it would tell you if something were wrong.

5. Please don't use any personal emails for it, it's a bad practice, as
5.1. the developers-to-users ratio is very far from being 1:1 -- it's not
scalable
5.2. other users might know answers to your question and help with your
problem
5.3. other users might benefit from the discussion now or in the future

6. If you need help, possible options are listed at 
http://openvz.org/Support.

I would start with users@ mailing list (which I Cc this reply to).


One more thing I forgot to add -- replying to an irrelevant email is not a
good email practice. It's even worse if you don't change the email subject
(I have to admit I fell for the last one, too -- fixing it now).

Kir.



Finally, thank you for using OpenVZ!

Regards,
  Kir.



[root@tx1v2 ~]# vzctl start 14082
Starting container...
Unmounting file system at /vz/root/14082
Unmounting device /dev/ploop39756
Container is unmounted
Opening delta /vz/private/14082/root.hdd/root.hdd
Adding delta dev=/dev/ploop39756 
img=/vz/private/14082/root.hdd/root.hdd (rw)
Mounting /dev/ploop39756p1 at /vz/root/14082 fstype=ext4 
data='balloon_ino=12,'

Container is mounted
Adding IP address(es): 23.95.108.76 23.95.108.77 23.95.108.80 
23.95.108.81

Setting CPU limit: 400
Setting CPU units: 1000
Setting CPUs: 4
Container start failed (try to check kernel messages, e.g. "dmesg | 
tail")

Killing container ...
Container was stopped
Unmounting file system at /vz/root/14082
Unmounting device /dev/ploop39756
Container is unmounted
[root@tx1v2 ~]# dmesg | tail
[5939781.955260] CT: 14082: started
[5939782.521456] CT: 14082: stopped
[5939925.659518] CT: 14082: started
[5939930.024759] CT: 14082: stopped
[5939972.917881]  ploop39756: p1
[5939973.128541]  ploop39756: p1
[5939973.557214] EXT4-fs (ploop39756p1): mounted filesystem with
ordered data mode. Opts:
[5939973.574258] EXT4-fs (ploop39756p1): loaded balloon from 12 (0 
blocks)

[5939973.581336] CT: 14082: started
[5939978.004668] CT: 14082: stopped
[root@tx1v2 ~]# ploop mount 
/vz/private/14082/root.hdd/DiskDescriptor.xml

Opening delta /vz/private/14082/root.hdd/root.hdd
Adding delta dev=/dev/ploop39756 
img=/vz/private/14082/root.hdd/root.hdd (rw)

[root@tx1v2 ~]# fdisk -l /dev/ploop39756

WARNING: GPT (GUID Partition Table) detected on '/dev/ploop39756'! The
util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/ploop39756: 161.1 GB, 161061273600 bytes
255 heads, 63 sectors/track, 19581 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x

Device Boot  Start End  Blocks   Id System
/dev/ploop39756p1   1   19582   157286399+  ee GPT
Partition 1 does not start on physical sector boundary.
[root@tx1v2 ~]# e2fsck /dev/ploop39756
e2fsck 1.41.12 (17-May-2010)
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open 
/dev/ploop39756


The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate 
superblock:

 e2fsck -b 8193 


Re: [Users] [Announce] Kernel RHEL6 testing 042stab105.13

2015-03-17 Thread Kir Kolyshkin
e device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
 e2fsck -b 8193 

Thanks,
Edmund

On 11 March 2015 at 22:08, Kir Kolyshkin  wrote:

OpenVZ project released an updated RHEL6 based testing kernel.
Read below for more information.

NOTE: it is recommended to test this kernel on some of your servers
as it (or its successor) will appear in rhel6 (stable) branch
some time later.


Changes and Download

* Stability and Docker related fixes

For detailed changelog and downloads, see:
https://openvz.org/Download/kernel/rhel6-testing/042stab105.13


Bug reporting
=
Use http://bugzilla.openvz.org/  to report any bugs found.


Other sources of info on updates

See http://wiki.openvz.org/News  to view all the news (including updates)
online. There you can also find RSS/Atom feed links.


Regards,
   OpenVZ team
___
Announce mailing list
annou...@openvz.org
https://lists.openvz.org/mailman/listinfo/announce


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Upgrade libc6 on containers deployed with old Ubuntu 12.04 template

2015-01-29 Thread Kir Kolyshkin


On 01/29/2015 11:18 AM, Ovidiu Calbajos wrote:

Hello,

Today one of our customers informed us that his container was not able
to upgrade to the latest libc6 in order to avoid the GHOST
vulnerability CVE-2015-0235. Searching the internet did not turn up any
positive result. What I found out was that libc-bin, libc6, libc6-dev
and libc-dev-bin were configured to be downloaded from a PPA
( http://ppa.launchpad.net/izx/ovz-libc/ubuntu ) and not from
Canonical's update servers. So the fix was to remove the files
/etc/apt/sources.list.d/izx-ovz-libc-precise.list and
/etc/apt/preferences.d/99ovz-libc-pin and do an update/upgrade on the
VPS, which resulted in updating libc6 to the latest version
available for Ubuntu 12.04, which is 2.15-0ubuntu10.10.
Hopefully this message will help other users update the libc6 library
on their containers or their customers' containers.


Best regards,
Ovidiu Calbajos



Interesting. Where did you get that template? The official OpenVZ template
doesn't have this PPA configured; it uses the standard glibc from the
default repo.


Kir.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] simfs with ploop?

2015-01-21 Thread Kir Kolyshkin
no=12,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group
0 0
/dev/simfs /backup simfs rw,relatime 0 0
proc /proc proc rw,relatime 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
none /dev tmpfs rw,relatime,size=7992992k,nr_inodes=1998248 0 0
none /dev/pts devpts rw,relatime,mode=600,ptmxmode=000 0 0
root@vps2202 [~]# ll /dev/ploop50951p1
/bin/ls: /dev/ploop50951p1: No such file or directory

There are quite a few /dev/ploop* devices under /dev, but not the
one linked in /proc/mounts.

This goes for all containers on this hardware node.  Other nodes
not yet upgraded to the latest kernel do not have this problem.

Any ideas?





___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
Kir Kolyshkin <k...@openvz.org>
Friday, December 26, 2014 6:31 PM



No, the script (and its appropriate symlinks) is (re)created on
every start (actually mount) of a simfs-based container. It is the
conversion process that should get rid of it; unfortunately
vzctl doesn't do it currently, so it has to be done manually. Feel
free to file a bug against vzctl.

Kir.

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users
Scott Dowdle
Friday, December 26, 2014 12:46 PM
Greetings,

- Original Message -

What I understood Kir to say was that the script was created as
part of the conversion process and should have been automatically
removed (like it was automatically created) after the conversion
was complete. Why it wasn't removed I don't know... but you can
back up the file somewhere else... and remove it... and if you
have problems... you could copy it back. I don't think any of
that would be necessary but who knows.

TYL,


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ext4 extents problem with ploop

2015-01-19 Thread Kir Kolyshkin


On 01/19/2015 04:38 AM, Daniel Thielking wrote:

Hi,

I have a problem with ploop devices.
If i try to mount a ploop I get following error message:

Error in check_ext4_mount_restrictions (ploop.c:1714): The ploop
image can not be used on ext3 or ext4 file system without extents


My mount command is as follows:

ploop mount -m /vz/ploop ./root.hdd/DiskDescriptor.xml



Is this ./root.hdd/DiskDescriptor.xml (and all the delta files it 
refers to)
also on /vz? It's not clear from the command you provided.

Also, mount point doesn't matter -- it's only the location of delta 
files that does.

All deltas should be on ext4.
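
If in doubt, a quick way to check is to look at the partition that holds the
delta files (the device below is whatever df reports; /dev/sdXY is just a
placeholder):

df -T /vz                                  # file system type of the partition with the deltas
tune2fs -l /dev/sdXY | grep -i features    # 'extent' should appear in the feature list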



I am using CentOS 6.6 and my vz partition is formatted as ext4.
Why do I get this error message?

Thank You Guys
Daniel
--
_

Auszubildender Fachinformatiker für Systemintegration
RWTH Aachen
Lehrstuhl für Integrierte Analogschaltungen
Raum 24C 313
Walter-Schottky-Haus
Sommerfeldstr. 24
D-52074 Aachen

www.ias.rwth-aachen.de

Email:daniel.thielk...@ias.rwth-aachen.de
Phone: +49-(0)241-80-27771
   FAX: +49-(0)241-80-627771
_


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] ksplice support

2015-01-11 Thread Kir Kolyshkin


On 01/11/2013 11:05 PM, Rene C. wrote:
Does the current 2.6.32-042stab065.3 kernel support ksplice, or are 
there any plans for this?




I am not sure if there are any ksplice guys on this list. Probably not, 
so ask KSplice directly.


Kir
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] [Announce] Kernel RHEL6 042stab102.9 (moved to stable)

2015-01-07 Thread Kir Kolyshkin

Hi,

On 01/07/2015 09:22 AM, linuxthefish wrote:

Hello,

Updating to this version does not work, I'm stuck on
vzkernel-2.6.32-042stab094.7.x86_64. Is this version stable to use?

[root@ssdla1 ~]# yum install vzkernel
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
  * base: mirror.san.fastserv.com
  * extras: repos.lax.quadranet.com
  * openvz-kernel-rhel6: mirror.supremebytes.com
  * openvz-utils: mirror.supremebytes.com
  * updates: mirrors.easynews.com
Package vzkernel-2.6.32-042stab094.7.x86_64 already installed and latest version
Nothing to do

Thanks,
Edmund


This is a problem with the stale (not being updated) mirror 
(mirror.supremebytes.com).
It has been removed from the mirror list so you can retry with something 
like


yum clean all
yum install vzkernel

PS: please don't send questions to private email addresses -- we have the 
users@ list for that.


On 24 December 2014 at 03:15, Kir Kolyshkin  wrote:

OpenVZ project released an updated RHEL6 based kernel.
Read below for more information. Everyone is advised to update.


Changes and Download

(since 042stab094.8)
* Rebase to RHEL6.6 kernel
* Fixes and improvements

https://openvz.org/Download/kernel/rhel6/042stab102.9


Bug reporting
=
Use http://bugzilla.openvz.org/  to report any bugs found.


Other sources of info on updates

See http://wiki.openvz.org/News  to view all the news (including updates)
online. There you can also find RSS/Atom feed links.


Regards,
   OpenVZ team
___
Announce mailing list
annou...@openvz.org
https://lists.openvz.org/mailman/listinfo/announce


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] The future of OpenVZ: Virtuozzo Core

2014-12-27 Thread Kir Kolyshkin


On 12/27/2014 03:12 AM, Benjamin Henrion wrote:

On Sat, Dec 27, 2014 at 11:34 AM, Pavel Odintsov
 wrote:

Awesome Is it possible to join to development committee?

Any chance to see openvz against mainline one day?


Maybe it is rarely visible, but we have been working hard on it for the last 
9 years.
I think it is correct to say Parallels/OpenVZ is the single biggest 
contributor to the
Linux Containers kernel code. Yes, we contributed more than Google, IBM 
or other
big guys. In terms of patches, it is well over 1500. The biggest 
features are:

 - net and PID namespaces,
 - CRIU kernel support (containers checkpoint/restore and more)
 - kernel memory accounting (still in progress).
 - (I guess I am missing something else)

We are working on the rest of it (and we hope the process will accelerate
with vzcore).

As to why such a contribution is not very visible, my guess is that people are 
not using the kernel directly, but relying on userspace tools such as lxc, 
vzctl and docker. So those tools are the only obvious component, with the 
Linux kernel doing its work in the background without much fanfare.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


[Users] The future of OpenVZ: Virtuozzo Core

2014-12-26 Thread Kir Kolyshkin

Please read this very important announcement:

http://openvz.livejournal.com/49158.html

Happy New Year,
  OpenVZ team.
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] simfs with ploop?

2014-12-26 Thread Kir Kolyshkin


On 12/26/2014 09:46 AM, Scott Dowdle wrote:

Greetings,

- Original Message -

I'm still waiting to hear what is the PROPER way of discarding this
script. Just deleting the base file will cause a large number of
symlinks to become orphans.

What I understood Kir to say was that the script was created as part of the 
conversion process and should have been automatically removed (like it was 
automatically created) after the conversion was complete.  Why it wasn't removed 
I don't know... but you can back up the file somewhere else... and remove it... 
and if you have problems... you could copy it back.  I don't think any of that 
would be necessary but who knows.


No, the script (and its appropriate symlinks) is (re)created on every 
start (actually mount)
of a simfs-based container. It is the conversion process that should get 
rid of it; unfortunately vzctl doesn't do that currently, so it has to be 
done manually. Feel free to file a bug for vzctl.


Kir.

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] simfs with ploop?

2014-12-04 Thread Kir Kolyshkin

I think the script was left over after the conversion. It is no longer needed;
you can remove it (and as far as I can tell it won't be recreated
by vzctl if ploop is used).
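
Something along these lines inside the container should do it (assuming a
chkconfig-based distro, which is what the script's header suggests):

chkconfig --del vzquota          # drops the rc.d symlinks
rm -f /etc/rc.d/init.d/vzquota   # removes the script itself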

On 12/04/2014 02:22 PM, Rene C. wrote:


It seems /etc/rc.d/init.d/vzquota is somehow responsible for this problem:

# cat -n /etc/rc.d/init.d/vzquota
     1  #!/bin/sh
     2  # chkconfig: 2345 10 90
     3  # description: prepare container to use OpenVZ 2nd-level disk quotas
     4
     5  ### BEGIN INIT INFO
     6  # Provides: vzquota
     7  # Required-Start: $local_fs $time $syslog
     8  # Required-Stop: $local_fs
     9  # Default-Start: 2 3 4 5
    10  # Default-Stop: 0 1 6
    11  # Short-Description: Start vzquota at the end of boot
    12  # Description: Configure OpenVZ disk quota for a container.
    13  ### END INIT INFO
    14
    15  start() {
    16          if [ ! -L /etc/mtab ]; then
    17                  rm -f /etc/mtab >/dev/null 2>&1
    18                  ln -sf /proc/mounts /etc/mtab
    19          fi
    20          dev=$(awk '($2 == "/") && ($4 ~ /usrquota/) && ($4 ~ /grpquota/) {print $1}' /etc/mtab)
    21          if test -z "$dev"; then
    22                  dev="/dev/simfs"
    23                  rm -f /etc/mtab >/dev/null 2>&1
    24                  echo "/dev/simfs / reiserfs rw,usrquota,grpquota 0 0" > /etc/mtab
    25                  grep -v " / " /proc/mounts >> /etc/mtab 2>/dev/null
    26                  chmod 644 /etc/mtab
    27          fi
    28          [ -e "$dev" ] || mknod $dev b 0 60
    29          quotaon -aug
    30  }
    31
    32  case "$1" in
    33    start)
    34          start
    35          ;;
    36    *)
    37          exit
    38  esac


So it seems that in lines 16-19 the /etc/mtab -> /proc/mounts symlink is 
first created correctly, but then line 20 checks the '/' entry for the 
keywords usrquota and grpquota, and if they're missing the script 
overwrites the symlink again (lines 21-27). AFAICS this is what is breaking 
the symlink.


What's the problem with this? Should this script be changed for ploop 
containers?



On Sun, Nov 30, 2014 at 12:07 PM, Rene C. wrote:


I've just noticed this happen also in a Plesk container, so it's
not just limited to Cpanel.  The date seems to correspond to a
restore that was made so it may be related to the
vzpbackup/vzprestore scripts.


On Sat, Jan 4, 2014 at 8:34 AM, Rene C. wrote:

Oddly enough this problem has returned - it now shows simfs again.
Nobody has explicitly changed anything. The container runs Cpanel,
could that have messed things up? It actually looks like cpanel
overrides the symlink when they add their jailshell. Have you
noticed
this before?

/proc/mounts looks correct.



On Wed, Dec 18, 2013 at 10:25 AM, Rene C. wrote:
> Thanks, now I understand.
>
> So after deleting /etc/mtab I need to make as symlink from
> /proc/mounts (ln -s /proc/mounts /etc/mtab). That was the
bit I was
> missing.
>
>
> On Wed, Dec 18, 2013 at 1:36 AM, Kir Kolyshkin wrote:
>> On 12/17/2013 12:27 AM, Rene C. wrote:
>>>
>>> Sure 
>>>
>>> # ls -l /vza1/private/1703
>>> total 16
>>> drwxr-xr-x 2 root root 4096 Dec 15 04:10 dump
>>> drwx-- 3 root root 4096 Dec 15 04:10 root.hdd
>>> -rw-r--r-- 1 root root  443 Dec 15 04:09 Snapshots.xml
>>> -rw-r--r-- 1 root root   37 Dec 15 03:06 vzpbackup_snapshot
>>> # vzlist -o layout 1703
>>> LAYOUT
>>> ploop
>>> #
>>>
>>> Surely deleting the mount table isn't healthy, isn't it
needed by
>>> something?
>>
>>
>> This is how it should look like:
>>
>> [host] # vzctl enter 1018
>> entered into CT 1018
>> [CT1018] # ls -l /etc/mtab
>> lrwxrwxrwx 1 root root 12 Nov  7 21:20 /etc/mtab ->
/proc/mounts
>>
>> And /proc/mounts is showing info from the kernel which
can't be wrong:
>>
>> [CT1018]# cat /proc/mounts
>> /dev/ploop51540p1 / ext4
rw,relatime,barrier=1,data=ordered,balloon_ino=12 0
>> 0
>> proc /proc proc rw,relatime 0 0
>> sysfs /sys sysfs rw,relatime 0 0
>> non

Re: [Users] Live Migration Optimal execution

2014-11-28 Thread Kir Kolyshkin


On 11/28/2014 12:47 AM, Pavel Odintsov wrote:

Nice suggestion! Old-fashioned UBC is a real nightmare and was deprecated.


In fact they are not deprecated, but rather optional. The beauty of it
is that you can still limit, say, network buffers or the number of processes
(or have an out-of-memory-killer guarantee) -- but you don't
HAVE to, as the default setup (only setting ram/swap) should be
secure enough.

Kir.



On Fri, Nov 28, 2014 at 10:35 AM, CoolCold  wrote:

Hello!
I'd recommend setting only ram/swap limits (via --ram/--swap), letting the other
settings be mostly unlimited (as long as the ram/swap limits are not exceeded, of
course) - http://wiki.openvz.org/VSwap
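
In vzctl terms that is simply something like (CTID and sizes are just an example):

vzctl set 101 --ram 2G --swap 1G --save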

On Fri, Nov 28, 2014 at 3:31 AM, Nipun Arora 
wrote:

Nevermind, I figured it out by changing the fail counters in
/proc/user_beans

Thanks
Nipun

On Thu, Nov 27, 2014 at 7:14 PM, Nipun Arora 
wrote:

Thanks, the speed is improved by an order of magnitude :)

btw. is there any benchmark, that you all have looked into for testing
how good/practical live migration is for real-world systems?
Additionally, I'm trying to run a java application(dacapo benchmark), but
keep having trouble in getting java to run..

java -version

Error occurred during initialization of VM

Could not reserve enough space for object heap

Could not create the Java virtual machine.


I've put my vz conf file below, can anyone suggest what could be the
problem?

Thanks
Nipun

# UBC parameters (in form of barrier:limit)

KMEMSIZE="14372700:14790164"

LOCKEDPAGES="2048:2048"

PRIVVMPAGES="65536:69632"

SHMPAGES="21504:21504"

NUMPROC="240:240"

PHYSPAGES="0:131072"

VMGUARPAGES="33792:unlimited"

OOMGUARPAGES="26112:unlimited"

NUMTCPSOCK="360:360"

NUMFLOCK="188:206"

NUMPTY="16:16"

NUMSIGINFO="256:256"

TCPSNDBUF="1720320:2703360"

TCPRCVBUF="1720320:2703360"

OTHERSOCKBUF="1126080:2097152"

DGRAMRCVBUF="262144:262144"

NUMOTHERSOCK="1200"

DCACHESIZE="3409920:3624960"

NUMFILE="9312:9312"

AVNUMPROC="180:180"

NUMIPTENT="128:128"


# Disk quota parameters (in form of softlimit:hardlimit)

DISKSPACE="3145728:3145728"

DISKINODES="131072:144179"

QUOTATIME="0"


# CPU fair scheduler parameter

CPUUNITS="1000"


NETFILTER="stateless"

VE_ROOT="/vz/root/101"

VE_PRIVATE="/vz/private/101"

OSTEMPLATE="centos-6-x86_64"

ORIGIN_SAMPLE="basic"

HOSTNAME="test"

IP_ADDRESS="192.168.1.101"

NAMESERVER="8.8.8.8 8.8.4.4"

CPULIMIT="25"

SWAPPAGES="0:262144"



On Mon, Nov 24, 2014 at 12:16 PM, Kir Kolyshkin  wrote:


On 11/23/2014 07:13 PM, Nipun Arora wrote:

Thanks, I will try your suggestions, and get back to you.
btw... any idea what could be used to share the base image on both
containers?
Like hardlink it in what way? Once both containers start, won't they
have to write to different locations?


ploop is composed as a set of stacked images, with all of them but the
top one being read-only.


I understand that some file systems have a copy on write mechanism,
where after a snapshot all future writes are written to a additional linked
disks.
Does ploop operate in a similar way?


yes


http://wiki.qemu.org/Features/Snapshots


http://openvz.livejournal.com/44508.html



The cloning with a modified vzmigrate script helps.

- Nipun

On Sun, Nov 23, 2014 at 5:29 PM, Kir Kolyshkin  wrote:


On 11/23/2014 04:59 AM, Nipun Arora wrote:

Hi Kir,

Thanks for the response, I'll update it, and tell you about the
results.

1. A follow up question... I found that the write I/O speed of
500-1Mbps increased the suspend time  to several minutes.(mostly pcopy
stage)
This seems extremely high for a relatively low I/O workload, which is
why I was wondering if there are any special things I need to take care of.
(I ran fio (flexible i/o writer) with fixed throughput while doing live
migration)


Please retry with vzctl 4.8 and ploop 1.12.1 (make sure they are on
both sides).
There was a 5 second wait for the remote side to finish syncing
copied ploop data. It helped a case with not much I/O activity in
container, but
ruined the case you are talking about.

Newer ploop and vzctl implement a feedback channel for ploop copy that
eliminates
that wait time.

http://git.openvz.org/?p=ploop;a=commit;h=20d754c91079165b
http://git.openvz.org/?p=vzctl;a=commit;h=374b759dec45255d4

There are some other major improvements as well, such as async send for
ploop.

http://git.openvz.org/?p=ploop;a=commit;h=a55e26e9606e0b


2. For my purposes, I have modified the live migration script to allow
me to do cloning... i.e. I start both the containers instead of deleting the
original. I need to do this "cloning" from time to time for the same target

Re: [Users] Live Migration Optimal execution

2014-11-28 Thread Kir Kolyshkin


On 11/27/2014 04:14 PM, Nipun Arora wrote:

Thanks, the speed is improved by an order of magnitude :)

btw. is there any benchmark, that you all have looked into for testing 
how good/practical live migration is for real-world systems?
Additionally, I'm trying to run a java application(dacapo benchmark), 
but keep having trouble in getting java to run..


java -version

Error occurred during initialization of VM

Could not reserve enough space for object heap

Could not create the Java virtual machine.


I've put my vz conf file below, can anyone suggest what could be the 
problem?


Your config is not fully converted to VSwap. You need to remove all 
beancounters except ram&swap (PHYSPAGES and SWAPPAGES).
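
In other words, after stripping the UBC lines, the beancounter part of the config
keeps only something like this (values taken from your config below):

PHYSPAGES="0:131072"
SWAPPAGES="0:262144"

(the disk, CPU and network settings stay as they are).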



Thanks
Nipun

# UBC parameters (in form of barrier:limit)

KMEMSIZE="14372700:14790164"

LOCKEDPAGES="2048:2048"

PRIVVMPAGES="65536:69632"

SHMPAGES="21504:21504"

NUMPROC="240:240"

PHYSPAGES="0:131072"

VMGUARPAGES="33792:unlimited"

OOMGUARPAGES="26112:unlimited"

NUMTCPSOCK="360:360"

NUMFLOCK="188:206"

NUMPTY="16:16"

NUMSIGINFO="256:256"

TCPSNDBUF="1720320:2703360"

TCPRCVBUF="1720320:2703360"

OTHERSOCKBUF="1126080:2097152"

DGRAMRCVBUF="262144:262144"

NUMOTHERSOCK="1200"

DCACHESIZE="3409920:3624960"

NUMFILE="9312:9312"

AVNUMPROC="180:180"

NUMIPTENT="128:128"


# Disk quota parameters (in form of softlimit:hardlimit)

DISKSPACE="3145728:3145728"

DISKINODES="131072:144179"

QUOTATIME="0"


# CPU fair scheduler parameter

CPUUNITS="1000"


NETFILTER="stateless"

VE_ROOT="/vz/root/101"

VE_PRIVATE="/vz/private/101"

OSTEMPLATE="centos-6-x86_64"

ORIGIN_SAMPLE="basic"

HOSTNAME="test"

IP_ADDRESS="192.168.1.101"

NAMESERVER="8.8.8.8 8.8.4.4"

CPULIMIT="25"

SWAPPAGES="0:262144"



On Mon, Nov 24, 2014 at 12:16 PM, Kir Kolyshkin wrote:



On 11/23/2014 07:13 PM, Nipun Arora wrote:

Thanks, I will try your suggestions, and get back to you.
btw... any idea what could be used to share the base image on
both containers?
Like hardlink it in what way? Once both containers start, won't
they have to write to different locations?


ploop is composed as a set of stacked images, with all of them but
the top one being read-only.



I understand that some file systems have a copy on write
mechanism, where after a snapshot all future writes are written
to a additional linked disks.
Does ploop operate in a similar way?


yes



http://wiki.qemu.org/Features/Snapshots


http://openvz.livejournal.com/44508.html




The cloning with a modified vzmigrate script helps.

- Nipun

On Sun, Nov 23, 2014 at 5:29 PM, Kir Kolyshkin wrote:


On 11/23/2014 04:59 AM, Nipun Arora wrote:

Hi Kir,

Thanks for the response, I'll update it, and tell you about
the results.

1. A follow up question... I found that the write I/O speed
of 500-1Mbps increased the suspend time  to several
minutes.(mostly pcopy stage)
This seems extremely high for a relatively low I/O workload,
which is why I was wondering if there are any special things
I need to take care of.
(I ran fio (flexible i/o writer) with fixed throughput while
doing live migration)


Please retry with vzctl 4.8 and ploop 1.12.1 (make sure they
are on both sides).
There was a 5 second wait for the remote side to finish syncing
copied ploop data. It helped a case with not much I/O
activity in container, but
ruined the case you are talking about.

Newer ploop and vzctl implement a feedback channel for ploop
copy that eliminates
that wait time.

http://git.openvz.org/?p=ploop;a=commit;h=20d754c91079165b
http://git.openvz.org/?p=vzctl;a=commit;h=374b759dec45255d4

There are some other major improvements as well, such as
async send for ploop.

http://git.openvz.org/?p=ploop;a=commit;h=a55e26e9606e0b


2. For my purposes, I have modified the live migration
script to allow me to do cloning... i.e. I start both the
containers instead of deleting the original. I need to do
this "cloning" from time to time for the same target
container...

 a. Which means that lets say we cloned container C1 to
container C2, and let both execute at time t0, this works
with no apparent loss of service.
  b. Now at time t1 I would like to again clone C1 to
C2, and would like to optimize t

Re: [Users] Live Migration Optimal execution

2014-11-24 Thread Kir Kolyshkin


On 11/23/2014 07:13 PM, Nipun Arora wrote:

Thanks, I will try your suggestions, and get back to you.
btw... any idea what could be used to share the base image on both 
containers?
Like hardlink it in what way? Once both containers start, won't they 
have to write to different locations?


ploop is composed as a set of stacked images, with all of them but the 
top one being read-only.




I understand that some file systems have a copy on write mechanism, 
where after a snapshot all future writes are written to a additional 
linked disks.

Does ploop operate in a similar way?


yes



http://wiki.qemu.org/Features/Snapshots


http://openvz.livejournal.com/44508.html



The cloning with a modified vzmigrate script helps.

- Nipun

On Sun, Nov 23, 2014 at 5:29 PM, Kir Kolyshkin wrote:



On 11/23/2014 04:59 AM, Nipun Arora wrote:

Hi Kir,

Thanks for the response, I'll update it, and tell you about the
results.

1. A follow up question... I found that the write I/O speed of
500-1Mbps increased the suspend time  to several minutes.(mostly
pcopy stage)
This seems extremely high for a relatively low I/O workload,
which is why I was wondering if there are any special things I
need to take care of.
(I ran fio (flexible i/o writer) with fixed throughput while
doing live migration)


Please retry with vzctl 4.8 and ploop 1.12.1 (make sure they are
on both sides).
There was a 5 second wait for the remote side to finish syncing
copied ploop data. It helped a case with not much I/O activity in
container, but
ruined the case you are talking about.

Newer ploop and vzctl implement a feedback channel for ploop copy
that eliminates
that wait time.

http://git.openvz.org/?p=ploop;a=commit;h=20d754c91079165b
http://git.openvz.org/?p=vzctl;a=commit;h=374b759dec45255d4

There are some other major improvements as well, such as async
send for ploop.

http://git.openvz.org/?p=ploop;a=commit;h=a55e26e9606e0b


2. For my purposes, I have modified the live migration script to
allow me to do cloning... i.e. I start both the containers
instead of deleting the original. I need to do this "cloning"
from time to time for the same target container...

 a. Which means that lets say we cloned container C1 to
container C2, and let both execute at time t0, this works with no
apparent loss of service.
  b. Now at time t1 I would like to again clone C1 to C2, and
would like to optimize the rsync process as most of the ploop
file for C1 and C2 should still be the same (i.e. less time to
sync). Can anyone suggest what would be the best way to realize
the second point?


You can create a ploop snapshot and use shared base image for both
containers
(instead of copying the base delta, hardlink it). This is not
supported by tools
(for example, since base delta is now shared you can't merge down
to it, but the
tools are not aware) so you need to figure it out by yourself and
be accurate
but it should work.





Thanks
Nipun

On Sun, Nov 23, 2014 at 12:56 AM, Kir Kolyshkin wrote:


On 11/22/2014 09:09 AM, Nipun Arora wrote:

Hi All,

I was wondering if anyone can suggest what is the most
optimal way to do the following

1. Can anyone clarify if ploop is the best layout for
minimum suspend time during live migration?


Yes (due to ploop copy which only copies the modified blocks).



2. I tried migrating a ploop device where I increased the
--diskspace to 5G,
and found that the suspend time taken by live migration
increased to 57 seconds
(mainly undump and restore increased)...
whereas a 2G diskspace was taking 2-3 seconds suspend
time... Is this expected?



No. Undump and restore times depend mostly on the amount of RAM
used by a container.

Having said that, live migration stages influence each other,
although it's less so
in the latest vzctl release (I won't go into details here if
you allow me -- just make sure
you test with vzctl 4.8).



3. I tried running a write intensive workload, and found
that beyond 100-150Kbps,
the suspend time during live migration rapidly increased? Is
this an expected trend?


Sure. With increased writing speed, the amount of data that
needs to be copied after CT
is suspended increases.



I am using vzctl 4.7, and ploop 1.11 in centos 6.5


You need to update vzctl and ploop and rerun your tests,
there should be
some improvement (in particular with respect to issue #3).



Thanks
Nipun


___
 

Re: [Users] Live Migration Optimal execution

2014-11-23 Thread Kir Kolyshkin


On 11/23/2014 04:59 AM, Nipun Arora wrote:

Hi Kir,

Thanks for the response, I'll update it, and tell you about the results.

1. A follow up question... I found that the write I/O speed of 
500-1Mbps increased the suspend time  to several minutes.(mostly pcopy 
stage)
This seems extremely high for a relatively low I/O workload, which is 
why I was wondering if there are any special things I need to take 
care of.
(I ran fio (flexible i/o writer) with fixed throughput while doing 
live migration)


Please retry with vzctl 4.8 and ploop 1.12.1 (make sure they are on both 
sides).

There was a 5 second wait for the remote side to finish syncing
copied ploop data. It helped a case with not much I/O activity in 
container, but

ruined the case you are talking about.

Newer ploop and vzctl implement a feedback channel for ploop copy that 
eliminates

that wait time.

http://git.openvz.org/?p=ploop;a=commit;h=20d754c91079165b
http://git.openvz.org/?p=vzctl;a=commit;h=374b759dec45255d4

There are some other major improvements as well, such as async send for 
ploop.


http://git.openvz.org/?p=ploop;a=commit;h=a55e26e9606e0b


2. For my purposes, I have modified the live migration script to allow 
me to do cloning... i.e. I start both the containers instead of 
deleting the original. I need to do this "cloning" from time to time 
for the same target container...


 a. Which means that lets say we cloned container C1 to container C2, 
and let both execute at time t0, this works with no apparent loss of 
service.
b. Now at time t1 I would like to again clone C1 to C2, and would like 
to optimize the rsync process as most of the ploop file for C1 and C2 
should still be the same (i.e. less time to sync). Can anyone suggest 
what would be the best way to realize the second point?


You can create a ploop snapshot and use shared base image for both 
containers
(instead of copying the base delta, hardlink it). This is not supported 
by tools
(for example, since base delta is now shared you can't merge down to it, 
but the
tools are not aware) so you need to figure it out by yourself and be 
accurate

but it should work.
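
A very rough sketch of that idea, with hypothetical CTIDs 101 (source) and 102
(clone) and the default ploop layout -- again, the tools are not aware of the
shared delta, so treat this as experimental:

vzctl snapshot 101     # the current top delta becomes read-only, a new one is created
# prepare CT 102's private area as usual, but instead of copying the read-only base delta:
rm /vz/private/102/root.hdd/root.hdd
ln /vz/private/101/root.hdd/root.hdd /vz/private/102/root.hdd/root.hdd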




Thanks
Nipun

On Sun, Nov 23, 2014 at 12:56 AM, Kir Kolyshkin wrote:



On 11/22/2014 09:09 AM, Nipun Arora wrote:

Hi All,

I was wondering if anyone can suggest what is the most optimal
way to do the following

1. Can anyone clarify if ploop is the best layout for minimum
suspend time during live migration?


Yes (due to ploop copy which only copies the modified blocks).



2. I tried migrating a ploop device where I increased the
--diskspace to 5G,
and found that the suspend time taken by live migration increased
to 57 seconds
(mainly undump and restore increased)...
whereas a 2G diskspace was taking 2-3 seconds suspend time... Is
this expected?



No. Undump and restore times depend mostly on the amount of RAM used
by a container.

Having said that, live migration stages influence each other,
although it's less so
in the latest vzctl release (I won't go into details here if you
allow me -- just make sure
you test with vzctl 4.8).



3. I tried running a write intensive workload, and found that
beyond 100-150Kbps,
the suspend time during live migration rapidly increased? Is this
an expected trend?


Sure. With increased writing speed, the amount of data that needs
to be copied after CT
is suspended increases.



I am using vzctl 4.7, and ploop 1.11 in centos 6.5


You need to update vzctl and ploop and rerun your tests, there
should be
some improvement (in particular with respect to issue #3).



Thanks
Nipun


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Live Migration Optimal execution

2014-11-22 Thread Kir Kolyshkin


On 11/22/2014 09:09 AM, Nipun Arora wrote:

Hi All,

I was wondering if anyone can suggest what is the most optimal way to 
do the following


1. Can anyone clarify if ploop is the best layout for minimum suspend 
time during live migration?


Yes (due to ploop copy which only copies the modified blocks).



2. I tried migrating a ploop device where I increased the --diskspace 
to 5G,
and found that the suspend time taken by live migration increased to 
57 seconds

(mainly undump and restore increased)...
whereas a 2G diskspace was taking 2-3 seconds suspend time... Is this 
expected?




No. Undump and restore times depend mostly on the amount of RAM used by a 
container.


Having said that, live migration stages influence each other, although 
it's less so
in the latest vzctl release (I won't go into details here if you allow 
me -- just make sure

you test with vzctl 4.8).


3. I tried running a write intensive workload, and found that beyond 
100-150Kbps,
the suspend time during live migration rapidly increased? Is this an 
expected trend?


Sure. With increased writing speed, the amount of data that needs to be 
copied after CT

is suspended increases.



I am using vzctl 4.7, and ploop 1.11 in centos 6.5


You need to update vzctl and ploop and rerun your tests, there should be
some improvement (in particular with respect to issue #3).



Thanks
Nipun


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] Shortest guide about running OpenVZ containers on top of ZFS

2014-11-13 Thread Kir Kolyshkin


On 11/13/2014 12:52 PM, Pavel Odintsov wrote:

Hello!

Pavel! Awesome!

Please add one killer feature about ZFS - complete support for SSD with
TRIM and not-killing-this-sector-by-thousands-writes :)


Hmm, don't all SSD drives have built-in wear leveling?



On Thu, Nov 13, 2014 at 11:17 PM, Pavel Snajdr  wrote:

Well, innovation isn't about matching features that someone else has
just for the sake of having them too, is it? :)

What a lot of people are missing about ZFS is that it is a self-contained
project trying to solve real storage problems. It's not trying to be a
filesystem. It is a complete storage solution. It's like you'd take a
NetApp and plugged it into your kernel directly.

Example features of ZFS that BTRFS will never have due to this fact:

- ARC - a smart caching mechanism, with which we're able to deliver
almost 100% hitrate almost 100% of the time - something that isn't even
remotely imaginable with linux LRU dcache

- L2ARC - second level ARC enabling you to cache less used ARC entries
onto a SSD

- dedicated device for ZIL - synchronous writes are a real pain with any
CoW filesystem, ZFS solves this so elegantly that it even beats Ext4
there - you can have your sync writes sent to a fast SSD or a NVRAM
device. Thanks to this I don't even notice heavy MySQL instances on our
machines, until we run out of CPU power. The disks aren't the
show-stopper anymore

- NFS, SMB, iSCSI integration - people judging ZFS from the traditional
linux kernel perspective don't get why should it be a filesystem's job
to do these - like I said, ZFS is not trying to be a filesystem. It's
trying to be the last storage solution you'll ever need on your server.

Etc. I could go on about this a while :)

Regarding the licensing issues. Well. ZFS isn't trying to be the best
filesystem for Linux. As far as I can remember, nobody from the ZFS
world has ever had any ambitions to get it into Linux. Even if the
license would allow merging ZFS in the kernel, the reality of ZFS design
as a self-contained solution wouldn't probably be accepted as well in
the Linux community as it was in FreeBSD. They're two completely
different cultures.

As I've already mentioned FreeBSD, here comes the next advantage ZFS has
over anything that Linux provides - BTRFS included: you can take your
pool today and open it on another platform, be it Linux, FreeBSD or
Illumos. Work is being done to ensure these implementations don't drift
apart and stay compatible - it is known as the OpenZFS project.

Regarding the patent issues, the same essentially goes for BTRFS, the
main concern wouldn't be Oracle here, it would be NetApp. They hold
enough patents to kick anyone around, doesn't matter if it's OpenZFS or
BTRFS. Only with Oracle they seem to be okay, because of Sun's previous
dealings with NetApp in this regard.

However, these patent concerns don't seem to hold any ground, as you
can see Nexenta and Joyent both making reasonable enough success to
tickle NetApp's nerves (especially Nexenta since they're a direct
competitor of NetApp in some market segments), yet nobody is suing them.
This is even less of an issue for me as everything I operate is in
Europe, where software patents don't apply.

Now it looks like I'm actually the one who's on holy crusade for ZFS here :)

But it all started off with me being really, really disappointed with
Ext4 performance. With ZFS I can get 80-120 heavy LAMP stack containers
on a single node, whereas the disks would be long dead with Ext4 before
I would even reach a half of that.

I've played around with having separate ext4 partitions for every CT (it
was before ploop), I've tried Facebook Flashcache to catch the sync
writes and offload them onto a SSD, I think I have tried everything I
could and not much has really changed since then on the Linux storage
scene (2012).

There's BTRFS, which to a layman looks like it's almost in feature
parity with ZFS, but if you actually try to use it, then "WAT" comes out
of your mouth like every second word. It still needs a lot of work.

Whereas ZFS has been in production for more than 8 years now. And it
still is in active development with lots of companies and individuals
trying to make it better - they have much better starting point as far
as the innovations go, because they act on their own playground and they
don't have to deal with politics of such a huge project like Linux
kernel is.

/snajpa

On 11/13/2014 06:24 PM, Scott Dowdle wrote:

Not really.  That isn't to say that btrfs is done or that all of its features, 
especially those added much later in the development cycle, are stable.  So, I 
don't contend that btrfs is a suitable contender to zfs at the moment but it 
does have a few benefits that will eventually put it over the top... past zfs 
on Linux.  What are they?

1) It's in the mainline kernel and will be available in all distros with 
sufficiently new enough kernels.

That's it.  That's all it needs.  Being part of mainline Linux means that i

Re: [Users] convert to ploop

2014-11-10 Thread Kir Kolyshkin


On 11/10/2014 05:51 PM, Nick Knutov wrote:

Thanks, this is working.

For wiki - I did

find /etc/vz/conf/*.conf -type f -exec sed -i.bak
"s/DISKINODES/#DISKINODES/g" {} \;

VE=123 ; vzctl stop $VE ; vzctl convert $VE ; vzctl start $VE


By the way I just found it's enough to set DISKINODES to 0
(after stopping a container of course).
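
So the per-container recipe above becomes roughly (CTID and config path are the
usual defaults):

vzctl stop 123
sed -i 's/^DISKINODES=.*/DISKINODES="0"/' /etc/vz/conf/123.conf
vzctl convert 123
vzctl start 123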



Unfortunately, I do not understand how to work with/edit MediaWiki and
I'm not sure on what page it's better to add this.


I have modified this:

https://openvz.org/Ploop/Getting_started#Creating_a_new_CT,

and also wrote a whole new

https://openvz.org/Ploop/diskinodes

Hope it's enough information now.



25.10.2014 7:32, Kirill Kolyshkin wrote:

On Oct 24, 2014 5:33 PM, "Devon B." wrote:

I think what Kir was getting at was to set the diskinodes equal to 65536

x GiB before converting.  So for 40GiB, set diskinodes to 2621440

Either that, or just remove the DISKINODES from CT config


On 10/24/2014 8:05 PM, Nick Knutov wrote:

Thanks, now I understand why this occurred, but what is the easiest way
to convert a lot of different CTs to ploop? As I remember there is no
way to set up unlimited diskinodes or disable them (in case I want to
use CT size when converting to ploop and don't want to think about
inodes at all).


25.10.2014 5:31, Kir Kolyshkin wrote:

[...]
Previously, we didn't support setting diskinodes for ploop, but later we
found
a way to implement it (NOTE: for vzctl create and vzctl convert only).
The trick we use is that we create a file system big enough to accommodate the
requested number of inodes, and then use ploop resize (in this case
downsize)
to bring it down to requested amount.

In this case, 1G inodes requirements leads to creation of 16TB

filesystem

(remember, 1 inode per 16K). Unfortunately, such huge FS can't be

downsized

to as low as 40G, the minimum seems to be around 240G (values printed in
the error message are in sectors which are 512 bytes each).

Solution: please be reasonable when requesting diskinodes for ploop.



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users



___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] can't change ploop filesystem size

2014-10-28 Thread Kir Kolyshkin


On 10/28/2014 09:55 AM, Roman Haefeli wrote:

Hi all

I tried to increase ploop filesystem size of a container and it failed:

$ vzctl set db2new --diskspace 90G --save
Error in get_balloon (balloon.c:111): Can't ioctl mount
point /iscsi/root/991: No such file or directory
Failed to resize image: Error in get_balloon (balloon.c:111): Can't
ioctl mount point /iscsi/root/991: No such file or directory [3]
Error: failed to apply some parameters, not saving configuration file!

It actually works for most of our containers, but not this one. Special
about this one is that I copied its root directory from another hostnode
manually (with rsync, not with vzmigrate). Maybe I forgot something in
the configuration, but I can't figure out what.

vzctl 4.7.2
ploop 1.12
kernel 2.6.32-openvz-042stab093.5-amd64


Basically what it says is ploop is mounted, but its mount point
(/iscsi/root/991) is inaccessible. Maybe you just need to create it?
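
Something like this should be enough to try (the path is the one from the error
message):

mkdir -p /iscsi/root/991
vzctl set db2new --diskspace 90G --save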

___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] convert to ploop

2014-10-24 Thread Kir Kolyshkin


On 10/24/2014 02:50 PM, Nick Knutov wrote:

Hello all,

trying to convert CT from simfs to ploop. vzctl convert ended with error

Error in ploop_resize_image (ploop.c:2477): Unable to change image size
to 83877888 sectors, minimal size is 502423144
Unmounting file system at /vz/private/101.ploop/root.hdd/root.hdd.mnt
Unmounting device /dev/ploop37776
Failed to resize image: Error in ploop_resize_image (ploop.c:2477):
Unable to change image size to 83877888 sectors, minimal size is
502423144 [38]


Why this happened? What I'm doing wrong?

If it does matter - CT was:
vzctl set ${ve} --diskspace  40G --quotatime 600 --save


[Just a side note -- quotatime doesn't make sense for ploop,
as it's only related to vzquota. Harmless but totally useless.]


vzctl set ${ve} --diskinodes 10:10 --save


 This ^^

While with simfs the --diskinodes option is just a limit and can be set 
arbitrarily high, with ploop it affects the actual file system being created. 
By default, ext4 allocates 1 (one) inode per 16 KB of data (which is 
practically the same as assuming that the average file size will be 16 KB).

Now, let's do some simple math. 40GB of disk space means 2621440 inodes
(40 * 1024 * 1024 / 16). Apparently you think it won't be enough for you and
set diskinodes to 1,000,000,000, i.e 1G (which by the way means you assume
the average file size inside containers will be 40 bytes).
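
For comparison, keeping the default 16 KB-per-inode ratio for a 40 GB container
would mean asking for something like this (CTID is hypothetical; as noted below,
diskinodes for ploop only takes effect at create/convert time):

echo $((40 * 1024 * 1024 / 16))                     # 2621440
vzctl set 101 --diskinodes 2621440:2621440 --save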

Previously, we didn't support setting diskinodes for ploop, but later we 
found

a way to implement it (NOTE: for vzctl create and vzctl convert only).
The trick we use is that we create a file system big enough to accommodate the
requested number of inodes, and then use ploop resize (in this case 
downsize)

to bring it down to requested amount.

In this case, the 1G inodes requirement leads to the creation of a 16TB filesystem
(remember, 1 inode per 16K). Unfortunately, such a huge FS can't be downsized
to as low as 40G, the minimum seems to be around 240G (values printed in
the error message are in sectors which are 512 bytes each).

Solution: please be reasonable when requesting diskinodes for ploop.

PS: I would be much obliged if you could put this on the OpenVZ wiki.


and there are about 20Gb of files inside this CT.

2.6.32-042stab093.5 and latest ploop.




___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users


Re: [Users] mount ploop image from read-only fs

2014-10-24 Thread Kir Kolyshkin

Roman,

Sorry for hijacking the thread, but back to the original problem.
Can you tell why vzctl snapshot-mount (or ploop snapshot-mount)
is/was not working for you? Ideally, please provide a detailed scenario.

Kir.
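
For reference, mounting a snapshot normally looks something like this (CTID and
target directory are examples; the UUID is the one printed when the snapshot is
taken):

vzctl snapshot 991 --skip-suspend
vzctl snapshot-mount 991 --id <uuid> --target /mnt/ct991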


On 10/24/2014 05:25 AM, Roman Haefeli wrote:

On Mon, 2014-09-15 at 14:49 +0400, Pavel Odintsov wrote:

Hello!

I found bug! Thx Maxim Patlasov for helping with ploop v1 BAT format.

Please check version from git and it support ploop v1 and v2 correctly :)

It seems, it's not yet working properly for me. I can mount the ploop
image, I can mount its filesystem, I can browse the folder structure and
everything seems fine, but when I try to read any text file from etc/ or
var/log/ I only see garbage or content that certainly belongs to a
different file. Something with alignment seems still not correct.

Please tell me how I can give you more useful information.

Roman


  


On Sun, Sep 14, 2014 at 2:15 AM, Pavel Odintsov
 wrote:

Thank you for the report, it's very useful for the investigation. The only
difference between v1 and v2 is the ploop disk size in the header (32 vs 64 bit),
but I use 64-bit numbers everywhere, so everything should work fine. I suspect
there are alignment issues which are not handled in my tool.


On Friday, September 12, 2014, Roman Haefeli  wrote:

On Fri, 2014-09-12 at 11:15 +0200, Roman Haefeli wrote:

On Fri, 2014-09-12 at 10:56 +0200, Roman Haefeli wrote:

Hi Pavel

I might have some more information on the issue. It seems that only
'old' ploop images cannot be mounted by ploop_userspace. I actually
don't quite know the ploop version I used for creating the 'old' ploop
images,  but I know it works well with images created with ploop v1.6.

Does ploop_userspace know about older image formats?

No, it's also not the version.

Yes, there are different versions... I must have checked on the wrong
machine. ploop_userspace works well with images created by ploop v1.11,
but not with images created by ploop v1.6.

Sorry for the noise.

Roman




On Thu, 2014-08-28 at 22:53 +0400, Pavel Odintsov wrote:

Hello!

No, it does not depend on the kernel version. I created an issue for you and
will try to investigate:
https://github.com/FastVPSEestiOu/ploop_userspace/issues/10 please
track this github issue.

On Thu, Aug 28, 2014 at 6:12 PM, Roman Haefeli 
wrote:

Some more info:

It works on our test cluster where we have
2.6.32-openvz-042stab093.4-amd64 installed. The report from below
is
from a host node running 2.6.32-042stab081.3-amd64.

Is ploop_userspace dependent on kernel version?

Roman


On Thu, 2014-08-28 at 15:59 +0200, Roman Haefeli wrote:

Hi Pavel

Your tool comes in handy. That is exactly what we'd need.
However, I had
troubles using it. I did:

$ ploop_userspace
/virtual/.snapshot/nightly.0/vz/private/2006/root.hdd/root.hdd

   We process:
/virtual/.snapshot/nightly.0/vz/private/2006/root.hdd/root.hdd
   Ploop file size is: 4193255424
   version: 1 disk type: 2 heads count: 16 cylinder count: 81920
sector count: 2048 size in tracks: 20480 size in sectors: 41943040 disk in
use: 1953459801 first block offset: 2048 flags: 0
   For storing 21474836480 bytes on disk we need 20480 ploop
blocks
   We have 1 BAT blocks
   We have 262128 slots in 1 map
   Number of non zero blocks in map: 3998
   Please be careful because this disk used now! If you need
consistent backup please stop VE
   !!!ERROR!!! We can't found GPT table on this disk
   !!!ERROR!!! We can't find ext4 signature
   Set device /dev/nbd0 as read only
   Try to found partitions on ploop device
   First ploop partition was not detected properly, please call
partx/partprobe manually
   You could mount ploop filesystem with command: mount -r -o
noload /dev/nbd0p1 /mnt


Despite the errors, I tried to mount the ploop-partition:

$ mount -r  -o noload /dev/nbd0p1 /mnt/

and got:

   mount: special device /dev/nbd0p1 does not exist

Apparently, ploop_userspace wasn't able to read the GPT partition
table.

Tell me, if you need further information.

Thanks,
Roman








On Tue, 2014-08-19 at 12:48 +0400, Pavel Odintsov wrote:

Hello!

You can mount ploop from RO root.hdd images with my tool:
https://github.com/FastVPSEestiOu/ploop_userspace but it's not
stable
now. You can try it and provide feedback.

On Tue, Aug 19, 2014 at 12:24 PM, Roman Haefeli
 wrote:

Hi all

At the university where I work, we plan to switch all containers
from simfs to
ploop images in the long run. Despite the many advantages of
using
ploop, there is one major drawback that keeps us from
switching
production already now: We can't mount ploop images from
read-only
snapshots. In case of a recovery of a single file, we have to
copy the
ploop image from the read-only snapshot to some read-write
storage in
order to be able to mount it and extract the file. For CTs
with huge
ploop-images this is a big hurdle.

Wouldn't it be possible to add some flag to the 'ploop'
utility to
allow mounting ploop images from read-only storage (by
bypassing some
checks or skipping to set the dirt

Re: [Users] FPU state size unsupported

2014-10-24 Thread Kir Kolyshkin
These two CPUs are not compatible, you can't migrate between them. One is
older, the other is newer (supports XSAVE).
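
A quick way to compare the two nodes is to check the CPU flags on each (xsave
being the relevant one here):

grep -qw xsave /proc/cpuinfo && echo "xsave supported" || echo "no xsave"
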
On Oct 24, 2014 5:50 AM, "Dmitrijs Jerihovs"  wrote:

> Hello,
>
> I receive an error using live migration: *FPU state size unsupported: 832
> (current: 512)*. Does someone have a solution to bypass it, or do I need to
> change the CPU?
>
> Command: vzmigrate -v -r no --live --nodeps=cpu,ipv6  10.10.1.1 8000
> Migration From CPU: i7-4770 To i7 CPU 920
> Kernel: 2.6.32-042stab094.6
> Vzctl: 4.8
>
> ___
> Users mailing list
> Users@openvz.org
> https://lists.openvz.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@openvz.org
https://lists.openvz.org/mailman/listinfo/users

