The following pull request was submitted through Github.
It can be accessed and reviewed at: https://github.com/lxc/linuxcontainers.org/pull/374

This e-mail was sent by the LXC bot; direct replies will not reach the author
unless they happen to be subscribed to this list.

=== Description (from pull-request) ===

From 6b2811800e94930943ae0f581cf3c106375606c3 Mon Sep 17 00:00:00 2001
From: KATOH Yasufumi <ka...@jazz.email.ne.jp>
Date: Wed, 17 Jul 2019 19:37:27 +0900
Subject: [PATCH 1/2] Add Japanese release announcement of LXD 3.15

Signed-off-by: KATOH Yasufumi <ka...@jazz.email.ne.jp>
---
 content/lxd/news.ja/lxd-3.15.yaml | 1146 +++++++++++++++++++++++++++++
 1 file changed, 1146 insertions(+)
 create mode 100644 content/lxd/news.ja/lxd-3.15.yaml

diff --git a/content/lxd/news.ja/lxd-3.15.yaml b/content/lxd/news.ja/lxd-3.15.yaml
new file mode 100644
index 0000000..4ed8ca9
--- /dev/null
+++ b/content/lxd/news.ja/lxd-3.15.yaml
@@ -0,0 +1,1146 @@
+title: LXD 3.15 リリースのお知らせ
+date: 2019/07/11 07:07
+origin: https://discuss.linuxcontainers.org/t/lxd-3-15-has-been-released/5218
+content: |-
+  ### はじめに <!-- Introduction -->
+  <!--
+  The LXD team is very excited to announce the release of LXD 3.15!
+  -->
+  LXD チームは、LXD 3.15 のリリースをお知らせすることにとてもワクワクしています!
+
+  <!--
+  This release both includes a number of major new features as well as some significant internal rework of various parts of LXD.
+  -->
+  このリリースには、たくさんの重要な新機能と、LXD の色々な部分に渡る重要な内部的な実装の変更が含まれています。
+
+  <!--
+  One big highlight is the transition to the dqlite 1.0 branch which will bring us more performance and reliability, both for our cluster users and for standalone installations. This rework moves a lot of the low-level database/replication logic to dedicated C libraries and significantly reduces the amount of back and forth going on between C and Go.
+  -->
+  大きなハイライトのひとつは、dqlite 1.0 ブランチへの移行で、クラスターとスタンドアローンユーザーの両方に、パフォーマンスと信頼性の向上をもたらすでしょう。この変更により、低レベルのデーターベース・レプリケーションロジックの多くが専用の C ライブラリに移動し、C と Go の間で行われるやりとりの量が大幅に削減されます。
+
+  <!--
+  On the networking front, this release features a lot of improvements, adding support for IPv4/IPv6 filtering on bridges, MAC and VLAN filtering on SR-IOV devices and much improved DHCP server management.
+  -->
+  ネットワーク面では、このリリースには、ブリッジでの IPv4/IPv6 フィルタリングサポートの追加、SR-IOV デバイスでの MAC と VLAN のフィルタリング、DHCP サーバ管理の大幅な改善など、多くの改良が含まれています。
+
+  <!--
+  We're also debuting a new version of our resources API which will now provide details on network devices and storage disks on top of extending our existing CPU, memory and GPU reporting.
+  -->
+  また、既存の CPU、メモリ、GPU のレポート機能に加えて、ネットワークデバイスやストレージディスクの詳細を提供するリソース API の新バージョンを公開します。
+
+  <!--
+  And that's all before looking into the many other performance improvements, smaller features and bugfixes that went into this release.
+  -->
+  そしてこれらは、このリリースに含まれるその他の多数のパフォーマンスの改良、小さな機能追加、バグフィックスを見る前の話にすぎません。
+
+  <!--
+  For our Windows users, this is also the first LXD release to be available through the [Chocolatey](https://chocolatey.org) package manager: `choco install lxc`
+  -->
+  Windows ユーザにとっては、[Chocolatey](https://chocolatey.org) パッケージマネージャー経由で入手できる最初の LXD リリースでもあります: `choco install lxc`
+
+  Enjoy!
+
+  ### 主要な改良点 <!-- Major improvements -->
+  #### dqlite 1.0 への変更 <!-- Switch to dqlite 1.0 -->
+  <!--
+  After over a year of running all LXD servers on the original implementation of our distributed sqlite database, it's finally time for LXD to switch to its 1.0 branch.
+  -->
+  分散 SQLite データーベースの元の実装ですべての LXD サーバが稼働して 1 年以上経ちました。ついに LXD はその 1.0 ブランチに移行します。
+
+  <!--
+  This doesn't come with any immediately noticeable improvements for the user, but reduces the number of external dependencies, CPU usage and memory usage for the database. It will also make it significantly easier for us to debug issues and better integrate with more complex database operations when running clusters.
+  -->
+  これはユーザーにとってすぐに目に見える改善ではありませんが、外部依存性の数、データーベースの CPU とメモリ使用量を減少させます。また、クラスター実行時の問題のデバッグ、より複雑なデーターベース操作との統合も大幅に容易になります。
+
+  <!--
+  Upon upgrading to LXD 3.15, the on-disk database format will change, getting automatically converted following an automated backup. For cluster users, the protocol used for database queries between cluster nodes is also changing, which will cause all cluster nodes to refresh at the same time so they all transition to the new database.
+  -->
+  LXD 3.15 にアップグレードすると、ディスク上のデーターベースは自動バックアップの後に自動的に変換され、フォーマットが変更されます。クラスターユーザーの場合は、クラスターノード間のデーターベースクエリーに使われるプロトコルも変わります。このため、クラスターノードすべてが同時に更新され、新しいデーターベースに移行します。
+
+  #### DHCP リース処理の変更 <!-- Reworked DHCP lease handling -->
+  <!--
+  In the past, LXD's handling of DHCP was pretty limited. We would write static lease entries to the configuration and then kick dnsmasq to read it. For changes and deletions of static leases, we'd need to completely restart the dnsmasq process which was rather costly.
+  -->
+  これまで、LXD の DHCP 処理は非常に限定的でした。静的なリースのエントリを設定に書き、dnsmasq を実行してそれを読み取らせていました。静的なリースの変更や削除を行うためには、かなりコストのかかる dnsmasq プロセスの完全な再起動が必要でした。
+
+  <!--
+  LXD 3.15 changes that by instead having LXD itself issue DHCP requests to the dnsmasq server based on what's currently in the DHCP lease table. This can be used to manually release a lease when a container's configuration is altered or a container is deleted, all without ever needing to restart dnsmasq.
+  -->
+  LXD 3.15 ではこの代わりに、現在の DHCP リーステーブルの内容に基づいて、LXD 自身が DHCP リクエストを dnsmasq サーバーに投げます。これは、dnsmasq を再起動する必要なしに、コンテナの設定が変更されたときや、コンテナが削除されたときに、手動でリースを解放するのに使えます。
+
+  #### クラスターのハートビート処理の変更 <!-- Reworked cluster heartbeat handling -->
+  <!--
+  In the past, the cluster leader would send a message to all cluster members on a 10s cadence, spreading those heartbeats over time. The heartbeat data itself was just the list of database nodes so that all cluster members would know where to send database queries.
+  -->
+  これまで、クラスターリーダーは 10 秒間隔で全クラスターメンバーにメッセージを送り、時間とともにこれらのハートビートを拡散していました。ハートビートデーター自体は単なるデーターベースノードのリストであるため、全クラスターメンバーはデーターベースクエリーの送り先を認識できるようになっていました。
+
+  <!--
+  Separately from that mechanism, we then had background tasks on all cluster members which would periodically look for version mismatches between members to detect pending updates and another task to detect changes in the list of members or in their IP addresses to re-configure clustered DNS.
+  -->
+  このメカニズムとは別に、全クラスターメンバーがバックグラウンドタスクを持ち、保留中の更新を検出するためにメンバー間のバージョンのミスマッチを定期的に探したり、クラスター化 DNS の再設定のためにメンバーリストや IP アドレスの変更を検出したりしていました。
+
+  <!--
+  For large size clusters, those repetitive tasks ended up being rather costly and also un-needed.
+  -->
+  大きなクラスターでは、これらの繰り返し行うタスクはコストが増大したり、不要なものだったりしました。
+
+  <!--
+  LXD 3.15 now extends this internal heartbeat to include the most recent 
version information from the cluster as well as the status of all cluster 
members, not just the database ones. This means that only the cluster leader 
needs to retrieve that data and all other members will now have a consistent 
view of everything within 10s rather than potentially several minutes (as was 
the case for the update check).
+  -->
+  LXD 3.15 では、この内部ハートビートを拡張し、データーベースメンバーだけでなく、クラスターからの最新のバージョン情報と、全クラスターメンバーのステータスも含めるようにしました。これは、クラスターリーダーだけがそのデータを取得すればよく、他の全メンバーは(更新チェックの場合にそうであったように)数分かかる可能性があるのではなく、10 秒以内にすべての一貫したビューを持てることを意味します。
+
+  #### より良いシステムコールインターセプションフレームワーク <!-- Better syscall interception framework -->
+  <!--
+  Quite a bit of work has gone into the syscall interception feature of LXD. Currently this covers `mknod` and `mknodat` for systems that run a 5.0+ kernel along with a git snapshot of both liblxc and libseccomp.
+  -->
+  LXD のシステムコールインターセプション機能では多くの作業が行われています。現在、liblxc と libseccomp 両方の git スナップショットと 5.0 以上のカーネルで実行しているシステムでは、`mknod` と `mknodat` をカバーしています。
+
+  <!--
+  The changes involve a switch of API with liblxc ahead of the LXC 3.2 release 
as well as fixing handling of shiftfs backed containers and cleaning common 
logic to make it easier to intercept additional syscalls in the near future.
+  -->
+  この変更には、LXC 3.2 リリースに先立つ liblxc の API の切り替えに加えて、ShiftFS 上で動くコンテナの処理の修正や、近い将来に追加のシステムコールをより簡単にインターセプトできるようにするための共通ロジックの整理が含まれています。
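+
+  たとえば、5.0 以降のカーネルと対応する liblxc/libseccomp がそろった環境であれば、次のような設定でコンテナごとにインターセプトを有効化できるはずです(設定キー名やコンテナ名は一例・想定であり、実際に使う際は LXD のドキュメントで確認してください):
+
+      # mknod / mknodat のインターセプトを有効にする(キー名は想定)
+      root@host:~# lxc config set c1 security.syscalls.intercept.mknod true
+      root@host:~# lxc restart c1
+      # 非特権コンテナ内でも単純なデバイスノードの作成ができるようになる
+      root@host:~# lxc exec c1 -- mknod /root/null0 c 1 3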
+
+  #### より信頼性の高い UNIX ソケットプロキシ <!-- More reliable unix socket proxying -->
+  <!--
+  A hard to track down bug in the `proxy` device code was resolved which will now properly handle unix socket forwarding. This was related to end of connection detection and forwarding of the disconnection event.
+  -->
+  `proxy` デバイスのコードにあった追跡困難なバグが修正され、UNIX ソケットの転送が適切に処理されるようになりました。これは、接続終了の検出と切断イベントの転送に関係していました。
+
+  <!--
+  Users of the `proxy` device for X11 and/or pulseaudio may in the past have noticed windows that won't close on exit or the sudden inability to start new software using that unix socket. This has now been resolved and so should make the life of those running graphical applications in LXD much easier.
+  -->
+  X11 や pulseaudio に対する `proxy` デバイスのユーザーは、過去に終了時に閉じないウィンドウや、その UNIX ソケットを使った新しいソフトウェアが起動できなくなることに気づいたかもしれません。この問題は解決したので、LXD でグラフィカルアプリケーションを実行する人たちの作業がずっと楽になるはずです。
+
+  ### 新機能 <!-- New features -->
+  #### SR-IOV 上のハードウェア VLAN, MAC フィルタリング <!-- Hardware VLAN and MAC filtering on SR-IOV -->
+  <!--
+  The `security.mac_filtering` and `vlan` properties are now available to SR-IOV devices. This directly controls the matching SR-IOV options on the virtual function and so will completely prevent any MAC spoofing from the container or in the case of VLANs will perform hardware filtering at the VF level.
+  -->
+  `security.mac_filtering` と `vlan` プロパティが SR-IOV デバイス上で指定できるようになりました。これは、SR-IOV の Virtual Function(VF) 上の対応するオプションを直接コントロールするため、コンテナからの MAC スプーフィングを完全に防ぎます。VLAN の場合は、VF レベルでハードウェアフィルタリングを実行します。
+
+      root@athos:~# lxc init ubuntu:18.04 c1
+      Creating c1
+      root@athos:~# lxc config device add c1 eth0 nic nictype=sriov parent=eth0 vlan=1015 security.mac_filtering=true
+      Device eth0 added to c1
+      root@athos:~# lxc start c1
+      root@athos:~# lxc list c1
+      +------+---------+------+-----------------------------------------------+------------+-----------+
+      | NAME |  STATE  | IPV4 |                     IPV6                      |    TYPE    | SNAPSHOTS |
+      +------+---------+------+-----------------------------------------------+------------+-----------+
+      | c1   | RUNNING |      | 2001:470:b0f8:1015:7010:a0ff:feca:e7e1 (eth0) | PERSISTENT | 0         |
+      +------+---------+------+-----------------------------------------------+------------+-----------+
+
+  #### `lxd-p2c` に新たに `storage-size` オプションを追加 <!-- New `storage-size` option for `lxd-p2c` -->
+  <!--
+  A new `--storage-size` option has been added which when used together with `--storage` allows specifying the desired volume size to use for the container.
+  -->
+  `--storage-size` オプションが追加されました。これは `--storage` オプションと一緒に使うと、コンテナが使うボリュームサイズを指定できます。
+
+      root@mosaic:~# ./lxd-p2c 10.166.11.1 p2c / --storage btrfs --storage-size 10GB
+      Generating a temporary client certificate. This may take a minute...
+      Certificate fingerprint: fd200419b271f1dc2a5591b693cc5774b7f234e1ff8c6b78ad703b6888fe2b69
+      ok (y/n)? y
+      Admin password for https://10.166.11.1:8443: 
+      Container p2c successfully created                
+      
+      stgraber@castiana:~/data/code/go/src/github.com/lxc/lxd (lxc/master)$ lxc config show p2c
+      architecture: x86_64
+      config:
+        volatile.apply_template: copy
+        volatile.eth0.hwaddr: 00:16:3e:12:39:c8
+        volatile.idmap.base: "0"
+        volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
+        volatile.last_state.idmap: '[]'
+      devices:
+        root:
+          path: /
+          pool: btrfs
+          size: 10GB
+          type: disk
+      ephemeral: false
+      profiles:
+      - default
+      stateful: false
+      description: ""
+
+  #### カスタムボリュームに対する Ceph FS ストレージバックエンド <!-- Ceph FS storage backend for custom volumes -->
+  <!--
+  Ceph FS was added as a storage driver for LXD. Support is limited to custom storage volumes though, containers will not be allowed on Ceph FS and it's indeed recommended to use Ceph RBD for them.
+  -->
+  Ceph FS が LXD のストレージドライバとして追加されました。使用はカスタムストレージボリュームに限定されていますので、Ceph FS 上にコンテナ置くことはできません。コンテナには Ceph RBD を使うことをおすすめします。
+
+  <!--
+  Ceph FS support includes size restrictions (quota) and native snapshot supports when the server, server configuration and client kernel support those features.
+  -->
+  Ceph FS では、サーバー、サーバーの設定、クライアントのカーネルがサポートする場合は、サイズ制限(quota)とネイティブスナップショットサポートが使えます。
+
+  <!--
+  This is a perfect match for users of LXD clustering with Ceph as Ceph FS will allow you to attach the same custom volume to multiple containers at the same time, even if they're located on different hosts (which isn't the case for RBD).
+  -->
+  Ceph FS は、異なるホストにコンテナが配置されていても、同じカスタムボリュームを複数のコンテナに同時にアタッチできますので、Ceph を使っている LXD クラスターユーザーには最適です(RBD の場合はそうではありません)。
+
+      stgraber@castiana:~$ lxc storage create test cephfs source=persist-cephfs/castiana
+      Storage pool test created
+      stgraber@castiana:~$ lxc storage volume create test my-volume
+      Storage volume my-volume created
+      stgraber@castiana:~$ lxc storage volume attach test my-volume c1 data /data
+      
+      stgraber@castiana:~$ lxc exec c1 -- df -h
+      Filesystem                                               Size  Used Avail Use% Mounted on
+      /var/lib/lxd/storage-pools/default/containers/c1/rootfs  142G  420M  141G   1% /
+      none                                                     492K  4.0K  488K   1% /dev
+      udev                                                     7.7G     0  7.7G   0% /dev/tty
+      tmpfs                                                    100K     0  100K   0% /dev/lxd
+      tmpfs                                                    100K     0  100K   0% /dev/.lxd-mounts
+      tmpfs                                                    7.8G     0  7.8G   0% /dev/shm
+      tmpfs                                                    7.8G  156K  7.8G   1% /run
+      tmpfs                                                    5.0M     0  5.0M   0% /run/lock
+      tmpfs                                                    7.8G     0  7.8G   0% /sys/fs/cgroup
+      [2001:470:b0f8:1015:5054:ff:fe5e:ea44]:6789:/castiana     47G     0   47G   0% /data
+
+  #### IPv4, IPv6 フィルタリング(スプーフィング防止)<!-- IPv4 and IPv6 filtering (spoof protection) -->
+  <!--
+  One frequently requested feature is to extend our spoofing protection beyond just MAC spoofing, doing proper IPv4 and IPv6 filtering too.
+  -->
+  頻繁に要求されていた機能のひとつは、スプーフィング防止を MAC スプーフィングだけにとどめず、適切な IPv4 と IPv6 のフィルタリングも行うように拡張することです。
+  
+  <!--
+  This effectively allows multiple containers to share the same underlying bridge without having concerns about root in one of those containers being able to spoof the address of another, hijacking traffic or causing connectivity issues.
+  -->
+  これにより、あるコンテナの root が他のコンテナのアドレスを詐称したり、トラフィックをハイジャックしたり、接続の問題を引き起こしたりする心配なく、複数のコンテナが同じブリッジを共有できます。
+
+  <!--
+  To prevent a container from being able to spoof the MAC or IP of any other container, you can now set the following properties on the `nic` device:
+  -->
+  他のコンテナの MAC アドレス、IP アドレスを詐称させないように、次のようなプロパティを `nic` デバイスに設定できます:
+
+   - security.mac_filtering=true
+   - security.ipv4_filtering=true
+   - security.ipv6_filtering=true
+
+  <!--
+  **NOTE**: Setting those will prevent any internal bridging/nesting inside that container as those rely on multiple MAC addresses being used for a single container.
+  -->
+  **注意**: これらを設定すると、単一のコンテナで複数の MAC アドレスを使うことを前提とする、コンテナ内部でのブリッジングやネスティングはできなくなります。
+
+      stgraber@castiana:~$ lxc config device add c1 eth0 nic nictype=bridged name=eth0 parent=lxdbr0 security.mac_filtering=true security.ipv4_filtering=true security.ipv6_filtering=true
+      Device eth0 added to c1
+      stgraber@castiana:~$ lxc start c1
+      stgraber@castiana:~$ lxc list c1
+      +------+---------+----------------------+----------------------------------------------+------------+-----------+
+      | NAME |  STATE  |         IPV4         |                     IPV6                     |    TYPE    | SNAPSHOTS |
+      +------+---------+----------------------+----------------------------------------------+------------+-----------+
+      | c1   | RUNNING | 10.166.11.178 (eth0) | 2001:470:b368:4242:216:3eff:fefa:e5f8 (eth0) | PERSISTENT | 0         |
+      +------+---------+----------------------+----------------------------------------------+------------+-----------+
+
+  #### リソース API の変更(ホストハードウェア) <!-- Reworked resources API (host hardware) -->
+  <!--
+  The resources API (/1.0/resources) has seen a lot of improvements as well as a re-design of the existing bits. Some of the changes include:
+  -->
+  リソース API(/1.0/resources)に多数の改良と既存のものの再設計を行いました。変更点は次の通りです:
+
+   - CPU
+     - NUMA ノードの報告の改良(コアごとになりました) <!-- Improved reporting of NUMA nodes (now per-core) -->
+     - 周波数レポートの改良(最小、現在値、ターボ周波数)<!-- Improved reporting of frequencies (minimum, current and turbo) -->
+     - キャッシュ情報のレポートの追加 <!-- Added cache information reporting -->
+     - すべてのコア・スレッドトポロジーの追加 <!-- Added full core/thread topology -->
+     - ID の追加(pinning に使用)<!-- Added ID (to use for pinning) -->
+     - アーキテクチャー名の追加 <!-- Added architecture name -->
+   - メモリー <!-- Memory -->
+     - NUMA ノードのレポートの追加 <!-- Added NUMA node reporting -->
+     - Hugepage トラッキングの追加 <!-- Added hugepages tracking -->
+   - GPU
+     - DRM 情報用のサブセクションの追加 <!-- Added sub-section for DRM information -->
+     - DRM ドライバーにバインドされていないカードの検出をするようになりました <!-- Now detecting cards which aren't bound to a DRM driver -->
+     - GPU SR-IOV レポートのサポート <!-- Support for GPU SR-IOV reporting -->
+   - NIC
+     - イーサネットとインフィニバンドカードのレポートの追加 <!-- Added reporting of ethernet & infiniband cards -->
+     - SR-IOV のサポート <!-- Support for SR-IOV -->
+     - ポートごとのリンク情報 <!-- Per-port link information -->
+   - ディスク <!-- Disks -->
+     - ディスクレポートの追加 <!-- Added support for disk reporting -->
+     - バスタイプのレポート <!-- Bus type reporting -->
+     - パーティションリスト <!-- Partition list -->
+     - ディスク識別子(ベンダー、WWN、...) <!-- Disk identifiers (vendor, WWN, ...) -->
+
+  <!--
+  The `lxc info --resources` command was updated to match.
+  -->
+  これに合わせて `lxc info --resources` コマンドを更新しました。
+
+  <!--
+  **NOTE**: This version of the resources API isn't compatible with the previous one. The data structures had to change to properly handle more complex CPU topologies (like AMD Epyc) and couldn't be done in a properly backward compatible way. As a result, the command line client will detect the `resources_v2` API and fail for servers which do not support it.
+  -->
+  **注意**: このバージョンのリソース API は前のバージョンと互換性がありません。より複雑な(AMD Epyc のような)CPU トポロジーを適切に扱うためにデータ構造を変更しなければならず、適切に後方互換性を保つようには行えませんでした。その結果、コマンドラインクライアントは `resources_v2` API を検出し、それをサポートしないサーバーでは失敗します。
+
+
+      root@athos:~# lxc info --resources
+      CPUs (x86_64):
+        Socket 0:
+          Vendor: GenuineIntel
+          Name: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
+          Caches:
+            - Level 1 (type: Data): 33kB
+            - Level 1 (type: Instruction): 33kB
+            - Level 2 (type: Unified): 262kB
+            - Level 3 (type: Unified): 31MB
+          Cores:
+            - Core 0
+              Frequency: 2814Mhz
+              NUMA node: 0
+              Threads:
+                - 0 (id: 0, online: true)
+                - 1 (id: 24, online: true)
+            - Core 1
+              Frequency: 2800Mhz
+              NUMA node: 0
+              Threads:
+                - 0 (id: 1, online: true)
+                - 1 (id: 25, online: true)
+            - Core 2
+              Frequency: 2652Mhz
+              NUMA node: 0
+              Threads:
+                - 0 (id: 2, online: true)
+                - 1 (id: 26, online: true)
+            - Core 3
+              Frequency: 2840Mhz
+              NUMA node: 0
+              Threads:
+                - 0 (id: 27, online: true)
+                - 1 (id: 3, online: true)
+            - Core 4
+              Frequency: 2613Mhz
+              NUMA node: 0
+              Threads:
+                - 0 (id: 28, online: true)
+                - 1 (id: 4, online: true)
+            - Core 5
+              Frequency: 2811Mhz
+              NUMA node: 0
+              Threads:
+                - 0 (id: 29, online: true)
+                - 1 (id: 5, online: true)
+            - Core 8
+              Frequency: 2710Mhz
+              NUMA node: 0
+              Threads:
+                - 0 (id: 30, online: true)
+                - 1 (id: 6, online: true)
+            - Core 9
+              Frequency: 2807Mhz
+              NUMA node: 0
+              Threads:
+                - 0 (id: 31, online: true)
+                - 1 (id: 7, online: true)
+            - Core 10
+              Frequency: 2805Mhz
+              NUMA node: 0
+              Threads:
+                - 0 (id: 32, online: true)
+                - 1 (id: 8, online: true)
+            - Core 11
+              Frequency: 2874Mhz
+              NUMA node: 0
+              Threads:
+                - 0 (id: 33, online: true)
+                - 1 (id: 9, online: true)
+            - Core 12
+              Frequency: 2936Mhz
+              NUMA node: 0
+              Threads:
+                - 0 (id: 10, online: true)
+                - 1 (id: 34, online: true)
+            - Core 13
+              Frequency: 2819Mhz
+              NUMA node: 0
+              Threads:
+                - 0 (id: 11, online: true)
+                - 1 (id: 35, online: true)
+          Frequency: 2790Mhz (min: 1200Mhz, max: 3200Mhz)
+        Socket 1:
+          Vendor: GenuineIntel
+          Name: Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz
+          Caches:
+            - Level 1 (type: Data): 33kB
+            - Level 1 (type: Instruction): 33kB
+            - Level 2 (type: Unified): 262kB
+            - Level 3 (type: Unified): 31MB
+          Cores:
+            - Core 0
+              Frequency: 1762Mhz
+              NUMA node: 1
+              Threads:
+                - 0 (id: 12, online: true)
+                - 1 (id: 36, online: true)
+            - Core 1
+              Frequency: 2440Mhz
+              NUMA node: 1
+              Threads:
+                - 0 (id: 13, online: true)
+                - 1 (id: 37, online: true)
+            - Core 2
+              Frequency: 1845Mhz
+              NUMA node: 1
+              Threads:
+                - 0 (id: 14, online: true)
+                - 1 (id: 38, online: true)
+            - Core 3
+              Frequency: 2899Mhz
+              NUMA node: 1
+              Threads:
+                - 0 (id: 15, online: true)
+                - 1 (id: 39, online: true)
+            - Core 4
+              Frequency: 2727Mhz
+              NUMA node: 1
+              Threads:
+                - 0 (id: 16, online: true)
+                - 1 (id: 40, online: true)
+            - Core 5
+              Frequency: 2345Mhz
+              NUMA node: 1
+              Threads:
+                - 0 (id: 17, online: true)
+                - 1 (id: 41, online: true)
+            - Core 8
+              Frequency: 1931Mhz
+              NUMA node: 1
+              Threads:
+                - 0 (id: 18, online: true)
+                - 1 (id: 42, online: true)
+            - Core 9
+              Frequency: 1959Mhz
+              NUMA node: 1
+              Threads:
+                - 0 (id: 19, online: true)
+                - 1 (id: 43, online: true)
+            - Core 10
+              Frequency: 2137Mhz
+              NUMA node: 1
+              Threads:
+                - 0 (id: 20, online: true)
+                - 1 (id: 44, online: true)
+            - Core 11
+              Frequency: 3065Mhz
+              NUMA node: 1
+              Threads:
+                - 0 (id: 21, online: true)
+                - 1 (id: 45, online: true)
+            - Core 12
+              Frequency: 2603Mhz
+              NUMA node: 1
+              Threads:
+                - 0 (id: 22, online: true)
+                - 1 (id: 46, online: true)
+            - Core 13
+              Frequency: 2543Mhz
+              NUMA node: 1
+              Threads:
+                - 0 (id: 23, online: true)
+                - 1 (id: 47, online: true)
+          Frequency: 2354Mhz (min: 1200Mhz, max: 3200Mhz)
+      
+      Memory:
+        Hugepages:
+          Free: 0B
+          Used: 171.80GB
+          Total: 171.80GB
+        NUMA nodes:
+          Node 0:
+            Hugepages:
+              Free: 0B
+              Used: 85.90GB
+              Total: 85.90GB
+            Free: 119.93GB
+            Used: 150.59GB
+            Total: 270.52GB
+          Node 1:
+            Hugepages:
+              Free: 0B
+              Used: 85.90GB
+              Total: 85.90GB
+            Free: 127.28GB
+            Used: 143.30GB
+            Total: 270.58GB
+        Free: 250.14GB
+        Used: 290.96GB
+        Total: 541.10GB
+      
+      GPUs:
+        Card 0:
+          NUMA node: 0
+          Vendor: Matrox Electronics Systems Ltd. (102b)
+          Product: MGA G200eW WPCM450 (0532)
+          PCI address: 0000:08:03.0
+          Driver: mgag200 (5.0.0-20-generic)
+          DRM:
+            ID: 0
+            Card: card0 (226:0)
+            Control: controlD64 (226:0)
+        Card 1:
+          NUMA node: 1
+          Vendor: NVIDIA Corporation (10de)
+          Product: GK208B [GeForce GT 730] (1287)
+          PCI address: 0000:82:00.0
+          Driver: vfio-pci (0.2)
+        Card 2:
+          NUMA node: 1
+          Vendor: NVIDIA Corporation (10de)
+          Product: GK208B [GeForce GT 730] (1287)
+          PCI address: 0000:83:00.0
+          Driver: vfio-pci (0.2)
+      
+      NICs:
+        Card 0:
+          NUMA node: 0
+          Vendor: Intel Corporation (8086)
+          Product: I350 Gigabit Network Connection (1521)
+          PCI address: 0000:02:00.0
+          Driver: igb (5.4.0-k)
+          Ports:
+            - Port 0 (ethernet)
+              ID: eth0
+              Address: 00:25:90:ef:ff:31
+              Supported modes: 10baseT/Half, 10baseT/Full, 100baseT/Half, 100baseT/Full, 1000baseT/Full
+              Supported ports: twisted pair
+              Port type: twisted pair
+              Transceiver type: internal
+              Auto negotiation: true
+              Link detected: true
+              Link speed: 1000Mbit/s (full duplex)
+          SR-IOV information:
+            Current number of VFs: 7
+            Maximum number of VFs: 7
+            VFs: 7
+            - NUMA node: 0
+              Vendor: Intel Corporation (8086)
+              Product: I350 Ethernet Controller Virtual Function (1520)
+              PCI address: 0000:02:10.0
+              Driver: igbvf (2.4.0-k)
+              Ports:
+                - Port 0 (ethernet)
+                  ID: enp2s16
+                  Address: 72:10:a0:ca:e7:e1
+                  Auto negotiation: false
+                  Link detected: false
+            - NUMA node: 0
+              Vendor: Intel Corporation (8086)
+              Product: I350 Ethernet Controller Virtual Function (1520)
+              PCI address: 0000:02:10.4
+              Driver: igbvf (2.4.0-k)
+              Ports:
+                - Port 0 (ethernet)
+                  ID: enp2s16f4
+                  Address: 3e:fa:1d:b2:17:5e
+                  Auto negotiation: false
+                  Link detected: false
+            - NUMA node: 0
+              Vendor: Intel Corporation (8086)
+              Product: I350 Ethernet Controller Virtual Function (1520)
+              PCI address: 0000:02:11.0
+              Driver: igbvf (2.4.0-k)
+              Ports:
+                - Port 0 (ethernet)
+                  ID: enp2s17
+                  Address: 36:33:bf:74:89:8e
+                  Auto negotiation: false
+                  Link detected: false
+            - NUMA node: 0
+              Vendor: Intel Corporation (8086)
+              Product: I350 Ethernet Controller Virtual Function (1520)
+              PCI address: 0000:02:11.4
+              Driver: igbvf (2.4.0-k)
+              Ports:
+                - Port 0 (ethernet)
+                  ID: enp2s17f4
+                  Address: 86:a4:f0:b5:2f:e1
+                  Auto negotiation: false
+                  Link detected: false
+            - NUMA node: 0
+              Vendor: Intel Corporation (8086)
+              Product: I350 Ethernet Controller Virtual Function (1520)
+              PCI address: 0000:02:12.0
+              Driver: igbvf (2.4.0-k)
+              Ports:
+                - Port 0 (ethernet)
+                  ID: enp2s18
+                  Address: 56:0a:5a:0c:e7:ff
+                  Auto negotiation: false
+                  Link detected: false
+            - NUMA node: 0
+              Vendor: Intel Corporation (8086)
+              Product: I350 Ethernet Controller Virtual Function (1520)
+              PCI address: 0000:02:12.4
+              Driver: igbvf (2.4.0-k)
+              Ports:
+                - Port 0 (ethernet)
+                  ID: enp2s18f4
+                  Address: 0a:a9:b3:21:13:8c
+                  Auto negotiation: false
+                  Link detected: false
+            - NUMA node: 0
+              Vendor: Intel Corporation (8086)
+              Product: I350 Ethernet Controller Virtual Function (1520)
+              PCI address: 0000:02:13.0
+              Driver: igbvf (2.4.0-k)
+              Ports:
+                - Port 0 (ethernet)
+                  ID: enp2s19
+                  Address: ae:1a:db:06:8a:51
+                  Auto negotiation: false
+                  Link detected: false
+        Card 1:
+          NUMA node: 0
+          Vendor: Intel Corporation (8086)
+          Product: I350 Gigabit Network Connection (1521)
+          PCI address: 0000:02:00.1
+          Driver: igb (5.4.0-k)
+          Ports:
+            - Port 0 (ethernet)
+              ID: eth1
+              Address: 00:25:90:ef:ff:31
+              Supported modes: 10baseT/Half, 10baseT/Full, 100baseT/Half, 100baseT/Full, 1000baseT/Full
+              Supported ports: twisted pair
+              Port type: twisted pair
+              Transceiver type: internal
+              Auto negotiation: true
+              Link detected: true
+              Link speed: 1000Mbit/s (full duplex)
+          SR-IOV information:
+            Current number of VFs: 0
+            Maximum number of VFs: 7
+      
+      Disks:
+        Disk 0:
+          NUMA node: 0
+          ID: nvme0n1
+          Device: 259:0
+          Model: INTEL SSDPEKNW020T8
+          Type: nvme
+          Size: 2.05TB
+          WWN: eui.0000000001000000e4d25c8b7c705001
+          Read-Only: false
+          Removable: false
+          Partitions:
+            - Partition 1
+              ID: nvme0n1p1
+              Device: 259:1
+              Read-Only: false
+              Size: 52.43MB
+            - Partition 2
+              ID: nvme0n1p2
+              Device: 259:2
+              Read-Only: false
+              Size: 26.84GB
+            - Partition 3
+              ID: nvme0n1p3
+              Device: 259:3
+              Read-Only: false
+              Size: 8.59GB
+            - Partition 4
+              ID: nvme0n1p4
+              Device: 259:4
+              Read-Only: false
+              Size: 53.69GB
+            - Partition 5
+              ID: nvme0n1p5
+              Device: 259:5
+              Read-Only: false
+              Size: 1.96TB
+        Disk 1:
+          NUMA node: 0
+          ID: nvme1n1
+          Device: 259:6
+          Model: INTEL SSDPEKNW020T8
+          Type: nvme
+          Size: 2.05TB
+          WWN: eui.0000000001000000e4d25cca7c705001
+          Read-Only: false
+          Removable: false
+          Partitions:
+            - Partition 1
+              ID: nvme1n1p1
+              Device: 259:7
+              Read-Only: false
+              Size: 52.43MB
+            - Partition 2
+              ID: nvme1n1p2
+              Device: 259:8
+              Read-Only: false
+              Size: 26.84GB
+            - Partition 3
+              ID: nvme1n1p3
+              Device: 259:9
+              Read-Only: false
+              Size: 8.59GB
+            - Partition 4
+              ID: nvme1n1p4
+              Device: 259:10
+              Read-Only: false
+              Size: 53.69GB
+            - Partition 5
+              ID: nvme1n1p5
+              Device: 259:11
+              Read-Only: false
+              Size: 1.96TB
+        Disk 2:
+          NUMA node: 0
+          ID: sda
+          Device: 8:0
+          Model: WDC WD60EFRX-68M
+          Type: scsi
+          Size: 6.00TB
+          Read-Only: false
+          Removable: false
+          Partitions:
+            - Partition 1
+              ID: sda1
+              Device: 8:1
+              Read-Only: false
+              Size: 6.00TB
+            - Partition 9
+              ID: sda9
+              Device: 8:9
+              Read-Only: false
+              Size: 8.39MB
+        Disk 3:
+          NUMA node: 0
+          ID: sdb
+          Device: 8:16
+          Model: WDC WD60EFRX-68M
+          Type: scsi
+          Size: 6.00TB
+          Read-Only: false
+          Removable: false
+          Partitions:
+            - Partition 1
+              ID: sdb1
+              Device: 8:17
+              Read-Only: false
+              Size: 6.00TB
+            - Partition 9
+              ID: sdb9
+              Device: 8:25
+              Read-Only: false
+              Size: 8.39MB
+        Disk 4:
+          NUMA node: 0
+          ID: sdc
+          Device: 8:32
+          Model: WDC WD60EFRX-68M
+          Type: scsi
+          Size: 6.00TB
+          Read-Only: false
+          Removable: false
+          Partitions:
+            - Partition 1
+              ID: sdc1
+              Device: 8:33
+              Read-Only: false
+              Size: 6.00TB
+            - Partition 9
+              ID: sdc9
+              Device: 8:41
+              Read-Only: false
+              Size: 8.39MB
+        Disk 5:
+          NUMA node: 0
+          ID: sdd
+          Device: 8:48
+          Model: WDC WD60EFRX-68L
+          Type: scsi
+          Size: 6.00TB
+          Read-Only: false
+          Removable: false
+          Partitions:
+            - Partition 1
+              ID: sdd1
+              Device: 8:49
+              Read-Only: false
+              Size: 6.00TB
+            - Partition 9
+              ID: sdd9
+              Device: 8:57
+              Read-Only: false
+              Size: 8.39MB
+        Disk 6:
+          NUMA node: 0
+          ID: sde
+          Device: 8:64
+          Model: CT1000MX500SSD1
+          Type: scsi
+          Size: 1.00TB
+          Read-Only: false
+          Removable: false
+          Partitions:
+            - Partition 1
+              ID: sde1
+              Device: 8:65
+              Read-Only: false
+              Size: 52.43MB
+            - Partition 2
+              ID: sde2
+              Device: 8:66
+              Read-Only: false
+              Size: 1.07GB
+            - Partition 3
+              ID: sde3
+              Device: 8:67
+              Read-Only: false
+              Size: 17.18GB
+            - Partition 4
+              ID: sde4
+              Device: 8:68
+              Read-Only: false
+              Size: 4.29GB
+            - Partition 5
+              ID: sde5
+              Device: 8:69
+              Read-Only: false
+              Size: 977.60GB
+        Disk 7:
+          NUMA node: 0
+          ID: sdf
+          Device: 8:80
+          Model: WDC WD60EFRX-68M
+          Type: scsi
+          Size: 6.00TB
+          Read-Only: false
+          Removable: false
+          Partitions:
+            - Partition 1
+              ID: sdf1
+              Device: 8:81
+              Read-Only: false
+              Size: 6.00TB
+            - Partition 9
+              ID: sdf9
+              Device: 8:89
+              Read-Only: false
+              Size: 8.39MB
+        Disk 8:
+          NUMA node: 0
+          ID: sdg
+          Device: 8:96
+          Model: WDC WD60EFRX-68M
+          Type: scsi
+          Size: 6.00TB
+          Read-Only: false
+          Removable: false
+          Partitions:
+            - Partition 1
+              ID: sdg1
+              Device: 8:97
+              Read-Only: false
+              Size: 6.00TB
+            - Partition 9
+              ID: sdg9
+              Device: 8:105
+              Read-Only: false
+              Size: 8.39MB
+        Disk 9:
+          NUMA node: 0
+          ID: sdh
+          Device: 8:112
+          Model: WDC WD60EFRX-68M
+          Type: scsi
+          Size: 6.00TB
+          Read-Only: false
+          Removable: false
+          Partitions:
+            - Partition 1
+              ID: sdh1
+              Device: 8:113
+              Read-Only: false
+              Size: 6.00TB
+            - Partition 9
+              ID: sdh9
+              Device: 8:121
+              Read-Only: false
+              Size: 8.39MB
+        Disk 10:
+          NUMA node: 0
+          ID: sdi
+          Device: 8:128
+          Model: WDC WD60EFRX-68M
+          Type: scsi
+          Size: 6.00TB
+          Read-Only: false
+          Removable: false
+          Partitions:
+            - Partition 1
+              ID: sdi1
+              Device: 8:129
+              Read-Only: false
+              Size: 6.00TB
+            - Partition 9
+              ID: sdi9
+              Device: 8:137
+              Read-Only: false
+              Size: 8.39MB
+
+
+  #### コマンド実行時の uid、gid、cwd の制御 <!-- Control over uid, gid and cwd during command execution -->
+  <!--
+  It is now possible to specify what user id (uid), group id (gid) or current working directory (cwd) to use for a particular command. Note that user names and group names aren't supported.
+  -->
+  特定のコマンドで使うためにユーザー ID(uid)、グループ ID(gid)、カレントワーキングディレクトリ(cwd)が指定できるようになりました。ユーザー名、グループ名の指定はできませんので注意してください。
+
+      stgraber@castiana:~$ lxc exec c1 --user 1000 --group 1000 --cwd /tmp -- bash
+      ubuntu@c1:/tmp$ id
+      uid=1000(ubuntu) gid=1000(ubuntu) groups=1000(ubuntu)
+      ubuntu@c1:/tmp$ 
+
+  #### `dir` バックエンド上のカスタムストレージボリュームでの quota のサポート <!-- Quota support for custom storage volumes on `dir` backend -->
+  <!--
+  When using a storage pool backed by the `dir` driver and with a source path that supports filesystem project quotas, it is now possible to set disk usage limits on custom volumes.
+  -->
+  `dir` ドライバーを使うストレージプールで、ソースパスのファイルシステムがプロジェクト quota をサポートしている場合に、カスタムボリュームにディスク使用量の制限を設定できるようになりました。
+
+      stgraber@castiana:~$ sudo truncate -s 100G test.img
+      stgraber@castiana:~$ sudo mkfs.ext4 test.img
+      mke2fs 1.45.2 (27-May-2019)
+      Discarding device blocks: done                            
+      Creating filesystem with 26214400 4k blocks and 6553600 inodes
+      Filesystem UUID: 50ee78cb-e4e3-4e09-b38b-3fb06c6740a4
+      Superblock backups stored on blocks: 
+       32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
+       4096000, 7962624, 11239424, 20480000, 23887872
+      
+      Allocating group tables: done                            
+      Writing inode tables: done                            
+      Creating journal (131072 blocks): done
+      Writing superblocks and filesystem accounting information: done   
+      stgraber@castiana:~$ sudo tune2fs -O project -Q prjquota test.img
+      tune2fs 1.45.2 (27-May-2019)
+      stgraber@castiana:~$ sudo mkdir /mnt/test
+      stgraber@castiana:~$ sudo mount -o prjquota test.img /mnt/test
+      stgraber@castiana:~$ sudo rmdir /mnt/test/lost+found
+      
+      stgraber@castiana:~$ lxc storage create dir dir source=/mnt/test
+      Storage pool dir created
+      stgraber@castiana:~$ lxc storage volume create dir blah
+      Storage volume blah created
+      stgraber@castiana:~$ lxc storage volume attach dir blah c1 blah /blah
+      
+      stgraber@castiana:~$ lxc exec c1 -- df -h /blah
+      Filesystem      Size  Used Avail Use% Mounted on
+      /dev/loop32      98G   61M   93G   1% /blah
+      stgraber@castiana:~$ lxc storage volume set dir blah size 10GB
+      stgraber@castiana:~$ lxc exec c1 -- df -h /blah
+      Filesystem      Size  Used Avail Use% Mounted on
+      /dev/loop32     9.4G  4.0K  9.4G   1% /blah
+
+  ### バグ修正(翻訳なし)<!-- Bugs fixed -->
+
+   - client: Move to units package
+   - doc: Fix underscore escaping
+   - doc/devlxd: Fix path to host's communication socket
+   - doc/README: Add basic install instructions
+   - doc/README: Update linker flags
+   - i18n: Update translations from weblate
+   - i18n: Update translation templates
+   - lxc: Fix renaming storage volume snapshots
+   - lxc: Move to units package
+   - lxc/copy: Always strip volatile.last_state.power
+   - lxc/export: Expire the backup after 24 hours
+   - lxd: Better handle bad commands
+   - lxd: Fix renaming volume snapshots
+   - lxd: Move to units package
+   - lxd: Use RunCommandSplit when needed
+   - lxd/api: Update handler funcs to take nodeRefreshFunc
+   - lxd/cluster: Always return node list on rebalance
+   - lxd/cluster: Better handle DB node removal
+   - lxd/cluster: Export some heartbeat code
+   - lxd/cluster: Perform heartbeats only on the leader
+   - lxd/cluster: Update HandlerFuncs calls in tests
+   - lxd/cluster: Update heartbeat test to pass last leader heartbeat time
+   - lxd/cluster: Update tests not to use KeepUpdated in tests
+   - lxd/cluster: Use correct node id on promote
+   - lxd/cluster/gateway: Update to receive new heartbeat format
+   - lxd/cluster/heartbeat: Add new heartbeat request format
+   - lxd/cluster/heartbeat: Compare both ID and Address
+   - lxd/cluster/heartbeat: Fix bug when nodes join during heartbeat
+   - lxd/cluster/heartbeat: Remove unneeded go routine (as context does cancel)
+   - lxd/cluster/heartbeat: Use current timestamp for DB record
+   - lxd/cluster/membership: Update Join to send new heartbeat format
+   - lxd/cluster/upgrade: Remove KeepUpdated and use MayUpdate directly
+   - lxd/cluster/upgrade: Remove unused context
+   - lxd/cluster/upgrade: Remove unused context from test
+   - lxd/containers: Add allocateNetworkFilterIPs
+   - lxd/containers: Add error checking for calls to networkClearLease
+   - lxd/containers: Add SR-IOV parent restoration
+   - lxd/containers: Better detect and alert on missing br_netfilter module
+   - lxd/containers: Combine state updates
+   - lxd/containers: Consistent comment endings
+   - lxd/containers: Disable auto mac generation for sriov devices
+   - lxd/containers: Ensure dnsmasq config refresh if bridge nic added/removed
   - lxd/containers: Ensure that sriov devices use volatile host_name for removal
+   - lxd/containers: Fix return value of detachInterfaceRename
+   - lxd/containers: Fix showing host_name of veth pair in lxc info
+   - lxd/containers: Fix snapshot restore on ephemeral
+   - lxd/containers: Fix template handling
   - lxd/containers: generateNetworkFilterEbtablesRules to accept IP info as args
   - lxd/containers: generateNetworkFilterIptablesRules to accept IP info as args
+   - lxd/containers: Improve comment on DHCP host config removal
+   - lxd/containers: Made detection of veth nic explicit
   - lxd/containers: Move all nic hot plug functionality into separate functions
+   - lxd/containers: Move container taring logic into standalone class
+   - lxd/containers: Move network filter setup into setupHostVethDevice
   - lxd/containers: Move stop time nic device detach into cleanupNetworkDevices
+   - lxd/containers: Remove containerNetworkKeys as unused
+   - lxd/containers: Remove ineffective references to containerNetworkKeys
   - lxd/containers: Remove the need for fixed veth peer when doing mac_filtering
+   - lxd/containers: Remove unused arg from setNetworkRoutes
+   - lxd/containers: Separate cleanupHostVethDevices into cleanupHostVethDevice
+   - lxd/containers: Speed up startCommon a bit
+   - lxd/containers: Update removeNetworkFilters to use dnsmasq config
+   - lxd/containers: Update setNetworkFilters to allocate IPs if needed
+   - lxd/containers: Update setupHostVethDevice to wipe old DHCPv6 leases
+   - lxd/containers: Use current binary for early hooks
+   - lxd/daemon: Update daemon to support node refresh tasks from heartbeat
+   - lxd/db: Add Gateway.isLeader() function
+   - lxd/db: Better formatting
+   - lxd/db: Bootstrap dqlite for new servers
+   - lxd/db: Check dqlite version of connecting nodes
+   - lxd/db: Check TLS cert in raft connection handler
+   - lxd/db: Conditionally check leadership in dqlite dial function
+   - lxd/db: Convert tests to the new go-dqlite API
+   - lxd/db: Copy network data between TLS Go conn and Unix socket
+   - lxd/db: Custom dqlite dial function
+   - lxd/db: Don't use the db in legacy patch 12
+   - lxd/db: Drop dependencies on hashicorp/raft
+   - lxd/db: Drop hashicorp/raft setup code
+   - lxd/db: Drop the legacy /internal/raft endpoint
+   - lxd/db: Drop unused hashicorp/raft network transport wrapper
+   - lxd/db: Fix comment
+   - lxd/db: Fix import
+   - lxd/db: Fix lint
+   - lxd/db: Get information about current servers from dqlite
+   - lxd/db: Ignore missing WAL files when reproducing snapshots
+   - lxd/db: Improve gateway standalone test
+   - lxd/db: Instantiate dqlite
+   - lxd/db: Move container list from containersShutdown into containersOnDisk
+   - lxd/db: No need to shutdown hashicorp/raft instance
+   - lxd/db: Only use the schema db transaction in legacy patches
+   - lxd/db: Perform data migration to dqlite 1.0 format
+   - lxd/db: Retry copy-related errors
+   - lxd/db: Return HTTP code 426 (Upgrade Required) if peer has old version
+   - lxd/db: Set max open conns before running schema upgrades
+   - lxd/db: Translate address of first node
+   - lxd/db: Turn patchShrinkLogsDBFile into a no-op
+   - lxd/db: Update comment
+   - lxd/db: Update docstring
+   - lxd/db: Update unit tests
+   - lxd/db: Use dqlite leave primitive
+   - lxd/db: Use dqlite's join primitive
+   - lxd/db: Use ID instead of address to detect initial node
+   - lxd/db: Wire isLeader()
+   - lxd/instance_types: Improve errors
+   - lxd/main: Fix debug mode flag to actually enable debug mode
+   - lxd/main: Fix test runner by allowing empty command arg
+   - lxd/main_callhook: Don't call /1.0
+   - lxd/main_checkfeature: Remove unused variable
+   - lxd/main_forkmknod: Check for MS_NODEV
+   - lxd/main_forkmknod: Correctly handle shiftfs
+   - lxd/main_forkmknod: Ensure correct device ownership
+   - lxd/main_forkmknod: Remove unused variables
+   - lxd/main_forkmknod: Simplify
+   - lxd/main_forknet: Clean up forknet detach error logging and output
+   - lxd/networks: Add DHCP range functions
+   - lxd/networks: Add --dhcp-rapid-commit when dnsmasq version > 2.79
+   - lxd/networks: Add IP allocation functions
+   - lxd/networks: Add networkDeviceBindWait function
+   - lxd/networks: Add networkDHCPv4Release function
   - lxd/networks: Add networkDHCPv6Release function and associated packet helper
+   - lxd/networks: Add networkGetVirtFuncInfo function
+   - lxd/networks: Add networkUpdateStaticContainer
+   - lxd/networks: Add SR-IOV related PCI bind/unbind helper functions
+   - lxd/networks: Allow querying state on non-managed
+   - lxd/networks: Call networkUpdateForkdnsServersTask from node refresh
+   - lxd/networks: Cleaned up the device bind/unbind functions for SR-IOV
+   - lxd/networks: Fix bug preventing 3rd party routes restoration on startup
+   - lxd/networks: Remove unused context
+   - lxd/networks: Remove unused state.State from networkClearLease()
+   - lxd/networks: Start dnsmasq with --no-ping option to avoid delayed writes
+   - lxd/networks: Update networkClearLease to support a mode flag
+   - lxd/networks: Update networkClearLease to use DHCP release helpers
   - lxd/networks: Update networkUpdateStatic to use existing config for filters
   - lxd/networks: Update networkUpdateStatic to use networkUpdateStaticContainer
   - lxd/networks: Update refreshForkdnsServerAddresses to run from node refresh
+   - lxd/patches: Handle btrfs snapshots properly
+   - lxd/proxy: Fix error handling for unix
+   - lxd/rsync: Allow disabling xattrs during copy
+   - lxd/rsync: Don't double-specify --xattrs
+   - lxd/seccomp: Add insertMount() helpers
+   - lxd/seccomp: Cause a default message to be sent
+   - lxd/seccomp: Check permissions before handling mknod via device injection
+   - lxd/seccomp: Cleanup + simplify
+   - lxd/seccomp: Define __NR_mknod if missing
+   - lxd/seccomp: Ensure correct owner on __NR_mknod{at}
+   - lxd/seccomp: Fix error reporting
+   - lxd/seccomp: Handle compat arch syscalls
+   - lxd/seccomp: Handle new liblxc seccomp notify updates
+   - lxd/seccomp: Retry with mount hotplug
+   - lxd/seccomp: Rework missing syscall number definitions
+   - lxd/seccomp: Simplify and make more secure
+   - lxd/storage: Fix copies of volumes with snapshots
+   - lxd/storage/ceph: Fix snapshot deletion cleanup
+   - lxd/storage/dir: Allow size limits on dir volumes
+   - lxd/storage/dir: Fix quotas on dir
+   - lxd/storage/dir: Fix some deletion cases
+   - lxd/storage/lvm: Adds space used reporting for LVM thinpools
+   - lxd/task/group: Improve locking of Start/Add/Stop functions to avoid races
+   - Makefile: Update make deps to build also libco and raft
+   - shared: Add volatile key suffixes for SR-IOV
+   - shared: Better handle stdout/stderr in RunCommand
+   - shared: Move to units package
+   - shared/netutils: Add lxc_abstract_unix_recv_fds_iov()
+   - shared/netutils: Fix bug with getting container PID
+   - shared/termios: Fix port to sys/unix
+   - shared/units: Move unit functions
+   - tests: Add check for dnsmasq host config file removal on container delete
+   - tests: Add DHCP lease release tests
+   - tests: Add p2p test for adding new nic rather than updating existing
+   - tests: Add SR-IOV tests
+   - tests: Add test for dnsmasq host config update when nic added/removed
+   - tests: Add tests for security.mac_filtering functionality
+   - tests: Always pass --force to stop/restart
+   - tests: Don't leak remotes in tests
+   - tests: Fix bad call to spawn_lxd
+   - tests: Fix typo in test/suites/clustering.sh
+   - tests: Increase nic bridge ping sleep time to 2s
+   - tests: Make new shellcheck happy
+   - tests: Make shellcheck happy
+   - tests: Optimize ceph storage test
+   - tests: Properly scope LXD_NETNS
+   - tests: Remove un-needed LXD_DIR
+   - tests: Re-order tests a bit
+   - tests: Scope cluster LXD variables
+   - tests: Test renaming storage volume snapshots
+   - tests: Update godeps
+   - tests: Update nic bridge tests to check for route restoration
   - various: Removes use of golang.org/x/net/context in place of stdlib context
+   - vendor: Drop vendor directory
+
+  ### 試用環境 <!-- Try it for yourself -->
+  <!--
+  This new LXD release is already available for you to try on our [demo service](https://linuxcontainers.org/lxd/try-it/).
+  -->
+  この新しい LXD リリースは私たちの [デモサービス](https://linuxcontainers.org/ja/lxd/try-it/) で利用できます。
+
+  ### ダウンロード <!-- Downloads -->
+  <!--
+  The release tarballs can be found on our [download page](https://linuxcontainers.org/lxd/downloads/).
+  -->
+  このリリースの tarball は [ダウンロードページ](/lxd/downloads/) から取得できます。
+
+  <!--
+  Binary builds are also available for:
+  -->
+  ビルド済みバイナリーは次のように使えます:
+
+   - **Linux:** snap install lxd
+   - **MacOS:** brew install lxc
+   - **Windows:** choco install lxc

From 57a931d39e4c81c94431ec4da9a384bb548cc317 Mon Sep 17 00:00:00 2001
From: KATOH Yasufumi <ka...@jazz.email.ne.jp>
Date: Thu, 18 Jul 2019 19:38:05 +0900
Subject: [PATCH 2/2] Fix typo

Reported-by: Hiroaki Nakamura <hnaka...@gmail.com>
Signed-off-by: KATOH Yasufumi <ka...@jazz.email.ne.jp>
---
 content/lxd/news.ja/lxd-3.15.yaml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/lxd/news.ja/lxd-3.15.yaml b/content/lxd/news.ja/lxd-3.15.yaml
index 4ed8ca9..99966ed 100644
--- a/content/lxd/news.ja/lxd-3.15.yaml
+++ b/content/lxd/news.ja/lxd-3.15.yaml
@@ -167,7 +167,7 @@ content: |-
   <!--
  Ceph FS was added as a storage driver for LXD. Support is limited to custom storage volumes though, containers will not be allowed on Ceph FS and it's indeed recommended to use Ceph RBD for them.
   -->
-  Ceph FS が LXD のストレージドライバとして追加されました。使用はカスタムストレージボリュームに限定されていますので、Ceph FS 上にコンテナ置くことはできません。コンテナには Ceph RBD を使うことをおすすめします。
+  Ceph FS が LXD のストレージドライバとして追加されました。使用はカスタムストレージボリュームに限定されていますので、Ceph FS 上にコンテナを置くことはできません。コンテナには Ceph RBD を使うことをおすすめします。
 
   <!--
  Ceph FS support includes size restrictions (quota) and native snapshot supports when the server, server configuration and client kernel support those features.
_______________________________________________
lxc-devel mailing list
lxc-devel@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-devel
