Re: [Qemu-devel] [PATCH] HLFS driver for QEMU

2013-03-28 Thread harryxiyou
On Mon, Mar 18, 2013 at 7:10 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
[...]
 read/write/flush should be either .bdrv_co_* or .bdrv_aio_*.

 The current code pauses the guest while I/O is in progress!  Try running
 disk I/O benchmarks inside the guest and you'll see that performance and
 interactivity are poor.

 QEMU has two ways of implementing efficient block drivers:

 1. Coroutines - .bdrv_co_*

 Good for image formats or block drivers that have internal logic.

 Each request runs inside a coroutine - see include/block/coroutine.h.
 In order to wait for I/O, submit an asynchronous request or worker
 thread and yield the coroutine.  When the request completes, re-enter
 the coroutine and continue.

 Examples: block/sheepdog.c or block/qcow2.c.

 2. Asynchronous I/O - .bdrv_aio_*

 Good for low-level block drivers that have little logic.

 The request processing code is split into callbacks.  I/O requests are
 submitted and then the code returns back to QEMU's main loop.  When the
 I/O request completes, a callback is invoked.

 Examples: block/rbd.c or block/qed.c.


HLFS does not use QEMU's AIO mechanisms; HLFS itself already implements internal AIO.
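
For what it's worth, here is a minimal sketch of how an internal-AIO engine
like this could still be wrapped in a coroutine-style .bdrv_co_readv. The
hlfs_aio_read() call and its callback signature are assumptions for
illustration, not the real HLFS API:

typedef struct HLFSCoData {
    Coroutine *co;   /* coroutine waiting for the request */
    int ret;         /* result reported by the internal AIO engine */
} HLFSCoData;

/* completion callback invoked by the (hypothetical) internal AIO engine */
static void hlfs_read_cb(void *opaque, int ret)
{
    HLFSCoData *data = opaque;
    data->ret = ret;
    qemu_coroutine_enter(data->co, NULL);   /* wake the waiting coroutine */
}

static coroutine_fn int hlfs_co_readv(BlockDriverState *bs,
                                      int64_t sector_num, int nb_sectors,
                                      QEMUIOVector *qiov)
{
    BDRVHLBSState *s = bs->opaque;
    HLFSCoData data = { .co = qemu_coroutine_self(), .ret = -EINPROGRESS };

    /* submit to the internal AIO engine, then yield until the callback */
    hlfs_aio_read(s->hctrl, qiov, sector_num * SECTOR_SIZE,
                  nb_sectors * SECTOR_SIZE, hlfs_read_cb, &data);
    qemu_coroutine_yield();
    return data.ret;
}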

-- 
Thanks
Harry Wei



Re: [Qemu-devel] [PATCH] HLFS driver for QEMU

2013-03-20 Thread harryxiyou
On Mon, Mar 18, 2013 at 11:16 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
[...]
 I looked at the Google Code project before, it looks like a repo that
 a few people are hacking on.  The site is developer-focussed and there
 is no evidence of users.  This is why I asked about the background of
 the community.


Cloudxy is under active development, and we will promote it.

 There are references to "workshop3" and "linux lab" in the source
 code.  Xiyou and XUPT is http://www.xiyou.edu.cn/.


Some developers are from XUPT.

 What I'm concerned about is that there are no users and the developers
 leave when they graduate.  In that case integrating the patches into
 QEMU is wasted effort.


There are some developers from XUPT, but not *ALL*. Rest assured that our key
developers will maintain this project for the long term: Kang Hua
kanghua...@gmail.com,
Zhang Dianbo littlesmartsm...@gmail.com, Wang Sen kelvin.x...@gmail.com
and Harry Wei harryxi...@gmail.com.

We will publicize our project so that it gains more users.

 So what *is* the background of cloudxy (HLFS)?

Rest assured that we have enough developers. The user base will grow as
Cloudxy gains more influence.



-- 
Thanks
Harry Wei



Re: [Qemu-devel] [PATCH] HLFS driver for QEMU

2013-03-20 Thread harryxiyou
On Wed, Mar 20, 2013 at 6:04 PM, Stefan Hajnoczi stefa...@redhat.com wrote:
 On Wed, Mar 20, 2013 at 04:43:17PM +0800, harryxiyou wrote:
 On Mon, Mar 18, 2013 at 11:16 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
 [...]
  I looked at the Google Code project before, it looks like a repo that
  a few people are hacking on.  The site is developer-focussed and there
  is no evidence of users.  This is why I asked about the background of
  the community.
 

 Cloudxy is under active development, and we will promote it.

 Most important for QEMU is easy building and testing, so let's focus on
 that.

Yup, this is wonderful ;-)


 Ceph and GlusterFS have packages available in popular distros.  This
 allows us to hook them into QEMU's buildbot
 (http://buildbot.b1-systems.de/qemu/grid) so their code does not bitrot.

We will make Debian and RPM packages.


 Sheepdog's QEMU block driver has no external dependencies so it's even
 easier to build.

Yup.

 I'd like to see packages for popular distros and a Getting Started
 page for users instead of developers.  Then I can follow the steps to
 set up HLFS using the packages and the QEMU buildbot can build the HLFS
 block driver.


Actually, we have "Getting Started" wiki pages in Chinese, and I will provide
English versions. You can visit the Chinese "Getting Started" wiki pages via
the following link:
http://code.google.com/p/cloudxy/wiki/HlfsUserManual

We also have a one-click install script, which is located here:
http://cloudxy.googlecode.com/svn/branches/hlfs/person/zhangdianbo/scripts/

All the wiki pages for Cloudxy are here (in Chinese):
http://code.google.com/p/cloudxy/w/list



-- 
Thanks
Harry Wei



[Qemu-devel] [PATCH] HLFS driver for QEMU

2013-03-18 Thread harryxiyou
From: Harry Wei harryxi...@gmail.com

HLFS is an HDFS-based (Hadoop Distributed File System) Log-Structured File
System. Strictly speaking, HLFS is currently not a file system but a
block-storage system: we simplified LFS to fit block-level storage, so you
could also call HLFS HLBS (HDFS-based Log-Structured Block-storage System).
HLFS has two modes, local mode and HDFS mode. HDFS is write-once/read-many,
so HLFS implements LBS (Log-Structured Block-storage System) to support
random reads and writes. LBS is based on LFS's basic theories but differs
from LFS in that it fits block storage better. See
http://code.google.com/p/cloudxy/wiki/WHAT_IS_CLOUDXY
for details about HLFS.

Currently, HLFS supports the following features:

1. Portions of POSIX --- just the interfaces a VM image needs.
2. Random read/write.
3. Large file storage (TB scale).
4. Snapshots (linear snapshots and tree snapshots), clones,
block compression, caching, etc.
5. Multiple copies of the data.
6. Dynamic cluster expansion.
...
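
For illustration, given the filename formats documented in the driver below,
usage might look like the following (hypothetical command lines; the image
size argument and snapshot name are assumptions, not tested examples):

qemu-img create hlfs:local:///tmp/testenv/testfs 16G
qemu-system-x86_64 -drive file=hlfs:hdfs://localhost:8020/tmp/testenv/testfs%snap1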

Signed-off-by: Harry Wei harryxi...@gmail.com

---
 block/Makefile.objs |2 +-
 block/hlfs.c|  515 +++
 configure   |   51 +
 3 files changed, 567 insertions(+), 1 deletion(-)
 create mode 100644 block/hlfs.c

diff --git a/block/Makefile.objs b/block/Makefile.objs
index c067f38..723c7a5 100644
--- a/block/Makefile.objs
+++ b/block/Makefile.objs
@@ -8,7 +8,7 @@ block-obj-$(CONFIG_POSIX) += raw-posix.o
 block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
 
 ifeq ($(CONFIG_POSIX),y)
-block-obj-y += nbd.o sheepdog.o
+block-obj-y += nbd.o sheepdog.o hlfs.o
 block-obj-$(CONFIG_LIBISCSI) += iscsi.o
 block-obj-$(CONFIG_CURL) += curl.o
 block-obj-$(CONFIG_RBD) += rbd.o
diff --git a/block/hlfs.c b/block/hlfs.c
new file mode 100644
index 000..331feae
--- /dev/null
+++ b/block/hlfs.c
@@ -0,0 +1,514 @@
+/*
+ * Block driver for HLFS(HDFS-based Log-structured File System)
+ *
+ * Copyright (c) 2013, Kang Hua kanghua...@gmail.com
+ * Copyright (c) 2013, Wang Sen kelvin.x...@gmail.com
+ * Copyright (c) 2013, Harry Wei harryxi...@gmail.com
+ *
+ * This program is free software. You can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * Reference:
+ * http://code.google.com/p/cloudxy
+ */
+
+#include "qemu-common.h"
+#include "qemu/error-report.h"
+#include "qemu/sockets.h"
+#include "block/block_int.h"
+#include "qemu/bitops.h"
+#include "api/hlfs.h"
+#include "storage_helper.h"
+#include "comm_define.h"
+#include "snapshot_helper.h"
+#include "address.h"
+
+#define DEBUG_HLBS
+#undef dprintf
+#ifdef DEBUG_HLBS
+#define dprintf(fmt, args...) \
+do {\
+fprintf(stdout, "%s %d: " fmt, __func__, __LINE__, ##args); \
+} while (0)
+#else
+#define dprintf(fmt, args...)
+#endif
+
+#define HLBS_MAX_VDI_SIZE (8192ULL*8192ULL*8192ULL*8192ULL)
+#define SECTOR_SIZE 512
+
+typedef struct BDRVHLBSState {
+struct hlfs_ctrl *hctrl;
+char *snapshot;
+char *uri;
+} BDRVHLBSState;
+
+/*
+ * Parse a filename.
+ *
+ * file name format must be one of the following:
+ *1. [vdiname]
+ *2. [vdiname]%[snapshot]
+ ** vdiname format --
+ ** local:///tmp/testenv/testfs
+ ** hdfs:///tmp/testenv/testfs
+ ** hdfs://localhost:8020/tmp/testenv/testfs
+ ** hdfs://localhost/tmp/testenv/testfs
+ ** hdfs://192.168.0.1:8020/tmp/testenv/testfs
+ */
+
+static int parse_vdiname(BDRVHLBSState *s, const char *filename, char *vdi,
+char *snapshot)
+{
+if (!filename) {
+return -1;
+}
+
+gchar **v = g_strsplit(filename, "%", 2);
+if (g_strv_length(v) == 1) {
+strcpy(vdi, v[0]);
+s->uri = g_strdup(vdi);
+} else if (g_strv_length(v) == 2) {
+strcpy(vdi, v[0]);
+strcpy(snapshot, v[1]);
+s->uri = g_strdup(vdi);
+s->snapshot = g_strdup(snapshot);
+} else {
+goto out;
+}
+
+return 0;
+out:
+g_strfreev(v);
+return -1;
+}
+
+static int hlbs_open(BlockDriverState *bs, const char *filename, int flags)
+{
+int ret = 0;
+BDRVHLBSState *s = bs->opaque;
+char vdi[256];
+char snapshot[HLFS_FILE_NAME_MAX];
+
+strstart(filename, "hlfs:", (const char **)&filename);
+memset(snapshot, 0, sizeof(snapshot));
+memset(vdi, 0, sizeof(vdi));
+
+if (parse_vdiname(s, filename, vdi, snapshot) < 0) {
+goto out;
+}
+
+HLFS_CTRL *ctrl = init_hlfs(vdi);
+if (strlen(snapshot)) {
+dprintf("snapshot:%s was open.\n", snapshot);
+ret = hlfs_open_by_snapshot(ctrl, snapshot, 1);
+} else {
+ret = hlfs_open(ctrl, 1);
+}
+g_assert(ret == 0);
+s->hctrl = ctrl;
+bs->total_sectors = ctrl->sb.max_fs_size * 1024 * 1024 / SECTOR_SIZE;
+return 0;
+out:
+if (s->hctrl) {
+hlfs_close(s->hctrl);
+deinit_hlfs(s->hctrl);
+}
+return -1;
+}
+
+static int hlbs_create(const char *filename, 

Re: [Qemu-devel] [PATCH] HLFS driver for QEMU

2013-03-18 Thread harryxiyou
On Mon, Mar 18, 2013 at 7:10 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
Hi Stefan,

 Is HLFS making releases that distros can package?  I don't see packages
 in Debian or Fedora.

We will make packages for Debian and Fedora.


 Block drivers in qemu.git should have active and sustainable communities
 behind them.

Cloudxy is an active and sustainable community; HLFS is just one of its
sub-projects.

  For example, if this is a research project where the team
 will move on within a year, then it may not be appropriate to merge the
 code into QEMU.

HLFS has been under development for two years.

 Can you share some background on the HLFS community and
 where the project is heading?

See http://code.google.com/p/cloudxy/ (our main website) for
details (in Chinese).
See http://code.google.com/p/cloudxy/wiki/WHAT_IS_CLOUDXY for
details (in English).


 ---
  block/Makefile.objs |2 +-
  block/hlfs.c|  515 
 +++
  configure   |   51 +
  3 files changed, 567 insertions(+), 1 deletion(-)
  create mode 100644 block/hlfs.c

 diff --git a/block/Makefile.objs b/block/Makefile.objs
 index c067f38..723c7a5 100644
 --- a/block/Makefile.objs
 +++ b/block/Makefile.objs
 @@ -8,7 +8,7 @@ block-obj-$(CONFIG_POSIX) += raw-posix.o
  block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o

  ifeq ($(CONFIG_POSIX),y)
 -block-obj-y += nbd.o sheepdog.o
 +block-obj-y += nbd.o sheepdog.o hlfs.o

 Missing CONFIG_HLFS to compile out hlfs.o.

I modelled this on the sheepdog driver, and I cannot see any CONFIG_SHEEPDOG there.
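
For reference, the pattern that other optional drivers use (visible in the
CONFIG_LIBISCSI/CONFIG_CURL/CONFIG_RBD lines of the hunk above) would be
roughly the following sketch. In block/Makefile.objs:

block-obj-$(CONFIG_HLFS) += hlfs.o

and in configure, once the probe decides hlfs=yes (assuming the usual
$config_host_mak mechanism the other probes use):

if test "$hlfs" = "yes" ; then
  echo "CONFIG_HLFS=y" >> $config_host_mak
fi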

[...]

 +echo "999 CLIBS is $CLIBS\n"
 +echo "999 CFLAGS is $CFLAGS\n"

 Debugging?

I will fix the problems you pointed out. Thanks very much.



-- 
Thanks
Harry Wei



[Qemu-devel] [PATCH] HLFS driver for QEMU

2013-03-16 Thread harryxiyou
From: Harry Wei harryxi...@gmail.com

HLFS is an HDFS-based (Hadoop Distributed File System) Log-Structured File
System. Strictly speaking, HLFS is currently not a file system but a
block-storage system: we simplified LFS to fit block-level storage, so you
could also call HLFS HLBS (HDFS-based Log-Structured Block-storage System).
HLFS has two modes, local mode and HDFS mode. HDFS is write-once/read-many,
so HLFS implements LBS (Log-Structured Block-storage System) to support
random reads and writes. LBS is based on LFS's basic theories but differs
from LFS in that it fits block storage better. See
http://code.google.com/p/cloudxy/wiki/WHAT_IS_CLOUDXY
for details about HLFS.

Currently, HLFS supports the following features:

1. Portions of POSIX --- just the interfaces a VM image needs.
2. Random read/write.
3. Large file storage (TB scale).
4. Snapshots (linear snapshots and tree snapshots), clones,
block compression, caching, etc.
5. Multiple copies of the data.
6. Dynamic cluster expansion.
...

Signed-off-by: Harry Wei harryxi...@gmail.com

---
 block/Makefile.objs |2 +-
 block/hlfs.c|  515 +++
 configure   |   51 +
 3 files changed, 567 insertions(+), 1 deletion(-)
 create mode 100644 block/hlfs.c

diff --git a/block/Makefile.objs b/block/Makefile.objs
index c067f38..723c7a5 100644
--- a/block/Makefile.objs
+++ b/block/Makefile.objs
@@ -8,7 +8,7 @@ block-obj-$(CONFIG_POSIX) += raw-posix.o
 block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
 
 ifeq ($(CONFIG_POSIX),y)
-block-obj-y += nbd.o sheepdog.o
+block-obj-y += nbd.o sheepdog.o hlfs.o
 block-obj-$(CONFIG_LIBISCSI) += iscsi.o
 block-obj-$(CONFIG_CURL) += curl.o
 block-obj-$(CONFIG_RBD) += rbd.o
diff --git a/block/hlfs.c b/block/hlfs.c
new file mode 100644
index 000..331feae
--- /dev/null
+++ b/block/hlfs.c
@@ -0,0 +1,514 @@
+/*
+ * Block driver for HLFS(HDFS-based Log-structured File System)
+ *
+ * Copyright (c) 2013, Kang Hua kanghua...@gmail.com
+ * Copyright (c) 2013, Wang Sen kelvin.x...@gmail.com
+ * Copyright (c) 2013, Harry Wei harryxi...@gmail.com
+ *
+ * This program is free software. You can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * Reference:
+ * http://code.google.com/p/cloudxy
+ */
+
+#include "qemu-common.h"
+#include "qemu/error-report.h"
+#include "qemu/sockets.h"
+#include "block/block_int.h"
+#include "qemu/bitops.h"
+#include "api/hlfs.h"
+#include "storage_helper.h"
+#include "comm_define.h"
+#include "snapshot_helper.h"
+#include "address.h"
+
+#define DEBUG_HLBS
+#undef dprintf
+#ifdef DEBUG_HLBS
+#define dprintf(fmt, args...) \
+do {\
+fprintf(stdout, "%s %d: " fmt, __func__, __LINE__, ##args); \
+} while (0)
+#else
+#define dprintf(fmt, args...)
+#endif
+
+#define HLBS_MAX_VDI_SIZE (8192ULL*8192ULL*8192ULL*8192ULL)
+#define SECTOR_SIZE 512
+
+typedef struct BDRVHLBSState {
+struct hlfs_ctrl *hctrl;
+char *snapshot;
+char *uri;
+} BDRVHLBSState;
+
+/*
+ * Parse a filename.
+ *
+ * file name format must be one of the following:
+ *1. [vdiname]
+ *2. [vdiname]%[snapshot]
+ ** vdiname format --
+ ** local:///tmp/testenv/testfs
+ ** hdfs:///tmp/testenv/testfs
+ ** hdfs://localhost:8020/tmp/testenv/testfs
+ ** hdfs://localhost/tmp/testenv/testfs
+ ** hdfs://192.168.0.1:8020/tmp/testenv/testfs
+ */
+
+static int parse_vdiname(BDRVHLBSState *s, const char *filename, char *vdi,
+char *snapshot)
+{
+if (!filename) {
+return -1;
+}
+
+gchar **v = g_strsplit(filename, "%", 2);
+if (g_strv_length(v) == 1) {
+strcpy(vdi, v[0]);
+s->uri = g_strdup(vdi);
+} else if (g_strv_length(v) == 2) {
+strcpy(vdi, v[0]);
+strcpy(snapshot, v[1]);
+s->uri = g_strdup(vdi);
+s->snapshot = g_strdup(snapshot);
+} else {
+goto out;
+}
+
+return 0;
+out:
+g_strfreev(v);
+return -1;
+}
+
+static int hlbs_open(BlockDriverState *bs, const char *filename, int flags)
+{
+int ret = 0;
+BDRVHLBSState *s = bs->opaque;
+char vdi[256];
+char snapshot[HLFS_FILE_NAME_MAX];
+
+strstart(filename, "hlfs:", (const char **)&filename);
+memset(snapshot, 0, sizeof(snapshot));
+memset(vdi, 0, sizeof(vdi));
+
+if (parse_vdiname(s, filename, vdi, snapshot) < 0) {
+goto out;
+}
+
+HLFS_CTRL *ctrl = init_hlfs(vdi);
+if (strlen(snapshot)) {
+dprintf("snapshot:%s was open.\n", snapshot);
+ret = hlfs_open_by_snapshot(ctrl, snapshot, 1);
+} else {
+ret = hlfs_open(ctrl, 1);
+}
+g_assert(ret == 0);
+s->hctrl = ctrl;
+bs->total_sectors = ctrl->sb.max_fs_size * 1024 * 1024 / SECTOR_SIZE;
+return 0;
+out:
+if (s->hctrl) {
+hlfs_close(s->hctrl);
+deinit_hlfs(s->hctrl);
+}
+return -1;
+}
+
+static int hlbs_create(const char *filename, 

Re: [Qemu-devel] [cloudxy] Re: [RFC]HLFS driver for QEMU

2013-03-12 Thread harryxiyou
On Tue, Mar 12, 2013 at 11:16 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
 On Tue, Mar 12, 2013 at 09:47:57PM +0800, harryxiyou wrote:
 Could anyone give me some suggestions to submit HLFS driver patches for
 QEMU to our QEMU community. Thanks a lot in advance ;-)

 http://wiki.qemu.org/Contribute/SubmitAPatch


Thanks, I will send our driver patches according to what SubmitAPatch
says.



-- 
Thanks
Harry Wei



Re: [Qemu-devel] Google Summer of Code 2013 ideas wiki open

2013-02-14 Thread harryxiyou
On Tue, Feb 12, 2013 at 5:21 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
 On Thu, Feb 7, 2013 at 4:19 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
 I believe Google will announce GSoC again this year (there is
 no guarantee though) and I have created the wiki page so we can begin
 organizing project ideas that students can choose from.

 Google Summer of Code 2013 has just been announced!

 http://google-opensource.blogspot.de/2013/02/flip-bits-not-burgers-google-summer-of.html

 Some project ideas have already been discussed on IRC or private
 emails.  Please go ahead and put them on the project ideas wiki page:

 http://wiki.qemu.org/Google_Summer_of_Code_2013


I am a senior student and want to do some storage-related work on Libvirt
in GSoC 2013. I wonder whether Libvirt and QEMU will join GSoC 2013
together. If so, I will focus on
http://wiki.qemu.org/Google_Summer_of_Code_2013 and add my introduction to
the QEMU links Stefan Hajnoczi mentioned. Could anyone give me some
suggestions? Thanks in advance.



-- 
Thanks
Harry Wei



Re: [Qemu-devel] Google Summer of Code 2013 ideas wiki open

2013-02-14 Thread harryxiyou
On Thu, Feb 14, 2013 at 11:15 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
[...]
 Hi Harry,
Hi Stefan,

 Thanks for your interest.  You can begin thinking about ideas but
 please keep in mind that we are still in the very early stages of GSoC
 preparation.

 Google will publish the list of accepted organizations on April 8th.
 Then there is a period of over 3 weeks to discuss your project idea
 with the organization.

 In the meantime, the best thing to do is to get familiar with the code
 bases and see if you can find/fix a bug.  Contributing patches is a
 great way to get noticed.

 There is always a chance that QEMU and/or libvirt may not be among the
 list of accepted organizations, so don't put all your eggs in one
 basket :).


Thanks for your suggestions.


-- 
Thanks
Harry Wei



[Qemu-devel] Sigsegv error when create JVM, called by bdrv_create in block.c

2013-02-07 Thread harryxiyou
Hi all,
I am writing a new block driver based on Hadoop's HDFS. It needs to
connect to HDFS during the create step (called by bdrv_create in block.c)
using libhdfs (the HDFS C API).
libhdfs uses the JNI API, which will create a JVM. The problem is that
creating the JVM causes a SIGSEGV error:

#0  0x764b103a in frame::frame(long*, long*) () from
/usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#1  0x76685aef in
java_lang_Throwable::fill_in_stack_trace(Handle, Thread*) () from
/usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#2  0x76685b99 in
java_lang_Throwable::fill_in_stack_trace(Handle) () from
/usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#3  0x765d5db7 in
Exceptions::throw_stack_overflow_exception(Thread*, char const*, int)
() from /usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#4  0x7667f501 in JavaCalls::call_helper(JavaValue*,
methodHandle*, JavaCallArguments*, Thread*) () from
/usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#5  0x768af629 in os::os_exception_wrapper(void
(*)(JavaValue*, methodHandle*, JavaCallArguments*, Thread*),
JavaValue*, methodHandle*, JavaCallArguments*, Thread*) () from
/usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#6  0x7667f3d5 in JavaCalls::call(JavaValue*, methodHandle,
JavaCallArguments*, Thread*) () from
/usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#7  0x7664efd5 in
instanceKlass::call_class_initializer_impl(instanceKlassHandle,
Thread*) () from
/usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#8  0x7664d558 in
instanceKlass::initialize_impl(instanceKlassHandle, Thread*) () from
/usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#9  0x7664c91a in instanceKlass::initialize(Thread*) () from
/usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#10 0x7664d81c in
instanceKlass::initialize_impl(instanceKlassHandle, Thread*) () from
/usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#11 0x7664c91a in instanceKlass::initialize(Thread*) () from
/usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#12 0x769bdd04 in Threads::create_vm(JavaVMInitArgs*, bool*)
() from /usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#13 0x766ba7b0 in JNI_CreateJavaVM () from
/usr/lib/jvm/java-6-sun/jre/lib/amd64/server/libjvm.so
#14 0x76db0af2 in getJNIEnv () at
/home/kanghua/hadoop-0.20.2/src/c++/libhdfs/hdfsJniHelper.c:455
#15 0x76db048c in hdfsConnectAsUser (host=0x77f64f60 "",
port=0, user=0x0) at
/home/kanghua/hadoop-0.20.2/src/c++/libhdfs/hdfs.c:199
#16 0x77201d8f in hdfs_connect (storage=0x78612000,
uri=0x77f65e70 "hdfs:///tmp/testenv/testfs8") at
/home/kanghua/workshop3.bak/hlfs/src/backend/hdfs_storage.c:59
#17 0x7720b738 in init_storage_handler (uri=0x77f65e70
"hdfs:///tmp/testenv/testfs8") at
/home/kanghua/workshop3.bak/hlfs/src/utils/storage_helper.c:68
#18 0x77f9b284 in hlbs_create (filename=<value optimized out>,
options=0x7860e3b0) at block/hlfs.c:286
#19 0x77f92ac7 in bdrv_create_co_entry ()
#20 0x77fb998b in coroutine_trampoline ()
#21 0x75a3ff40 in ?? () from /lib/libc.so.6
#22 0x7fffd780 in ?? ()
#23 0x in ?? ()

The above debug info shows that JNI is used inside a coroutine. I am not
familiar with coroutines, but I wonder: are coroutines not allowed to spawn
a new thread? Or does the coroutine stack get corrupted by the Java JVM
stack? When I move the bdrv_create call out of the coroutine, it works: it
connects to HDFS without any problem.

Please tell me what limitations coroutines have, and how I can use JNI in a
coroutine without the SIGSEGV, or how to avoid running in coroutine mode
when creating the driver.


Thanks a lot.

kanghua



Re: [Qemu-devel] Sigsegv error when create JVM, called by bdrv_create in block.c

2013-02-07 Thread harryxiyou
On Fri, Feb 8, 2013 at 12:11 AM, Peter Maydell peter.mayd...@linaro.org wrote:
 On 7 February 2013 16:08, Stefan Hajnoczi stefa...@gmail.com wrote:
 On Thu, Feb 7, 2013 at 3:12 PM, harryxiyou harryxi...@gmail.com wrote:
 The above debug info shows that JNI is used inside a coroutine. I am not
 familiar with coroutines, but I wonder: are coroutines not allowed to
 spawn a new thread? Or does the coroutine stack get corrupted by the Java
 JVM stack? When I move the bdrv_create call out of the coroutine, it
 works: it connects to HDFS without any problem.

 Coroutines are allowed to create new threads.

 Coroutine stacks do not grow automatically but they have 1 MB, which
 should be enough.

 There are a number of other things that could be wrong here -
 missing/incorrect thread synchronization, the shared libraries you are
 importing messing with the environment (signal masks, signal
 handlers), etc.

 I would suggest that trying to start an entire JVM from within
 QEMU is likely to be brittle at best even if you do get it to
 work, because you've got two big complicated codebases which
 will probably squabble over threads, signal handling, etc etc,
 because they weren't intended to work as part of some other
 large process.
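
One possible workaround, given the 1 MB coroutine stacks mentioned above,
is to create the JVM once from a dedicated native thread with a full-sized
stack before any coroutine touches libhdfs. A rough, untested sketch (not
part of the posted patch):

#include <pthread.h>
#include <jni.h>

static JavaVM *global_jvm;   /* created once, shared afterwards */

static void *jvm_init_thread(void *arg)
{
    JavaVMInitArgs vm_args = { .version = JNI_VERSION_1_6 };
    JNIEnv *env;
    /* runs on a normal pthread stack, not a coroutine stack */
    JNI_CreateJavaVM(&global_jvm, (void **)&env, &vm_args);
    return NULL;
}

static void ensure_jvm(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, jvm_init_thread, NULL);
    pthread_join(&tid, NULL);   /* block until the JVM is up */
}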


Hi Peter, Stefan,
Can I modify block.c as below, to make bdrv_create_co_entry get called
outside of a coroutine? Or can someone suggest a more elegant way to call
JNI functions from a coroutine?

/***
int bdrv_create(BlockDriver *drv, const char* filename,
QEMUOptionParameter *options)
{
    int ret;
    Coroutine *co;
    CreateCo cco = {
        .drv = drv,
        .filename = g_strdup(filename),
        .options = options,
        .ret = NOT_DONE,
    };

    if (!drv->bdrv_create) {
        ret = -ENOTSUP;
        goto out;
    }
#if 1
    bdrv_create_co_entry(&cco);
#else
    if (qemu_in_coroutine()) {
        /* Fast-path if already in coroutine context */
        bdrv_create_co_entry(&cco);
    } else {
        co = qemu_coroutine_create(bdrv_create_co_entry);
        qemu_coroutine_enter(co, &cco);
        while (cco.ret == NOT_DONE) {
            qemu_aio_wait();
        }
    }
#endif
    ret = cco.ret;

out:
    g_free(cco.filename);
    return ret;
}
***/



-- 
Thanks
Kang Hua



Re: [Qemu-devel] [QEMU]Installed qemu-img and qemu/qemu-img have different size

2013-02-02 Thread harryxiyou
On Sat, Feb 2, 2013 at 5:02 AM, Brian Jackson i...@theiggy.com wrote:
[...]
 It probably gets stripped during the install process. Check 'file'
 output on both and see.
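
(For example, running file on each binary: a freshly built ./qemu-img
typically reports "not stripped", while a packaged /usr/bin/qemu-img
typically reports "stripped"; the exact wording varies by system.)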

This was a silly mistake on my part: the latter binary was *NOT* installed
from the former build. Jackson, thanks for your suggestions ;-)

-- 
Thanks
Harry Wei



[Qemu-devel] [QEMU]Installed qemu-img and qemu/qemu-img have different size

2013-02-01 Thread harryxiyou
Hi all,

I did the following operations to install QEMU on my PC:

1. git clone git://git.qemu.org/qemu.git
2. cd qemu
3. git reset --hard v1.3.0
4. ./configure
5. make
6. sudo make install

After step 6, I did the following checks.

a: check the size of the qemu-img binary in the qemu tree
jiawei@jiawei-laptop:~/workshop4/qemu$ ls -alh ./qemu-img
-rwxr-xr-x 1 jiawei jiawei 586K 2013-02-01 11:24 ./qemu-img

b: check the size of the installed qemu-img binary
jiawei@jiawei-laptop:/usr/bin$ ls -alh ./qemu-img
-rwxr-xr-x 1 root root 219K 2013-01-16 00:33 ./qemu-img

Why do a and b have different sizes? Could anyone give me some
suggestions? Thanks in advance ;-)

-- 
Thanks
Harry Wei



Re: [Qemu-devel] [Libvirt][QEMU][HLFS]How to test HLFS driver for Libvirt

2013-01-29 Thread harryxiyou
On Tue, Jan 29, 2013 at 12:05 AM, MORITA Kazutaka
morita.kazut...@gmail.com wrote:
 At Mon, 28 Jan 2013 23:43:04 +0800,
 harryxiyou wrote:

 The following test uses libvirt v0.8.6, with just your Sheepdog volume
 patch applied.

 The version doesn't support Sheepdog storage pools and volumes.  Please
 note that the patch of mine you are looking at doesn't implement a sheepdog
 storage driver.  It only introduced a new schema (network disk) to the
 domain XML format.  The Sheepdog storage driver was introduced by other
 developers and has been available since 0.9.13.


Now, in the latest Libvirt git tree, I cannot find your

-- 
Thanks
Harry Wei



Re: [Qemu-devel] [Libvirt][QEMU][HLFS]How to test HLFS driver for Libvirt

2013-01-29 Thread harryxiyou
On Tue, Jan 29, 2013 at 8:55 PM, harryxiyou harryxi...@gmail.com wrote:
 On Tue, Jan 29, 2013 at 12:05 AM, MORITA Kazutaka
 morita.kazut...@gmail.com wrote:
 At Mon, 28 Jan 2013 23:43:04 +0800,
 harryxiyou wrote:

 The following test uses libvirt v0.8.6, with just your Sheepdog volume
 patch applied.

 The version doesn't support Sheepdog storage pools and volumes.  Please
 note that the patch of mine you are looking at doesn't implement a sheepdog
 storage driver.  It only introduced a new schema (network disk) to the
 domain XML format.  The Sheepdog storage driver was introduced by other
 developers and has been available since 0.9.13.


 Now, in the latest Libvirt git tree, I cannot find your


Please ignore my last email; it was sent by mistake. Sorry.

-- 
Thanks
Harry Wei



Re: [Qemu-devel] [Libvirt][QEMU][HLFS]How to test HLFS driver for Libvirt

2013-01-28 Thread harryxiyou
On Mon, Jan 28, 2013 at 10:44 PM, MORITA Kazutaka
morita.kazut...@gmail.com wrote:
[...]
 I'm not familiar with HLFS at all.  Sheepdog examples I explained to
 you in another mail may help you, but I cannot give you any other
 suggestions about HLFS.


Thanks for the reminder; I have found that email.
Could you please explain, then, how the Sheepdog driver for Libvirt
communicates with QEMU?

Thanks for your constant help ;-)

-- 
Thanks
Harry Wei



Re: [Qemu-devel] [Libvirt][QEMU][HLFS]How to test HLFS driver for Libvirt

2013-01-28 Thread harryxiyou
On Mon, Jan 28, 2013 at 11:06 PM, harryxiyou harryxi...@gmail.com wrote:
 On Mon, Jan 28, 2013 at 10:44 PM, MORITA Kazutaka
 morita.kazut...@gmail.com wrote:
 [...]
 I'm not familiar with HLFS at all.  Sheepdog examples I explained to
 you in another mail may help you, but I cannot give you any other
 suggestions about HLFS.


 Thanks for the reminder; I have found that email.
 Could you please explain, then, how the Sheepdog driver for Libvirt
 communicates with QEMU?

 Thanks for your constant help ;-)


After I ran the test the way you described, I got the following errors.

$ ./virsh vol-create -f sheepdog.xml
error: Failed to reconnect to the hypervisor
error: invalid connection
error: internal error Unable to locate libvirtd daemon in $PATH

$ cat sheepdog.xml
<volume>
  <name>myvol</name>
  <key>sheep/myvol</key>
  <source>
  </source>
  <capacity unit='bytes'>53687091200</capacity>
  <allocation unit='bytes'>53687091200</allocation>
  <target>
    <path>sheepdog:myvol</path>
    <format type='unknown'/>
    <permissions>
      <mode>00</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</volume>

Any suggestions? Thanks in advance ;-)


-- 
Thanks
Harry Wei



Re: [Qemu-devel] [Libvirt][QEMU][HLFS]How to test HLFS driver for Libvirt

2013-01-28 Thread harryxiyou
On Mon, Jan 28, 2013 at 11:23 PM, harryxiyou harryxi...@gmail.com wrote:
 On Mon, Jan 28, 2013 at 11:06 PM, harryxiyou harryxi...@gmail.com wrote:
 On Mon, Jan 28, 2013 at 10:44 PM, MORITA Kazutaka
 morita.kazut...@gmail.com wrote:
 [...]
 I'm not familiar with HLFS at all.  Sheepdog examples I explained to
 you in another mail may help you, but I cannot give you any other
 suggestions about HLFS.


 Thanks for the reminder; I have found that email.
 Could you please explain, then, how the Sheepdog driver for Libvirt
 communicates with QEMU?

 Thanks for your constant help ;-)


 After I ran the test the way you described, I got the following errors.

 $ ./virsh vol-create -f sheepdog.xml
 error: Failed to reconnect to the hypervisor
 error: invalid connection
 error: internal error Unable to locate libvirtd daemon in $PATH

 $ cat sheepdog.xml
 <volume>
   <name>myvol</name>
   <key>sheep/myvol</key>
   <source>
   </source>
   <capacity unit='bytes'>53687091200</capacity>
   <allocation unit='bytes'>53687091200</allocation>
   <target>
     <path>sheepdog:myvol</path>
     <format type='unknown'/>
     <permissions>
       <mode>00</mode>
       <owner>0</owner>
       <group>0</group>
     </permissions>
   </target>
 </volume>

 Any suggestions? Thanks in advance ;-)


The libvirt version used for the above test is 0.9.13.

The following test uses libvirt v0.8.6, with just your Sheepdog volume
patch applied:

1. $ ./libvirtd
2. $ ./virsh vol-create -f sheepdog.xml
error: achieve pool  '-f' failure
error: can not find storage pool: no pool with matching name '-f'
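
(For reference, virsh vol-create expects a storage pool name as its first
argument, e.g. "virsh vol-create <poolname> sheepdog.xml", which is
consistent with the "no pool with matching name '-f'" error above.)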

Any suggestions? Thanks in advance ;-)

-- 
Thanks
Harry Wei



[Qemu-devel] [Libvirt][QEMU]The relationships between Libvirt and QEMU in details

2013-01-27 Thread harryxiyou
Hi Daniel and other developers,

We have programmed an HLFS (HDFS-based Log-Structured File System)
driver for QEMU, which you can see here:
http://cloudxy.googlecode.com/svn/trunk/hlfs/patches/hlfs_driver_for_qemu_1.3.0.patch
I have tested it in a QEMU environment, and it works well for us.

Now we want to add an HLFS driver (volume storage) to Libvirt (the same
way as Sheepdog), but I am not sure how to test an HLFS driver for Libvirt.
How can I know whether the HLFS driver for Libvirt works well?

I guess libvirt uses QEMU interfaces (the QEMU way) to do the real jobs,
such as creating/deleting volumes. I want to use the 'virsh' command to do
them, with a logical flow like this:
virsh -> libvirt client -> libvirt server -> QEMU
So far, 'virsh' does not work for me, and I do not know how to test with
the 'virsh' command alone. Or must I build a QEMU environment for this
test? In brief, I am not clear about the relationship between Libvirt and
QEMU. How do QEMU and Libvirt communicate with each other? How can I test
creating/deleting Sheepdog/HLFS volumes with only a Libvirt environment?
Could anyone give me some suggestions? Thanks in advance ;-)
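
For reference, a Sheepdog network disk appears in a libvirt domain XML
roughly like this under the network-disk schema Kazutaka's patch introduced
(an HLFS driver would presumably register a similar protocol name; treat
this as a sketch, not a tested configuration):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='sheepdog' name='myvol'/>
  <target dev='vda' bus='virtio'/>
</disk>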

-- 
Thanks
Harry Wei



Re: [Qemu-devel] [Openstack][Sheepdog][Libvirt][Qemu]Add a new block storage driver by Libvirt/Qemu way for Openstack

2013-01-25 Thread harryxiyou
On Sat, Jan 19, 2013 at 10:04 PM, MORITA Kazutaka
morita.kazut...@gmail.com wrote:
[...]
 If you do the above work, I think you can use your file system with
 OpenStack.

 But I suggest doing them step by step.  If your file system is not
 supported in QEMU, I think libvirt won't support it.  If libvirt
 doesn't support it, OpenStack shouldn't support it too.


Hi Morita,

If I just want to test the sheepdog driver in Libvirt separately (without
QEMU and OpenStack), how should I do that? Suppose I want to test whether
the sheepdog driver you added works well in Libvirt. Could you please give
me some suggestions? Thanks in advance ;-)

PS: Your patch for Libvirt can be seen here:
http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=036ad5052b43fe9f0d197e89fd16715950408e1d



-- 
Thanks
Harry Wei



[Qemu-devel] [QEMU]How to configure qemu/configure file correctly?

2013-01-23 Thread harryxiyou
Hi all,

We want to add a block storage driver named HLFS to QEMU, which currently
requires adding the following entries to the qemu/configure file.

[...]
##
# hlfs probe
echo "Entering HLFS probe...";
sleep 2;
if test "$hlfs" != "no" ; then
cat > $TMPC <<EOF
#include <stdio.h>
#include <api/hlfs.h>
int main(void) {
return 0;
}
EOF
echo "come here";
GLIB_DIR1_INC=/usr/lib/glib-2.0/include
GLIB_DIR2_INC=/usr/include/glib-2.0
HLFS_DIR=/home/jiawei/workshop3/hlfs
LOG4C_DIR=$HLFS_DIR/3part/log
HDFS_DIR=$HLFS_DIR/3part/hadoop
JVM_DIR=/usr/lib/jvm/java-6-openjdk

if [ `getconf LONG_BIT` -eq 64 ];then
CLIBS="-L$LOG4C_DIR/lib64"
CLIBS="-L$HDFS_DIR/lib64 $CLIBS"
CLIBS="-L$HLFS_DIR/output/lib64 $CLIBS"
CLIBS="-L$JVM_DIR/jre/lib/amd64/server/ $CLIBS"
else if [ `getconf LONG_BIT` -eq 32 ];then
CLIBS="-L$LOG4C_DIR/lib32"
CLIBS="-L$HDFS_DIR/lib32 $CLIBS"
CLIBS="-L$JVM_DIR/jre/lib/i386/server $CLIBS"
CLIBS="-L$HLFS_DIR/output/lib32 $CLIBS"
fi
fi
echo "CLIBS is";
echo $CLIBS;
echo "CLIBS end...";
echo $libs
sleep 2;
CFLAGS="-I$GLIB_DIR1_INC"
CFLAGS="-I$GLIB_DIR2_INC $CFLAGS"
CFLAGS="-I$HLFS_DIR/src/include $CFLAGS"
CFLAGS="-I$LOG4C_DIR/include $CFLAGS"
echo "CFLAGS is";
echo $CFLAGS;
echo "CFLAGS end...";
sleep 2;
#   removed '-llog4c' '-lhdfs'
hlfs_libs="$CLIBS -lhlfs -lglib-2.0 -lgthread-2.0 -lrt -llog4c -lhdfs -ljvm"
if compile_prog "$CFLAGS" "$hlfs_libs" ; then
echo "run branch 1";
hlfs=yes
libs_tools="$hlfs_libs $libs_tools"
libs_softmmu="$hlfs_libs $libs_softmmu"
#   libs_softmmu="$libs_softmmu $hlfs_libs"
else
if test "$hlfs" = "yes" ; then
echo "run branch 2";
feature_not_found "hlfs block device"
fi
hlfs=no
fi
fi
echo "===";
echo $libs_softmmu;
echo "===";
echo $libs_tools;
echo "===";
echo "Leave HLFS ...";
sleep 2

##
[...]


Every developer/user has their own
GLIB_DIR1_INC=/usr/lib/glib-2.0/include
GLIB_DIR2_INC=/usr/include/glib-2.0
HLFS_DIR=/home/jiawei/workshop3/hlfs
LOG4C_DIR=$HLFS_DIR/3part/log
HDFS_DIR=$HLFS_DIR/3part/hadoop
JVM_DIR=/usr/lib/jvm/java-6-openjdk
directories when compiling QEMU, so they have to edit the qemu/configure
file manually. However, I think manually editing qemu/configure is neither
convenient nor standard for developers/users. I also noticed that other
drivers in QEMU use pkg-config for this job, but I am not clear about that
approach. Could anyone give me some suggestions? Thanks in advance.
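
For comparison, other probes in configure use $pkg_config roughly like the
sketch below; note that hlfs.pc is hypothetical here, since HLFS does not
ship a pkg-config file today:

if $pkg_config --exists hlfs; then
    hlfs_cflags=`$pkg_config --cflags hlfs`
    hlfs_libs=`$pkg_config --libs hlfs`
    QEMU_CFLAGS="$hlfs_cflags $QEMU_CFLAGS"
    libs_softmmu="$hlfs_libs $libs_softmmu"
    libs_tools="$hlfs_libs $libs_tools"
    hlfs=yes
fi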


-- 
Thanks
Harry Wei



[Qemu-devel] [QEMU]Add new entries for qemu/configure questions

2013-01-22 Thread harryxiyou
Hi all,

We added new entries to qemu/configure (QEMU v1.3.0) so that it can locate
the libraries and header files needed to compile our driver for QEMU. The
new entries in qemu/configure look like the following.

[...]
##
# hlfs probe
echo "Entering HLFS probe...";
sleep 2;
if test "$hlfs" != "no" ; then
cat > $TMPC <<EOF
#include <stdio.h>
#include <api/hlfs.h>
int main(void) {
return 0;
}
EOF
echo "come here";
GLIB_DIR1_INC=/usr/lib/glib-2.0/include
GLIB_DIR2_INC=/usr/include/glib-2.0
HLFS_DIR=/home/jiawei/workshop3/hlfs
LOG4C_DIR=$HLFS_DIR/3part/log
HDFS_DIR=$HLFS_DIR/3part/hadoop
JVM_DIR=/usr/lib/jvm/java-6-openjdk

if [ `getconf LONG_BIT` -eq 64 ];then
CLIBS="-L$LOG4C_DIR/lib64"
CLIBS="-L$HDFS_DIR/lib64 $CLIBS"
CLIBS="-L$HLFS_DIR/output/lib64 $CLIBS"
CLIBS="-L$JVM_DIR/jre/lib/amd64/server/ $CLIBS"
else if [ `getconf LONG_BIT` -eq 32 ];then
CLIBS="-L$LOG4C_DIR/lib32"
CLIBS="-L$HDFS_DIR/lib32 $CLIBS"
CLIBS="-L$JVM_DIR/jre/lib/i386/server $CLIBS"
CLIBS="-L$HLFS_DIR/output/lib32 $CLIBS"
fi
fi
echo "CLIBS is";
echo $CLIBS;
echo "CLIBS end...";
echo $libs
sleep 2;
CFLAGS="-I$GLIB_DIR1_INC"
CFLAGS="-I$GLIB_DIR2_INC $CFLAGS"
CFLAGS="-I$HLFS_DIR/src/include $CFLAGS"
CFLAGS="-I$LOG4C_DIR/include $CFLAGS"
echo "CFLAGS is";
echo $CFLAGS;
echo "CFLAGS end...";
sleep 2;
#   removed '-llog4c' '-lhdfs'
hlfs_libs="$CLIBS -lhlfs -lglib-2.0 -lgthread-2.0 -lrt -llog4c -lhdfs -ljvm"
if compile_prog "$CFLAGS" "$hlfs_libs" ; then
echo "run branch 1";
hlfs=yes
libs_tools="$libs_tools $hlfs_libs"
libs_softmmu="$libs_softmmu $hlfs_libs"
#   libs_softmmu="$libs_softmmu $hlfs_libs"
else
if test "$hlfs" = "yes" ; then
echo "run branch 2";
feature_not_found "hlfs block device"
fi
hlfs=no
fi
fi
echo $hlfs_libs;
echo "Leave HLFS ...";
sleep 2

##
[...]

Unfortunately, it doesn't work. After I execute the following commands,
ldd shows libraries that do not point to what I set in qemu/configure.

1. git clone git://git.qemu.org/qemu.git
2. cd qemu
3. git reset --hard v1.3.0
4. cp ../hlfs/patches/hlfs_driver_for_qemu.patch ./
5. git apply hlfs_driver_for_qemu.patch
6. Modify the hard-coded paths
7. ./configure
8. make
9. ldd ./qemu-img

Note:  1. Step 4 just copies the patch into the qemu dir for later patching.
       2. Step 6 just sets the library paths for my OS.

After the above commands, I get:

$ ldd ./qemu-img
linux-gate.so.1 => (0x00355000)
librt.so.1 => /lib/tls/i686/cmov/librt.so.1 (0x003ca000)
libgthread-2.0.so.0 => /usr/lib/libgthread-2.0.so.0 (0x007cf000)
libglib-2.0.so.0 => /usr/lib/libglib-2.0.so.0 (0x0011)
libz.so.1 => /lib/libz.so.1 (0x002b3000)
libhlfs.so => /usr/lib/libhlfs.so (0x0021c000)
liblog4c.so.3 => /usr/lib/liblog4c.so.3 (0x00722000)
libhdfs.so.0 => /usr/lib/libhdfs.so.0 (0x00297000)
libjvm.so => /usr/lib/libjvm.so (0x00daa000)
libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0x002c8000)
libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0x0041b000)
/lib/ld-linux.so.2 (0x0074f000)
libsnappy.so.1 =>
/home/jiawei/workshop3/hlfs/build/../3part/snappy/lib32/libsnappy.so.1
(0x00364000)
libexpat.so.1 => /lib/libexpat.so.1 (0x002e1000)
libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0x002a1000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x0057a000)
libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0x00309000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x0032f000)

I set libhlfs.so to point to /home/jiawei/workshop3/hlfs/output/lib32 but,
in fact, it resolves to /usr/lib. The same goes for libhdfs.so and
liblog4c.so. I guess I have not configured qemu/configure correctly, but I
am not clear on how I should configure it. Could anyone give me some
suggestions? Thanks in advance.
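
(A note on the likely cause: gcc's -L only affects where the linker looks
for libraries at build time; at run time the dynamic loader searches its
own paths (/usr/lib, ld.so.conf entries, LD_LIBRARY_PATH, or an rpath baked
into the binary). A quick check, for example:

LD_LIBRARY_PATH=/home/jiawei/workshop3/hlfs/output/lib32 ldd ./qemu-img

or link with -Wl,-rpath,/home/jiawei/workshop3/hlfs/output/lib32.)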

-- 
Thanks
Harry Wei



Re: [Qemu-devel] [QEMU]Install QEMU question

2013-01-22 Thread harryxiyou
On Tue, Jan 22, 2013 at 6:14 PM, Stefan Hajnoczi stefa...@redhat.com wrote:
[...]
 The fix is now in qemu.git/master.

OK, thanks for your work.

-- 
Thanks
Harry Wei



Re: [Qemu-devel] [QEMU]Add new entries for qemu/configure questions

2013-01-22 Thread harryxiyou
On Tue, Jan 22, 2013 at 4:43 PM, harryxiyou harryxi...@gmail.com wrote:
 Hi all,
[...]
 I set libhlfs.so to point to /home/jiawei/workshop3/hlfs/output/lib32 but,
 in fact, it resolves to /usr/lib. The same goes for libhdfs.so and
 liblog4c.so. I guess I have not configured qemu/configure correctly, but I
 am not clear on how I should configure it. Could anyone give me some
 suggestions? Thanks in advance.


It seems that I confused the '-L' option of the gcc command and 'ldd

[Qemu-devel] [QEMU]Where are debug logs

2013-01-22 Thread harryxiyou
Hi all,

When I debug our block driver in the QEMU source code, I cannot
find the debug logs for QEMU. I have searched /var/log/messages
and /var/log/dmesg, but I cannot find them there.
Could anyone tell me how to find the debug logs? Thanks in advance ;-)

-- 
Thanks
Harry Wei



Re: [Qemu-devel] [QEMU]Where are debug logs

2013-01-22 Thread harryxiyou
On Wed, Jan 23, 2013 at 12:58 AM, Brendan Dolan-Gavitt
brenda...@gatech.edu wrote:
 Assuming you're using one of the -d options to qemu, they will by
 default go into /tmp/qemu.log.

How do I add the -d option to qemu? (By configuring qemu/configure or the Makefile?)

 You can also specify where to put the debug log using the -D option.

How do I use the -D option when compiling?


-- 
Thanks
Harry Wei



Re: [Qemu-devel] [QEMU]Where are debug logs

2013-01-22 Thread harryxiyou
On Wed, Jan 23, 2013 at 2:12 AM, Brendan Dolan-Gavitt
brenda...@gatech.edu wrote:
 These are runtime options to QEMU. For example: qemu-system-x86_64 -D
 ~/qemu_debug.log -d in_asm.

Ok, i will have a try. Thanks very much.

 Or are you trying to add *new* debugging statements to QEMU? If so,
 look at the functions in qemu-log.h, and at the constants defined in
 exec.c.

I just want to debug our block storage driver for QEMU, which requires
looking at QEMU's logs.

-- 
Thanks
Harry Wei



Re: [Qemu-devel] [QEMU]Install QEMU question

2013-01-21 Thread harryxiyou
On Mon, Jan 21, 2013 at 5:07 PM, Andreas Färber afaer...@suse.de wrote:
 Hi,
Hi Andreas,


 There's a patch queued on qemu-trivial:
 http://patchwork.ozlabs.org/patch/213610/

 Would be nice to get this applied soon; it looks annoying but is nothing to
 worry about.


I see, thanks for your help ;-)

-- 
Thanks
Harry Wei



Re: [Qemu-devel] [QEMU]Patch for QEMU errors

2013-01-21 Thread harryxiyou
On Tue, Jan 22, 2013 at 12:49 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
[...]
 qemu-devel is not the appropriate place to ask for help with hlfs.  The
 hlfs patch is not part of qemu.git.  Try emailing Kang Hua and Wang Sen
 directly.

Hmmm..., you are right.

 The error message indicates that you are applying the patch to a QEMU
 source tree of a different version than that which the patch was created
 against.

I think so, thanks for your help.

-- 
Thanks
Harry Wei



Re: [Qemu-devel] [QEMU]Patch for QEMU errors

2013-01-21 Thread harryxiyou
On Tue, Jan 22, 2013 at 1:14 AM, harryxiyou harryxi...@gmail.com wrote:
 On Tue, Jan 22, 2013 at 12:49 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
 [...]
 qemu-devel is not the appropriate place to ask for help with hlfs.  The
 hlfs patch is not part of qemu.git.  Try emailing Kang Hua and Wang Sen
 directly.

 Hmmm..., you are right.

Once HLFS shows good performance, we will prepare another standard patch
against the latest QEMU version and submit it to the QEMU community for
merging.


 The error message indicates that you are applying the patch to a QEMU
 source tree of a different version than that which the patch was created
 against.

 I think so, thanks for your help.

 --
 Thanks
 Harry Wei



-- 
Thanks
Harry Wei



[Qemu-devel] [QEMU]Patch for QEMU errors

2013-01-20 Thread harryxiyou
Hi all,

We wrote a block storage (HLFS) patch for QEMU. However, when I applied
this driver patch to QEMU, I got some errors. Could anyone give me some
suggestions? Thanks in advance ;-)

You can see the issue I described in detail at
http://code.google.com/p/cloudxy/issues/detail?id=21

You can also see our patch for QEMU here.
http://cloudxy.googlecode.com/svn/trunk/hlfs/patches/hlfs_driver_for_qemu.patch

-- 
Thanks
Harry Wei



[Qemu-devel] [Openstack][Sheepdog][Libvirt][Qemu]Add a new block storage driver by Libvirt/Qemu way for Openstack

2013-01-19 Thread harryxiyou
Hi all,

I want to add a new block storage driver to OpenStack via the Libvirt/QEMU
path, the same way the Sheepdog driver is supported in OpenStack. I think
the theory goes like this:

1. In the OpenStack Nova branch, the OpenStack driver calls the libvirt
client and sends parameters to it. (For this, I should modify the OpenStack
Nova source code:
a. nova/nova/virt/libvirt/driver.py: add the new driver;
b. /OpenStack/nova/nova/tests/test_libvirt.py: add a test for the new driver.)

2. Following its own protocol, the libvirt client in the OpenStack Nova
branch sends the parameters to the libvirt server. (For this, I should
modify the libvirt library so that it supports the new driver, like
Sheepdog.)

3. The libvirt server calls QEMU interfaces to send the parameters to QEMU.
(For this, I should modify the QEMU source code so that QEMU supports the
new driver, like Sheepdog.)

4. In the OpenStack Cinder branch, the OpenStack driver uses QEMU commands
to create the new volumes. (For this, I should modify the OpenStack Cinder
source code:
a. add a new driver file
/OpenStack/cinder/cinder/volume/drivers/new_driver.py, like Sheepdog.py;
b. change /OpenStack/cinder/cinder/tests/test_drivers_compatibility.py
to test the new driver.)

5. Finally, I should also modify
/OpenStack/manuals/doc/src/docbkx/openstack-compute-admin/tables/hypervisors-nova-conf.xml
to configure the new driver.

Are these theories right? Should I do any other work? Could anyone give me
any other suggestions? Thanks in advance ;-)

-- 
Thanks
Harry Wei



Re: [Qemu-devel] [Openstack][Sheepdog][Libvirt][Qemu]Add a new block storage driver by Libvirt/Qemu way for Openstack

2013-01-19 Thread harryxiyou
On Sat, Jan 19, 2013 at 10:04 PM, MORITA Kazutaka
morita.kazut...@gmail.com wrote:
 At Sat, 19 Jan 2013 16:47:37 +0800,
[...]
 If you do the above work, I think you can use your file system with
 OpenStack.


Thanks for your review ;-)

 But I suggest doing them step by step.  If your file system is not
 supported in QEMU, I think libvirt won't support it.  If libvirt
 doesn't support it, OpenStack shouldn't support it too.


Yes, I think so. I will finish this job step by step ;-)



-- 
Thanks
Harry Wei