Re: [Qemu-devel] PATCH: Support for multi-file raw images

2006-05-12 Thread Ryan Lortie
On Fri, 2006-05-12 at 22:21 +0200, Flavio Visentin wrote:
> OT for qemu, but if you use *rsync*, then only the changed parts of the
> file are copied, not the whole file. Rsync was written for just this
> reason, to avoid copying unnecessary unchanged data.

But as soon as the modification time changes, rsync still needs to scan
the entire 20GB file to determine which parts have changed.  With two
separate drives on a local system this takes about as long as just
copying the entire file.

Cheers



_______________________________________________
Qemu-devel mailing list
Qemu-devel@nongnu.org
http://lists.nongnu.org/mailman/listinfo/qemu-devel


[Qemu-devel] PATCH: Support for multi-file raw images

2006-05-11 Thread Ryan Lortie
Hello.

Attached is a C file (and small patch) to add support for multi-file raw
images to QEMU.  The rationale (for me at least) is as follows:

I use rsync to backup my home directory.  The act of starting up QEMU
changes a 20GB file on my drive.  This causes 20GB of extra copying next
time I do backups.  If I could split the drive image into smaller parts
(maybe 2048 10MB files) then the amount of extra copying is drastically
reduced (since only a few of these files are modified).

There are definitely other reasons that this may be useful.

It works as follows:

1) Create a bunch of files of equal size with names of the form

  harddriveXX

   where XX is a hex number starting from 00 going to whatever.

NB: you can have any number of XXs from 0 (ie: basically a single-file
image) to 6 (ie: up to 16 million parts).
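To illustrate step 1, here is a small sketch (not part of the patch; the
helper name make_parts is made up) that creates n equally sized parts
following this naming scheme:

```c
/* Sketch (not from the patch): create n equally sized image parts named
 * <prefix><hex index>, e.g. harddrive00 .. harddriveff for digits = 2.
 * Writing a single byte at offset size-1 yields a sparse file on Linux,
 * so the parts take almost no disk space until written to. */
#include <stdio.h>

static int make_parts (const char *prefix, int digits, int n, long size)
{
    char name[1024];
    int i;

    for (i = 0; i < n; i++) {
        FILE *f;

        /* %0*x pads the hex index to the requested digit count. */
        snprintf (name, sizeof name, "%s%0*x", prefix, digits, i);
        f = fopen (name, "wb");
        if (f == NULL)
            return -1;
        if (fseek (f, size - 1, SEEK_SET) != 0 || fputc (0, f) == EOF) {
            fclose (f);
            return -1;
        }
        fclose (f);
    }
    return 0;
}
```

For the 20GB/10MB split mentioned above, that would be 2048 parts with a
three-hex-digit suffix.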

2) Run qemu multi:harddrive000

QEMU will detect the multi-part image, do a sanity check to ensure all
the parts are present, equally sized, and accessible (this consists of
calling 'stat' and 'access' on each file), and then start using the set
of files as the hard drive.


Some notes:
 o I've tested only on Linux.  I'm positive the code is not portable
   to other systems.  Feedback about this, please.

 o Included is optional support for limiting the number of open fds.
   Cache eviction is done using a least-recently-opened policy
   (efficiently implemented using a ring buffer).

 o The code makes use of the euidaccess() syscall which is Linux-only.
   BSD has eaccess() to do the same thing.  Both of these calls are
   approximately equal to POSIX access() except that the euid of the
   process is considered instead of the real uid.  The call is used
   to determine if the device should be marked 'read_only' by checking
   for write access to the files comprising the device.  If access()
   is used and QEMU is installed setuid/gid to give the user access to
   a drive image then the result of using access() will be that the
   drive is incorrectly flagged read-only.

 o If the files comprising the device are deleted (for example) while
   QEMU is running then this is quite bad.  Currently this will result
   in read/write requests returning -1.  Maybe it makes sense to panic
   and cause QEMU to exit.

 o All comments welcome.
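Regarding the euidaccess()/eaccess() note above, the portability issue
could be papered over with a shim along these lines (a sketch only; the
qemu_eaccess name is made up, and the platform macros are assumptions):

```c
/* Portability sketch: pick an effective-uid access check per platform. */
#define _GNU_SOURCE             /* exposes euidaccess() on glibc */
#include <unistd.h>

#if defined(__GLIBC__)
# define qemu_eaccess(path, mode) euidaccess ((path), (mode))
#elif defined(__FreeBSD__) || defined(__NetBSD__) || defined(__OpenBSD__)
# define qemu_eaccess(path, mode) eaccess ((path), (mode))
#else
/* Plain access() checks the real uid; under a setuid QEMU this can
 * wrongly flag the image read-only, as described above. */
# define qemu_eaccess(path, mode) access ((path), (mode))
#endif
```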

Cheers.
/*
 * Block driver for multiple-file raw images.
 * Copyright © 2006 Ryan Lortie [EMAIL PROTECTED]
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation; either version 2 of the
 * License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public
 * License along with this program; if not, write to the
 * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
 * Boston, MA 02111-1307, USA.
 */

#include "vl.h"
#include "block_int.h"

/* The maximum number of fds to use. */
#define LIMIT_FDS 15

#ifdef LIMIT_FDS

#define RING_BUFFER_SIZE LIMIT_FDS
struct _RingBuffer
{
  int buffer[RING_BUFFER_SIZE+1];
  int head;
  int tail;
};

typedef struct _RingBuffer RingBuffer;

static RingBuffer *
ring_buffer_new (void)
{
  RingBuffer *ring;

  ring = qemu_mallocz (sizeof (RingBuffer));

  return ring;
}

static int
ring_buffer_get (RingBuffer *ring)
{
  int value;

  if (ring->head == ring->tail)
    return -1;

  value = ring->buffer[ring->head];
  ring->head = (ring->head + 1) % (RING_BUFFER_SIZE + 1);

  return value;
}

static int
ring_buffer_put (RingBuffer *ring, int value)
{
  int new_tail;

  new_tail = (ring->tail + 1) % (RING_BUFFER_SIZE + 1);

  if (ring->head == new_tail)
    return -1;

  ring->buffer[ring->tail] = value;
  ring->tail = new_tail;

  return 0;
}
#endif

struct _MultiStatus
{
  char *level;
  int *fds;

  char *filename;
  int fnlength;
  char pattern[5];

  int64_t filesize;
  int n_files;
  int read_only;
};

typedef struct _MultiStatus MultiStatus;

typedef ssize_t (*disk_operation) (int, const uint8_t *, ssize_t);

static int
multi_status_fill (MultiStatus *status, const char *filename)
{
  struct stat statbuf;
  int zeros = 0;
  int i;

  if (!strstart(filename, "multi:", NULL))
    return -1;

  filename += 6;

  for (i = 0; filename[i]; i++)
if (filename[i] == '0')
  zeros++;
else
  zeros = 0;

  if (zeros > 6)
    zeros = 6;

  status->fnlength = strlen (filename);
  status->filename = qemu_malloc (status->fnlength + 1);
  status->fnlength -= zeros;
  strcpy (status->filename, filename);

  if (zeros == 0)
    status->pattern[status->fnlength] = '\0';
  else
    sprintf (status->pattern, "%%0%ux", zeros);

  status->n_files = 1 << (4 * zeros);

  status->read_only = 0;
  for (i = 0; i < status->n_files; i++)
  {
sprintf