Re: [zfs-discuss] VMware client solaris 10, RAW physical disk and zfs snapshots problem - all created snapshots are equal to zero.

2010-03-31 Thread Edward Ned Harvey
> I did that test and here are the results:
> 
> root@sl-node01:~# zfs list
> NAME                            USED  AVAIL  REFER  MOUNTPOINT
> mypool01                       91.9G   136G    23K  /mypool01
> mypool01/storage01             91.9G   136G  91.7G  /mypool01/storage01
> mypool01/storage01@30032010-1      0      -  91.9G  -
> mypool01/storage01@30032010-2      0      -  91.9G  -
> mypool01/storage01@30032010-3  2.15M      -  91.7G  -
> mypool01/storage01@30032010-4    41K      -  91.7G  -
> mypool01/storage01@30032010-5  1.17M      -  91.7G  -
> mypool01/storage01@30032010-6      0      -  91.7G  -
> mypool02                       91.9G   137G    24K  /mypool02
> mypool02/copies                  23K   137G    23K  /mypool02/copies
> mypool02/storage01             91.9G   137G  91.9G  /mypool02/storage01
> mypool02/storage01@30032010-1      0      -  91.9G  -
> mypool02/storage01@30032010-2      0      -  91.9G  -
> 
> As you can see, there are now differences for snapshots 4, 5 and 6,
> from the test you suggested.  But I can also see changes on snapshot
> no. 3 - that is the snapshot I complained about, because I could not
> see any differences on it last night!  Now it shows them.

Well, the first thing you should know is this:  Suppose you take a snapshot,
and create some files.  Then the snapshot still occupies no disk space.
Everything is in the current filesystem.  The only time a snapshot occupies
disk space is when the snapshot contains data that is missing from the
current filesystem.  That is, if you "rm" or overwrite some files in the
current filesystem, then you will see the size of the snapshot grow.
Make sense?
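
A minimal illustration, if you want to see it for yourself (the
"mypool01/demo" dataset and the file name are just examples, not
anything on your system):

zfs create mypool01/demo
dd if=/dev/urandom of=/mypool01/demo/testfile bs=1024k count=10
zfs snapshot mypool01/demo@before
zfs list -t snapshot      # demo@before uses ~0; the 10M still lives in the live filesystem
rm /mypool01/demo/testfile
zfs list -t snapshot      # demo@before now holds the only copy, so its USED grows to roughly 10M
zfs destroy mypool01/demo@before
zfs destroy mypool01/demo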

That brings up a question though.  If you did the commands as I wrote them,
it would mean you created a 1G file, took a snapshot, and rm'd the file.
Therefore your snapshot should contain at least 1G.  I am confused by the
fact that you only have 1-2M in your snapshot.  Maybe I messed up the
command I told you, or you messed up entering it on the system, and you only
created a 1M file, instead of a 1G file?
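
For reference, bs=1024k count=1024 should write 1024 x 1 MiB = 1 GiB,
while bs=1024 count=1024 would only write 1 MiB.  Before you rm the
file, you can confirm what dd actually wrote with a plain listing
(1073741824 bytes for a full 1G file):

ls -l /mypool01/storage01/randomfile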


> What is still strange: snapshots 1 and 2 are the oldest, but they are
> still equal to zero!  After the changes and snapshots 3, 4, 5 and 6 I
> would expect snapshots 1 and 2 to also be "recording" changes on the
> storage01 file system, but they are not... could it be that snapshots
> 1 and 2 are somehow "broken"?

If some file existed during all of the old snapshots, and you destroy your
later snapshots, then the space charged to those later snapshots starts to
fall onto the older snapshots, until you destroy the oldest snapshot that
contained the data.  At that point the data is truly gone from all of the
snapshots.
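
A rough sketch of how you could watch that happen (the snapshot name
below is only a placeholder; pick one you really do not need any more):

zfs list -r -t snapshot -o name,used,referenced mypool01   # note the USED column before
zfs destroy mypool01/storage01@some-later-snapshot         # placeholder name
zfs list -r -t snapshot -o name,used,referenced mypool01   # blocks now uniquely referenced by an older snapshot show up in its USED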



Re: [zfs-discuss] VMware client solaris 10, RAW physical disk and zfs snapshots problem - all created snapshots are equal to zero.

2010-03-31 Thread Vladimir Novakovic
On Wed, Mar 31, 2010 at 4:01 AM, Edward Ned Harvey wrote:
>> The problem I have now is that every snapshot I create is always
>> equal to zero... ZFS is just not storing the changes that I made to
>> the file system before taking the snapshot.
>>
>> root@sl-node01:~# zfs list
>> NAME                            USED  AVAIL  REFER  MOUNTPOINT
>> mypool01                       91.9G   137G    23K  /mypool01
>> mypool01/storage01             91.9G   137G  91.7G  /mypool01/storage01
>> mypool01/storage01@30032010-1      0      -  91.9G  -
>> mypool01/storage01@30032010-2      0      -  91.9G  -
>> mypool01/storage01@30032010-3      0      -  91.7G  -
>> mypool02                       91.9G   137G    24K  /mypool02
>> mypool02/copies                  23K   137G    23K  /mypool02/copies
>> mypool02/storage01             91.9G   137G  91.9G  /mypool02/storage01
>> mypool02/storage01@30032010-1      0      -  91.9G  -
>> mypool02/storage01@30032010-2      0      -  91.9G  -
>
> Try this:
> zfs snapshot mypool01/storage01@30032010-4
> dd if=/dev/urandom of=/mypool01/storage01/randomfile bs=1024k count=1024
> zfs snapshot mypool01/storage01@30032010-5
> rm /mypool01/storage01/randomfile
> zfs snapshot mypool01/storage01@30032010-6
> zfs list
>
> And see what happens.
>
>

I did that test and here are the results:

root@sl-node01:~# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
mypool01                       91.9G   136G    23K  /mypool01
mypool01/storage01             91.9G   136G  91.7G  /mypool01/storage01
mypool01/storage01@30032010-1      0      -  91.9G  -
mypool01/storage01@30032010-2      0      -  91.9G  -
mypool01/storage01@30032010-3  2.15M      -  91.7G  -
mypool01/storage01@30032010-4    41K      -  91.7G  -
mypool01/storage01@30032010-5  1.17M      -  91.7G  -
mypool01/storage01@30032010-6      0      -  91.7G  -
mypool02                       91.9G   137G    24K  /mypool02
mypool02/copies                  23K   137G    23K  /mypool02/copies
mypool02/storage01             91.9G   137G  91.9G  /mypool02/storage01
mypool02/storage01@30032010-1      0      -  91.9G  -
mypool02/storage01@30032010-2      0      -  91.9G  -

As you can see, there are now differences for snapshots 4, 5 and 6, from
the test you suggested.  But I can also see changes on snapshot no. 3 -
that is the snapshot I complained about, because I could not see any
differences on it last night!  Now it shows them.

The only change I have made since yesterday is a system restart.

What is still strange: snapshots 1 and 2 are the oldest, but they are
still equal to zero!  After the changes and snapshots 3, 4, 5 and 6 I
would expect snapshots 1 and 2 to also be "recording" changes on the
storage01 file system, but they are not... could it be that snapshots
1 and 2 are somehow "broken"?  Anyway, this is the first time I have
seen snapshots behave this strangely.  It is true that my configuration
(Win7 / VMware Player / Solaris 10 / zpool on a physical disk) could be
the main reason, but I am still confused that snapshot "recording"
sometimes works (as it does for 3, 4, 5 and 6) and sometimes does not
(as for snapshots 1 and 2).  With Solaris and zpools on real hardware I
never saw this strange snapshot behaviour.

It looks like something prevented snapshots 1 and 2 from working
correctly, or am I wrong?  I will monitor it and experiment with it
over the next few days...
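
A convenient way to keep an eye on it in the meantime is the standard
snapshot listing with a creation timestamp:

zfs list -r -t snapshot -o name,used,referenced,creation mypool01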

Many thanks to Richard Jahnel and Edward Ned Harvey!

Regards,
Vladimir


Re: [zfs-discuss] VMware client solaris 10, RAW physical disk and zfs snapshots problem - all created snapshots are equal to zero.

2010-03-30 Thread Edward Ned Harvey
> The problem I have now is that every snapshot I create is always
> equal to zero... ZFS is just not storing the changes that I made to
> the file system before taking the snapshot.
>
> root@sl-node01:~# zfs list
> NAME                            USED  AVAIL  REFER  MOUNTPOINT
> mypool01                       91.9G   137G    23K  /mypool01
> mypool01/storage01             91.9G   137G  91.7G  /mypool01/storage01
> mypool01/storage01@30032010-1      0      -  91.9G  -
> mypool01/storage01@30032010-2      0      -  91.9G  -
> mypool01/storage01@30032010-3      0      -  91.7G  -
> mypool02                       91.9G   137G    24K  /mypool02
> mypool02/copies                  23K   137G    23K  /mypool02/copies
> mypool02/storage01             91.9G   137G  91.9G  /mypool02/storage01
> mypool02/storage01@30032010-1      0      -  91.9G  -
> mypool02/storage01@30032010-2      0      -  91.9G  -

Try this:
zfs snapshot mypool01/storage01@30032010-4
dd if=/dev/urandom of=/mypool01/storage01/randomfile bs=1024k count=1024
zfs snapshot mypool01/storage01@30032010-5
rm /mypool01/storage01/randomfile
zfs snapshot mypool01/storage01@30032010-6
zfs list

And see what happens.



Re: [zfs-discuss] VMware client solaris 10, RAW physical disk and zfs snapshots problem - all created snapshots are equal to zero.

2010-03-30 Thread Richard Jahnel
What size is the gz file if you do an incremental send to a file?

something like:

zfs send -i snap1@vol snap2@vol | gzip > /someplace/somefile.gz
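
Spelled out with the dataset from this thread, that would be roughly the
following (the output path under /mypool02/copies is just an example):

zfs send -i mypool01/storage01@30032010-1 mypool01/storage01@30032010-2 | gzip > /mypool02/copies/incr-1to2.gz
ls -l /mypool02/copies/incr-1to2.gz   # a file of only a few KB means ZFS really saw no changes between the two snapshots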


[zfs-discuss] VMware client solaris 10, RAW physical disk and zfs snapshots problem - all created snapshots are equal to zero.

2010-03-30 Thread Vladimir Novakovic
I'm running Windows 7 64-bit and VMware Player 3 with Solaris 10 64-bit
as a guest.  I have added an additional hard drive to the virtual
Solaris 10 as a physical (raw) disk.  Solaris 10 can see and use the
already created zpool without a problem.  I could also create an
additional zpool on the other mounted raw device, and I can synchronize
a zfs file system to the other physical disk and zpool with zfs
send/receive.
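
For reference, that synchronization looks roughly like this (the
snapshot names are only examples of what I use):

# first full copy; the destination dataset must not exist yet
zfs snapshot mypool01/storage01@sync1
zfs send mypool01/storage01@sync1 | zfs receive mypool02/storage01
# later runs send only the differences; -F first rolls the destination
# back to the last received snapshot
zfs snapshot mypool01/storage01@sync2
zfs send -i mypool01/storage01@sync1 mypool01/storage01@sync2 | zfs receive -F mypool02/storage01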

All of those physical disks are visible in Windows 7 but are not
initialized there.

The problem I have now is that every snapshot I create is always equal
to zero... ZFS is just not storing the changes that I made to the file
system before taking the snapshot.

Before each snapshot I added or deleted various files, so the snapshots
should capture those differences, but that is not happening in my
case. :-(

My zfs list now looks like this:

root@sl-node01:~# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
mypool01                       91.9G   137G    23K  /mypool01
mypool01/storage01             91.9G   137G  91.7G  /mypool01/storage01
mypool01/storage01@30032010-1      0      -  91.9G  -
mypool01/storage01@30032010-2      0      -  91.9G  -
mypool01/storage01@30032010-3      0      -  91.7G  -
mypool02                       91.9G   137G    24K  /mypool02
mypool02/copies                  23K   137G    23K  /mypool02/copies
mypool02/storage01             91.9G   137G  91.9G  /mypool02/storage01
mypool02/storage01@30032010-1      0      -  91.9G  -
mypool02/storage01@30032010-2      0      -  91.9G  -

As you can see, each snapshot is equal to zero despite the changes I
have made to the zfs content.
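
To double-check, what each snapshot actually captured can also be
inspected through the hidden .zfs directory of the file system (these
paths are accessible even while the snapdir property is left at its
default of "hidden"):

ls /mypool01/storage01/.zfs/snapshot/
ls -l /mypool01/storage01/.zfs/snapshot/30032010-1/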

I would like to understand what is preventing the virtual Solaris 10
from tracking those changes when it creates a snapshot.

Is it a problem with Windows, with VMware Player, or basically with the
raw device mounted into the virtual machine?  Does anyone have
experience with this issue?


Regards,
Vladimir