Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-17 Thread Rodrigo E. De León Plicet
On Sun, Aug 15, 2010 at 9:13 AM, David Magda  wrote:
> On Aug 14, 2010, at 19:39, Kevin Walker wrote:
>
>> I once watched a video interview with Larry from Oracle, this ass rambled
>> on
>> about how he hates cloud computing and that everyone was getting into
>> cloud
>> computing and in his opinion no one understood cloud computing, apart from
>> him... :-|
>
> If this is the video you're talking about, I think you misinterpreted what
> he meant:
>
>> Cloud computing is not only the future of computing, but it is the
>> present, and the entire past of computing is all cloud. [...] All it is is a
>> computer connected to a network. What do you think Google runs on? Do you
>> think they run on water vapour? It's databases, and operating systems, and
>> memory, and microprocessors, and the Internet. And all of a sudden it's none
>> of that, it's "the cloud". [...] All "the cloud" is, is computers on a
>> network, in terms of technology. In terms of business model, you can say
>> it's rental. All SalesForce.com was, before they were cloud computing, was
>> software-as-a-service, and then they became cloud computing. [...] Our
>> industry is so bizarre: they change a term and think they invented
>> technology.
>
> http://www.youtube.com/watch?v=rmrxN3GWHpM#t=45m
>
> I don't see anything inaccurate in what he said.

Indeed; even waaay before the SaaSillyness, they were known as service bureaus:

http://drcoddwasright.blogspot.com/2009/07/cloud-lucy-in-sky-with-razorblades.html


Re: [zfs-discuss] ZFS on Ubuntu

2010-06-25 Thread Rodrigo E. De León Plicet
On Fri, Jun 25, 2010 at 9:08 PM, Erik Trimble  wrote:
> (2) Ubuntu is a desktop distribution. Don't be fooled by their "server"
> version. It's not - it has too many idiosyncrasies and bad design choices to
> be a stable server OS.  Use something like Debian, SLES, or RHEL/CentOS.

Why would you say that?

What "idiosyncrasies and bad design choices" are you talking about?

Just curious.


Re: [zfs-discuss] Native ZFS for Linux

2010-06-10 Thread Rodrigo E. De León Plicet
On Tue, Jun 8, 2010 at 7:14 PM, Anurag Agarwal  wrote:
> We at KQInfotech initially started on an independent port of ZFS to Linux.
> When we posted our progress about the port last year, we came to know about
> the work on the LLNL port. Since then we have been working to re-base our
> changes on top of Brian's changes.
>
> We are working on porting the ZPL on top of that code. Our current status is
> that mount/unmount is working. Most of the directory operations and
> read/write are also working. There is still a lot more development work and
> testing that needs to go into this, but we are committed to making this
> happen, so please stay tuned.


Good times ahead!


[zfs-discuss] Snapshot question

2009-11-13 Thread Rodrigo E. De León Plicet
While reading about NILFS here:

http://www.linux-mag.com/cache/7345/1.html


I saw this:

> One of the most noticeable features of NILFS is that it can "continuously
> and automatically save instantaneous states of the file system without
> interrupting service". NILFS refers to these as checkpoints. In contrast,
> other file systems, such as ZFS, can provide snapshots but they have to suspend
> operation to perform the snapshot operation. NILFS doesn’t have to do
> this. The snapshots (checkpoints) are part of the file system design itself.

I don't think that's correct. Can someone clarify?
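
For context (and part of why I doubt the quote): a ZFS snapshot is a
constant-time, copy-on-write operation taken while the dataset stays mounted
and serving I/O, so nothing has to be suspended or quiesced. A minimal sketch,
with made-up pool/dataset names:

# keep a writer running in the background; the snapshot does not pause it
dd if=/dev/zero of=/tank/data/stream bs=1024k count=256 &
zfs snapshot tank/data@while-writing    # returns almost instantly, no service interruption
zfs list -t snapshot                    # the new snapshot appears, initially using no extra space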


[zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-10 Thread Rodrigo E. De León Plicet
http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS


Re: [zfs-discuss] ZFS, difference in reported available space?

2009-06-07 Thread Rodrigo E. De León Plicet
Tim, never mind. I should have Read The Fine Manual (tm) from the start.

Says zpool(1M):
---
(...)
zpool list (...)
(...)
This command reports actual physical space available to the storage
pool.  The physical space can be different from the total amount of
space that any contained datasets can actually use. The  amount  of
space  used in a raidz configuration depends on the characteristics
of the data being written. In addition, ZFS reserves some space for
internal  accounting  that  the zfs(1M) command takes into account,
but the zpool command does not. For non-full pools of a  reasonable
size,  these effects should be invisible. For small pools, or pools
that are close to being completely full,  these  discrepancies  may
become more noticeable.
(...)
---
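
In other words, the pool-wide view (zpool) and the dataset view (zfs/df) are
expected to disagree a little, and putting them side by side makes that
visible. A quick check against the pool from this thread:

zpool list coolpool              # pool view: physical space, before ZFS's internal reservations
zfs list coolpool                # dataset view: what can actually be written
zfs get used,available coolpool  # the same numbers as zfs list, property by property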

Thanks for your time.

Regards.


Re: [zfs-discuss] ZFS, difference in reported available space?

2009-06-07 Thread Rodrigo E. De León Plicet
On Sun, May 31, 2009 at 12:57 PM, Tim wrote:
>
> On Sun, May 31, 2009 at 10:29 AM, Rodrigo E. De León Plicet
>  wrote:
>>
>> Related to the attached file, I just want to understand why, if 'zpool
>> list' reports 191MB available for coolpool, 'df -h|grep cool' only
>> shows 159MB available for coolpool?
>
> That's not a bug, and the behavior isn't changing.  zpool list shows the
> size of the entire pool, including parity disks.  You cannot write user data
> to parity disks/devices.  I suppose you could complain that the zpool list
> command should account for user data as well as parity data being subtracted
> from the entire pool, but regardless, you shouldn't be using zpool list to
> track your data usage as it doesn't "hide" parity space like the standard
> userland utilities.
>
> df shows the space available minus the parity device(s).

But, in this case, I'm using mirroring. Wouldn't the ZFS parity
overhead apply to raidz/raidz2 only? I confess I'm not 100% clear on
the concepts...
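
One way to see that no parity is involved on a mirrored pool (mirrors keep
duplicate copies rather than parity) is to look at the per-vdev layout and
capacity:

zpool status coolpool      # shows the mirror vdevs; there are no dedicated parity devices
zpool iostat -v coolpool   # per-vdev capacity (used/avail), which df never sees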

Thanks for your time.

Regards.


Re: [zfs-discuss] ZFS rollback, ORA-00322: log 1 of thread 1 is not current copy (???)

2009-05-31 Thread Rodrigo E. De León Plicet
Never mind, I found the reason: Linux caching was messing with things.

Workaround: http://linux-mm.org/Drop_Caches
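
For the archives, that workaround boils down to the drop_caches interface
(run as root; echoing 1 drops the page cache, 2 drops dentries and inodes,
3 drops both):

sync                                  # write dirty pages out first
echo 3 > /proc/sys/vm/drop_caches     # then drop the page cache, dentries and inodes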

Regards.


[zfs-discuss] ZFS, difference in reported available space?

2009-05-31 Thread Rodrigo E. De León Plicet
Hi.

Just to report the following.

Thanks for your time.

Regards.



Forwarded conversation
Subject: Difference in reported available space?


From: Rodrigo E. De León Plicet 
Date: Wed, May 27, 2009 at 1:00 PM
To: zfs-f...@googlegroups.com

Sorry if the following is a dumb question.

Related to the attached file, I just want to understand why, if 'zpool
list' reports 191MB available for coolpool, 'df -h|grep cool' only
shows 159MB available for coolpool?

Thanks for your time.

Regards.

------
From: Rodrigo E. De León Plicet 
Date: Sat, May 30, 2009 at 8:51 PM
To: zfs-f...@googlegroups.com

Anyone?

--
From: Fajar A. Nugraha 
Date: Sun, May 31, 2009 at 5:34 AM
To: zfs-f...@googlegroups.com

Possibly an upstream bug.
http://markmail.org/message/zmygvaarfvseipzx

Better to ask the Sun folks, as it still happens on the latest OpenSolaris
(2009.06) as well.

--
Fajar


r...@localhost:/# uname -a
Linux newage 2.6.24-24-generic #1 SMP Wed Apr 15 15:54:25 UTC 2009 i686 
GNU/Linux

r...@localhost:/# zpool upgrade -v
This system is currently running ZFS pool version 13.

The following versions are supported:

VER  DESCRIPTION
---  
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
For more information on a particular version, including supported releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.

r...@localhost:/# for i in 1 2 3 4; do dd if=/dev/zero of=disk$i bs=1024k 
count=100; done

r...@localhost:/# du -h disk*
101M    disk1
101M    disk2
101M    disk3
101M    disk4

r...@localhost:/# zpool create coolpool mirror /disk1 /disk2

r...@localhost:/# zpool add coolpool mirror /disk3 /disk4

r...@localhost:/# zpool status
  pool: coolpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        coolpool    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk1  ONLINE       0     0     0
            /disk2  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            /disk3  ONLINE       0     0     0
            /disk4  ONLINE       0     0     0

errors: No known data errors

r...@localhost:/# zpool list
NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
coolpool   191M    78K   191M     0%  ONLINE  -

r...@localhost:/# df -h|grep cool
coolpool  159M   18K  159M   1% /coolpool



[zfs-discuss] ZFS rollback, ORA-00322: log 1 of thread 1 is not current copy (???)

2009-05-31 Thread Rodrigo E. De León Plicet
Hi.

Using ZFS-FUSE.

$SUBJECT happened 3 out of 5 times while testing; I just want to know if
someone has seen such a scenario before.

Steps:



r...@localhost:/# uname -a
Linux localhost 2.6.24-24-generic #1 SMP Wed Apr 15 15:54:25 UTC 2009
i686 GNU/Linux

r...@localhost:/# zpool upgrade -v
This system is currently running ZFS pool version 13.
The following versions are supported:
VER  DESCRIPTION
---  
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
For more information on a particular version, including supported releases, see:
http://www.opensolaris.org/os/community/zfs/version/N
Where 'N' is the version number.

r...@localhost:/# mv u01 u01.bak

r...@localhost:/# for i in 1 2 3 4; do dd if=/dev/zero of=disk$i
bs=1024k count=2048; done

r...@localhost:/# du -k disk*
2099204 disk1
2099204 disk2
2099204 disk3
2099204 disk4

r...@localhost:/# zpool create coolpool /disk1 /disk2 /disk3 /disk4

r...@localhost:/# zpool status
  pool: coolpool
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        coolpool    ONLINE       0     0     0
          /disk1    ONLINE       0     0     0
          /disk2    ONLINE       0     0     0
          /disk3    ONLINE       0     0     0
          /disk4    ONLINE       0     0     0
errors: No known data errors

r...@localhost:/# zfs create -o mountpoint=/u01 coolpool/u01

r...@localhost:/# cp -av /u01.bak/* /u01/

r...@localhost:/# zpool list
NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
coolpool  7.94G  6.43G  1.51G    80%  ONLINE  -

r...@localhost:/# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
coolpool     6.43G  1.39G    18K  /coolpool
coolpool/u01 6.43G  1.39G  6.43G  /u01

r...@localhost:/# ls -l /u01/oradata/orcl/
total 2339863
-rw-r----- 1 oracle oinstall   9781248 2009-05-31 03:27 control01.ctl
-rw-r----- 1 oracle oinstall   9781248 2009-05-31 03:27 control02.ctl
-rw-r----- 1 oracle oinstall   9781248 2009-05-31 03:27 control03.ctl
-rw-r----- 1 oracle oinstall  26214912 2009-05-31 03:26 redo01.rdo
-rw-r----- 1 oracle oinstall  26214912 2009-05-31 03:27 redo02.rdo
-rw-r----- 1 oracle oinstall  26214912 2009-05-31 03:26 redo03.rdo
-rw-r----- 1 oracle oinstall 473964544 2009-05-31 03:27 sysaux01.dbf
-rw-r----- 1 oracle oinstall 159391744 2009-05-31 03:27 sysaux02.dbf
-rw-r----- 1 oracle oinstall 602939392 2009-05-31 03:27 system01.dbf
-rw-r----- 1 oracle oinstall 214966272 2009-05-31 03:27 system02.dbf
-rw-r----- 1 oracle oinstall 125837312 2009-05-31 03:26 temp01.dbf
-rw-r----- 1 oracle oinstall 601890816 2009-05-31 03:27 undotbs01.dbf
-rw-r----- 1 oracle oinstall 105914368 2009-05-31 03:27 users01.dbf

r...@localhost:/# zfs snapshot coolpool/u01@ok

r...@localhost:/# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
coolpool         6.43G  1.39G    18K  /coolpool
coolpool/u01     6.43G  1.39G  6.43G  /u01
coolpool/u01@ok      0      -  6.43G  -

ora...@localhost:/> sqlplus / as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Sun May 31 03:38:02 2009
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup
ORACLE instance started.
Total System Global Area  418484224 bytes
Fixed Size  1300324 bytes
Variable Size 218106012 bytes
Database Buffers  192937984 bytes
Redo Buffers6139904 bytes
Database mounted.
Database opened.

SQL> CREATE TABLE FOO(BAR NUMBER);
Table created.

SQL> INSERT INTO FOO VALUES (1);
1 row created.

SQL> COMMIT;
Commit complete.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release
11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

r...@localhost:/# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
coolpool         6.44G  1.37G    18K  /coolpool
coolpool/u01     6.44G  1.37G  6.43G  /u01
coolpool/u01@ok  12.4M      -  6.43G  -

r...@localhost:/# zfs rollback coolpool/u01@ok

r...@localhost:/# ls -l /u01/oradata/orcl/
total 2339863
-rw-r----- 1 oracle oinstall   9781248 2009-05-31 03:27 control01.ctl
-rw-r----- 1 oracle oinstall   9781248 2009-05-31 03:27 control02.ctl
-rw-r----- 1 oracle oinstall   9781248 2009-05-31 03:27 control03.ctl
-rw-r----- 1 oracle oinstall  26214912 2009-05-31 03:26 redo01.rdo
-rw-r----- 1 oracle oinstall  26214912 2009-05-31 03:27 redo02.rdo
-rw-r----- 1 oracle oinstall  26214912 2009-05-31 03:26 redo03.rdo
-rw-r----- 1