Re: [zfs-discuss] [illumos-Developer] zfs refratio property

2011-06-06 Thread Haudy Kazemi
On 6/6/2011 5:02 PM, Richard Elling wrote: On Jun 6, 2011, at 2:54 PM, Yuri Pankov wrote: On Mon, Jun 06, 2011 at 02:19:50PM -0700, Matthew Ahrens wrote: I have implemented a new property for ZFS, refratio, which is the compression ratio for referenced space (the compressratio is the ratio
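For reference, the two properties can be compared side by side on a build that carries this change (the pool/dataset name here is hypothetical):

    # compressratio covers all space; refratio covers referenced space only
    zfs get compressratio,refratio tank/data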

Re: [zfs-discuss] Any limit on pool hierarchy?

2010-11-09 Thread Haudy Kazemi
Maurice Volaski wrote: I think my initial response got mangled. Oops. Creating a ZFS pool out of files stored on another ZFS pool. The main reasons that have been given for not doing this are unknown edge and corner cases that may lead to deadlocks, and that it creates a complex structure

Re: [zfs-discuss] Any limit on pool hierarchy?

2010-11-08 Thread Haudy Kazemi
Bryan Horstmann-Allen wrote: On 2010-11-08 13:27:09, Peter Taps wrote: From zfs documentation, it appears that a vdev can be built from more vdevs. That is, a raidz vdev can be built across a bunch of mirrored

Re: [zfs-discuss] Drive randomly being removed from pool

2010-11-08 Thread Haudy Kazemi
besson3c wrote: This has happened to me several times now, I'm confused as to why... This one particular drive, and its always the same drive, randomly shows up as being removed from the pool. I have to export and import the pool in order to have this disk seen again and for re-silvering to

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-11-01 Thread Haudy Kazemi
Ross Walker wrote: On Nov 1, 2010, at 5:09 PM, Ian D rewar...@hotmail.com wrote: Maybe you are experiencing this: http://opensolaris.org/jive/thread.jspa?threadID=11942 It does look like this... Is this really the expected behaviour? That's just unacceptable. It is so bad it

Re: [zfs-discuss] Jumping ship.. what of the data

2010-10-27 Thread Haudy Kazemi
Finding PCIe x1 cards with more than 2 SATA ports is difficult so you might want to make sure that either your chosen motherboard has lots of PCIe slots or has some wider slots. If you plan on using on-board video and re-using the x16 slot for something else, you should verify that the BIOS

Re: [zfs-discuss] How does dedup work over iSCSI?

2010-10-22 Thread Haudy Kazemi
Neil Perrin wrote: On 10/22/10 15:34, Peter Taps wrote: Folks, Let's say I have a volume being shared over iSCSI. The dedup has been turned on. Let's say I copy the same file twice under different names at the initiator end. Let's say each file ends up taking 5 blocks. For dedupe to

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-10-22 Thread Haudy Kazemi
One thing suspicious is that we notice a slowdown of one pool when the other is under load. How can that be? Ian A network switch that is being maxed out? Some switches cannot switch at rated line speed on all their ports all at the same time. Their internal buses simply don't have

Re: [zfs-discuss] Newbie ZFS Question: RAM for Dedup

2010-10-22 Thread Haudy Kazemi
Never Best wrote: Sorry I couldn't find this anywhere yet. For deduping it is best to have the lookup table in RAM, but I wasn't too sure how much RAM is suggested? ::Assuming 128KB Block Sizes, and 100% unique data: 1TB*1024*1024*1024/128 = 8388608 Blocks ::Each Block needs 8 byte pointer?
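The poster's arithmetic, restated as a shell sketch. Note the 8-byte figure is the poster's own assumption; in-core DDT entries are commonly cited at a few hundred bytes each, so treat this as a lower bound:

    # blocks in 1 TB at a 128 KB recordsize
    echo $(( 1024 * 1024 * 1024 / 128 ))     # 8388608 blocks
    # naive table size at the assumed 8 bytes per block, in MB
    echo $(( 8388608 * 8 / 1024 / 1024 ))    # 64 MB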

Re: [zfs-discuss] vdev failure - pool loss ?

2010-10-22 Thread Haudy Kazemi
Bob Friesenhahn wrote: On Tue, 19 Oct 2010, Cindy Swearingen wrote: unless you use copies=2 or 3, in which case your data is still safe for those datasets that have this option set. This advice is a little too optimistic. Increasing the copies property value on datasets might help in some
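For reference, the property under discussion is set per dataset (name hypothetical) and only affects blocks written after it is set:

    zfs set copies=2 tank/important
    zfs get copies tank/important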

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-10-22 Thread Haudy Kazemi
Tim Cook wrote: On Fri, Oct 22, 2010 at 10:40 PM, Haudy Kazemi kaze0...@umn.edu wrote: One thing suspicious is that we notice a slowdown of one pool when the other is under load. How can that be? Ian A network switch

Re: [zfs-discuss] Pools inside pools

2010-09-23 Thread Haudy Kazemi
Mattias Pantzare wrote: On Wed, Sep 22, 2010 at 20:15, Markus Kovero markus.kov...@nebula.fi wrote: Such configuration was known to cause deadlocks. Even if it works now (which I don't expect to be the case) it will make your data to be cached twice. The CPU utilization will also be

Re: [zfs-discuss] Pools inside pools

2010-09-23 Thread Haudy Kazemi
Erik Trimble wrote: On 9/22/2010 11:15 AM, Markus Kovero wrote: Such configuration was known to cause deadlocks. Even if it works now (which I don't expect to be the case) it will make your data to be cached twice. The CPU utilization will also be much higher, etc. All in all I strongly

Re: [zfs-discuss] Pools inside pools

2010-09-23 Thread Haudy Kazemi
Markus Kovero wrote: What is an example of where a checksummed outside pool would not be able to protect a non-checksummed inside pool? Would an intermittent RAM/motherboard/CPU failure that only corrupted the inner pool's block before it was passed to the outer pool (and did not corrupt the

Re: [zfs-discuss] resilver = defrag?

2010-09-13 Thread Haudy Kazemi
Richard Elling wrote: On Sep 13, 2010, at 5:14 AM, Edward Ned Harvey wrote: From: Richard Elling [mailto:rich...@nexenta.com] This operational definition of fragmentation comes from the single-user, single-tasking world (PeeCees). In that world, only one thread writes files from one

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-09 Thread Haudy Kazemi
Comment at end... Mattias Pantzare wrote: On Wed, Sep 8, 2010 at 15:27, Edward Ned Harvey sh...@nedharvey.com wrote: From: pantz...@gmail.com [mailto:pantz...@gmail.com] On Behalf Of Mattias Pantzare It is about 1 vdev with 12 disks or 2 vdevs with 6 disks. If you have 2 vdevs you have to

Re: [zfs-discuss] Suggested RaidZ configuration...

2010-09-09 Thread Haudy Kazemi
Erik Trimble wrote: On 9/9/2010 2:15 AM, taemun wrote: Erik: does that mean that keeping the number of data drives in a raidz(n) to a power of two is better? In the example you gave, you mentioned 14kb being written to each drive. That doesn't sound very efficient to me. (when I say the

Re: [zfs-discuss] 4k block alignment question (X-25E)

2010-08-31 Thread Haudy Kazemi
Christopher George wrote: What is an NVRAM-based SSD? It is simply an SSD (Solid State Drive) which does not use Flash, but instead uses power-protected (non-volatile) DRAM as the primary storage medium. http://en.wikipedia.org/wiki/Solid-state_drive I consider the DDRdrive X1 to be a

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-18 Thread Haudy Kazemi
Ross Walker wrote: On Aug 18, 2010, at 10:43 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Wed, 18 Aug 2010, Joerg Schilling wrote: Linus is right with his primary decision, but this also applies for static linking. See Lawrence Rosen for more information; the GPL does

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-17 Thread Haudy Kazemi
BM wrote: On Tue, Aug 17, 2010 at 5:11 AM, Andrej Podzimek and...@podzimek.org wrote: I did not say there is something wrong about published reports. I often read them. (Who doesn't?) However, there are no trustworthy reports on this topic yet, since Btrfs is unfinished. Let's see some

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Haudy Kazemi
David Dyer-Bennet wrote: On Sun, August 15, 2010 20:44, Peter Jeremy wrote: Irrespective of the above, there is nothing requiring Oracle to release any future btrfs or ZFS improvements (or even bugfixes). They can't retrospectively change the license on already released code but they can

[zfs-discuss] ZFS pool and filesystem version list, OpenSolaris builds list

2010-08-15 Thread Haudy Kazemi
Hello, This is a consolidated list of ZFS pool and filesystem versions, along with the builds and systems they are found in. It is based on multiple online sources. Some of you may find it useful in figuring out where things are at across the spectrum of systems supporting ZFS including

Re: [zfs-discuss] ZFS diaspora (was Opensolaris is apparently dead)

2010-08-15 Thread Haudy Kazemi
For the ZFS diaspora: 1.) For the immediate and near-term future (say 1 year), what makes a better choice for a new install of a ZFS-class filesystem? Would it be FreeBSD 8 with its older ZFS version (pool version 14), or NexentaCore with newer ZFS (pool version 25(?) ), NexentaStor, or

Re: [zfs-discuss] Raidz - what is stored in parity?

2010-08-11 Thread Haudy Kazemi
Peter Taps wrote: Hi Eric, Thank you for your help. At least one part is clear now. I still am confused about how the system is still functional after one disk fails. Consider my earlier example of 3 disks zpool configured for raidz-1. To keep it simple let's not consider block sizes.
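The single-parity idea behind raidz-1 can be sketched with XOR: parity is the XOR of the data, and XORing the parity with the surviving data recovers the missing piece. The values below are arbitrary illustrations, not the real on-disk format:

    a=$(( 0xA5 )); b=$(( 0x3C ))
    p=$(( a ^ b ))      # parity written to the third disk
    echo $(( p ^ b ))   # prints 165 (0xA5): 'a' rebuilt from parity plus 'b'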

Re: [zfs-discuss] 1tb SATA drives

2010-07-24 Thread Haudy Kazemi
But if it were just the difference between a 5min freeze when a drive fails, and a 1min freeze when a drive fails, I don't see that anyone would care---both are bad enough to invoke upper-layer application timeouts of iSCSI connections and load balancers, but not disastrous. But it's not. ZFS

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Haudy Kazemi
Could it somehow not be compiling 64-bit support? -- Brent Jones I thought about that but it says when it boots up that it is 64-bit, and I'm able to run 64-bit binaries. I wonder if it's compiling for the wrong processor optimization though? Maybe if it is missing some of the newer

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Haudy Kazemi
A few things: 1.) did you move your drives around or change which controller each one was connected to sometime after installing and setting up OpenSolaris? If so, a pool export and re-import may be in order. 2.) are you sure the drive is failing? Does the problem only affect this drive
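The export/re-import in point 1 amounts to (pool name hypothetical):

    zpool export tank
    zpool import        # list pools visible at their new device paths
    zpool import tank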

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Haudy Kazemi
3.) on some systems I've found another version of the iostat command to be more useful, particularly when iostat -En leaves the serial number field empty or otherwise doesn't read the serial number correctly. Try this: ' iostat -Eni ' indeed outputs Device ID on some of the
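For reference, the two variants (output fields vary by driver and device):

    iostat -En     # per-device error stats, with model/serial when readable
    iostat -Eni    # same, plus the Device ID field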

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Haudy Kazemi
Marty Scholes wrote: ' iostat -Eni ' indeed outputs Device ID on some of the drives, but I still can't understand how it helps me to identify the model of a specific drive. Get and install smartmontools. Period. I resisted it for a few weeks but it has been an amazing tool. It will tell
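A minimal smartmontools invocation, assuming a Solaris-style raw device path (the path is illustrative, and some SATA controllers need an explicit type such as -d sat,12):

    smartctl -i /dev/rdsk/c0t0d0    # prints model, serial number, firmware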

Re: [zfs-discuss] Help identify failed drive

2010-07-19 Thread Haudy Kazemi
Yuri Homchuk wrote: Well, this is REALLY a 300-user production server with 12 VMs running on it, so I definitely won't play with firmware :) I can easily identify which drive is which by physically looking at it. It's just sad to realize that I cannot trust Solaris anymore. I never

Re: [zfs-discuss] ZFS on Ubuntu

2010-07-19 Thread Haudy Kazemi
Rodrigo E. De León Plicet wrote: On Fri, Jun 25, 2010 at 9:08 PM, Erik Trimble erik.trim...@oracle.com wrote: (2) Ubuntu is a desktop distribution. Don't be fooled by their server version. It's not - it has too many idiosyncrasies and bad design choices to be a stable server OS. Use

Re: [zfs-discuss] nfs share of nested zfs directories?

2010-05-27 Thread Haudy Kazemi
Brandon High wrote: On Thu, May 27, 2010 at 1:02 PM, Cassandra Pugh cp...@pppl.gov wrote: I was wondering if there is a special option to share out a set of nested directories? Currently if I share out a directory with /pool/mydir1/mydir2 on a system, mydir1 shows up, and I can see
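The usual answer is that every ZFS filesystem is a separate NFS share, so nested filesystems must each be shared and, for NFSv3 clients, mounted individually. A sketch with hypothetical names:

    zfs set sharenfs=on pool/mydir1     # descendants inherit the property...
    zfs get -r sharenfs pool/mydir1     # ...but each fs is still its own share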

Re: [zfs-discuss] USB Flashdrive as SLOG?

2010-05-25 Thread Haudy Kazemi
Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Kyle McDonald I've been thinking lately that I'm not sure I like the root pool being unprotected, but I can't afford to give up another drive bay. I'm guessing

Re: [zfs-discuss] New SSD options

2010-05-22 Thread Haudy Kazemi
Bob Friesenhahn wrote: On Fri, 21 May 2010, Don wrote: You could literally split a SATA cable and add in some capacitors for just the cost of the caps themselves. The issue there is whether the caps would present too large a current drain on initial charge-up. If they do, then you need to add

Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Haudy Kazemi
Brian wrote: Sometimes when it hangs on boot hitting space bar or any key won't bring it back to the command line. That is why I was wondering if there was a way to not show the splashscreen at all, and rather show what it was trying to load when it hangs. Look at these threads:

Re: [zfs-discuss] dedup status

2010-05-16 Thread Haudy Kazemi
Erik Trimble wrote: Roy Sigurd Karlsbakk wrote: Hi all, I've been doing a lot of testing with dedup and concluded it's not really ready for production. If something fails, it can render the pool unusable for hours or maybe days, perhaps due to single-threaded stuff in zfs. There is also very

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-16 Thread Haudy Kazemi
I don't really have an explanation. Perhaps flaky second controller hardware that only works sometimes and can corrupt pools? Have you seen any other strangeness/instability on this computer? Did you use zpool export before moving the disks the first time to the second controller, or did

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Haudy Kazemi
Can you recreate the problem with a second pool on a second set of drives, like I described in my earlier post? Right now it seems like your problem is mostly due to the missing log device. I'm wondering if that missing log device is what messed up the initial move to the other controller,

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-15 Thread Haudy Kazemi
Jan Hellevik wrote: Yes, I can try to do that. I do not have any more of this brand of disk, but I guess that does not matter. It will have to wait until tomorrow (I have an appointment in a few minutes, and it is getting late here in Norway), but I will try first thing tomorrow. I guess a

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Haudy Kazemi
Now that you've re-imported, it seems like zpool clear may be the command you need, based on discussion in these links about missing and broken zfs logs: http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg37554.html http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg30469.html
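For reference, the commands those threads point at (pool name hypothetical; importing with a missing log device via -m only exists on builds new enough to support it):

    zpool clear tank        # clear persistent errors after the re-import
    zpool import -m tank    # newer builds: import despite a missing log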

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Haudy Kazemi
Is there any chance that the second controller wrote something onto the disks when it saw the disks attached to it, thus corrupting the ZFS drive signatures or more? I've heard that some controllers require drives to be initialized by them and/or signatures written to drives by them. Maybe

Re: [zfs-discuss] Exporting iSCSI - it's still getting all the ZFS protection, right?

2010-05-07 Thread Haudy Kazemi
Brandon High wrote: On Mon, May 3, 2010 at 4:33 PM, Michael Shadle mike...@gmail.com wrote: Is ZFS doing it's magic checksumming and whatnot on this share, even though it is seeing junk data (NTFS on top of iSCSI...) or am I not getting any benefits from this setup at all (besides thin

Re: [zfs-discuss] Thoughts on drives for ZIL/L2ARC?

2010-04-25 Thread Haudy Kazemi
Travis Tabbal wrote: I have a few old drives here that I thought might help me a little, though not at much as a nice SSD, for those uses. I'd like to speed up NFS writes, and there have been some mentions that even a decent HDD can do this, though not to the same level a good SSD will. The

Re: [zfs-discuss] ZFS Pool, what happen when disk failure

2010-04-24 Thread Haudy Kazemi
aneip wrote: I'm really new to zfs and also raid. I have 3 hard disks: 500GB, 1TB, 1.5TB. On each HD I wanna create a 150GB partition + remaining space. I wanna create raidz for the 3x150GB partitions. This is for my documents + photos. You should be able to create 150 GB slices on each drive, and
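A sketch of that layout, assuming the 150 GB slices have already been created with format(1M) and using hypothetical device names:

    # raidz across one 150 GB slice from each of the three disks
    zpool create docs raidz c1t0d0s0 c1t1d0s0 c1t2d0s0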

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-23 Thread Haudy Kazemi
Sunil wrote: If you like, you can later add a fifth drive relatively easily by replacing one of the slices with a whole drive. How does this affect my available storage if I were to replace both of those sparse 500GB files with a real 1TB drive? Will it be the same? Or will I have
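The replacement step would look roughly like this (names hypothetical; a raidz vdev only grows after every member has been replaced with a larger device):

    zpool replace tank /tank0/sparsefile1 c2t0d0
    zpool status tank    # watch the resilver finish before the next swap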

Re: [zfs-discuss] Can RAIDZ disks be slices ?

2010-04-22 Thread Haudy Kazemi
Ian Collins wrote: On 04/20/10 04:13 PM, Sunil wrote: Hi, I have a strange requirement. My pool consists of 2 500GB disks in stripe which I am trying to convert into a RAIDZ setup without data loss but I have only two additional disks: 750GB and 1TB. So, here is what I thought: 1. Carve a

Re: [zfs-discuss] Fileserver help.

2010-04-18 Thread Haudy Kazemi
Any comments on NexentaStor Community/Developer Edition vs EON for NAS/small server/home server usage? It seems like Nexenta has been around longer or at least received more press attention. Are there strong reasons to recommend one over the other? (At one point usable space would have been

Re: [zfs-discuss] unsetting/resetting ZFS properties

2009-08-13 Thread Haudy Kazemi
In short, I think an alias for 'zfs inherit' could be added to 'zfs set' to make it more clear to those of us still new to ZFS. Either that, or add some additional pointers in the Properties documentation that the set command can't unset/reset properties. That would to me be confusing it
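For reference, the pairing under discussion (dataset name hypothetical):

    zfs set compression=on tank/fs     # set a local value
    zfs inherit compression tank/fs    # unset it: revert to inherited/default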

Re: [zfs-discuss] utf8only and normalization properties

2009-08-13 Thread Haudy Kazemi
Nicolas Williams wrote: On Wed, Aug 12, 2009 at 06:17:44PM -0500, Haudy Kazemi wrote: I'm wondering what are some use cases for ZFS's utf8only and normalization properties. They are off/none by default, and can only be set when the filesystem is created. When should they specifically

[zfs-discuss] utf8only and normalization properties

2009-08-12 Thread Haudy Kazemi
Hello, I'm wondering what are some use cases for ZFS's utf8only and normalization properties. They are off/none by default, and can only be set when the filesystem is created. When should they specifically be enabled and/or disabled? (i.e. Where is using them a really good idea? Where is
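Since both properties are create-time only, they look like this in practice (names hypothetical; setting normalization to anything other than none implies utf8only=on):

    zfs create -o utf8only=on -o normalization=formD tank/intl
    zfs get utf8only,normalization tank/intl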

Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-07-23 Thread Haudy Kazemi
chris wrote: Ok, so the choice for a MB boils down to: - Intel desktop MB, no ECC support This is mostly true. The exceptions are some implementations of the Socket T LGA 775 (i.e. late Pentium 4 series, and Core 2) D975X and X38 chipsets, and possibly some X48 boards as well. Intel's

Re: [zfs-discuss] Single disk parity

2009-07-10 Thread Haudy Kazemi
Richard Elling wrote: There are many error correcting codes available. RAID2 used Hamming codes, but that's just one of many options out there. Par2 uses configurable-strength Reed-Solomon to get multi-bit error correction. The par2 source is available, although from a ZFS perspective is

Re: [zfs-discuss] Single disk parity

2009-07-09 Thread Haudy Kazemi
Adding additional data protection options are commendable. On the other hand I feel there are important gaps in the existing feature set that are worthy of a higher priority, not the least of which is the automatic recovery of uberblock / transaction group problems (see Victor Latushkin's

Re: [zfs-discuss] Open Solaris version recommendation? b114, b117?

2009-07-03 Thread Haudy Kazemi
Jorgen Lundman wrote: We have been told we can have support for OpenSolaris finally, so we can move the ufs on zvol over to zfs with user quotas. Does anyone have any feel for the versions of Solaris that have zfs user quotas? We will put it on the x4540 for customers. I have run b114 for

Re: [zfs-discuss] ZFS, power failures, and UPSes (and ZFS recovery guide links)

2009-07-01 Thread Haudy Kazemi
Ian Collins wrote: David Magda wrote: On Jun 30, 2009, at 14:08, Bob Friesenhahn wrote: I have seen UPSs help quite a lot for short glitches lasting seconds, or a minute. Otherwise the outage is usually longer than the UPSs can stay up since the problem required human attention. A standby

[zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Haudy Kazemi
Hello, I've looked around Google and the zfs-discuss archives but have not been able to find a good answer to this question (and the related questions that follow it): How well does ZFS handle unexpected power failures? (e.g. environmental power failures, power supply dying, etc.) Does it

Re: [zfs-discuss] Narrow escape!

2009-06-23 Thread Haudy Kazemi
scrub: resilver completed after 5h50m with 0 errors on Tue Jun 23 05:04:18 2009. Zero errors even though other parts of the message definitely show errors? This is described here: http://docs.sun.com/app/docs/doc/819-5461/gbcve?a=view Device errors do not guarantee pool errors when redundancy
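The relevant check, for reference (pool name hypothetical):

    zpool status -v tank    # per-device error counters vs. pool-level errors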

Re: [zfs-discuss] recover data after zpool create

2009-06-19 Thread Haudy Kazemi
Kees Nuyt wrote: On Fri, 19 Jun 2009 11:50:07 PDT, stephen bond no-re...@opensolaris.org wrote: Kees, is it possible to get at least the contents of /export/home ? that is supposedly a separate file system. That doesn't mean that data is in one particular spot on the disk. The

Re: [zfs-discuss] APPLE: ZFS need bug corrections instead of new func! Or?

2009-06-19 Thread Haudy Kazemi
I think a better question would be: what kind of tests would be most promising for turning some subclass of these lost pools reported on the mailing list into an actionable bug? My first bet would be writing tools that test for ignored sync cache commands leading to lost writes, and apply them

Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-18 Thread Haudy Kazemi
Bob Friesenhahn wrote: On Wed, 17 Jun 2009, Haudy Kazemi wrote: usable with very little CPU consumed. If the system is dedicated to serving files rather than also being used interactively, it should not matter much what the CPU usage is. CPU cycles can't be stored for later use. Ultimately

Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-17 Thread Haudy Kazemi
Bob Friesenhahn wrote: On Mon, 15 Jun 2009, Bob Friesenhahn wrote: On Mon, 15 Jun 2009, Rich Teer wrote: You actually have that backwards. :-) In most cases, compression is very desirable. Performance studies have shown that today's CPUs can compress data faster

Re: [zfs-discuss] compression at zfs filesystem creation

2009-06-17 Thread Haudy Kazemi
David Magda wrote: On Tue, June 16, 2009 15:32, Kyle McDonald wrote: So the cache saves not only the time to access the disk but also the CPU time to decompress. Given this, I think it could be a big win. Unless you're in GIMP working on JPEGs, or doing some kind of MPEG video

Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting assistance.

2009-04-22 Thread Haudy Kazemi
Brad Hill wrote: I've seen reports of a recent Seagate firmware update bricking drives again. What's the output of 'zpool import' from the LiveCD? It sounds like more than 1 drive is dropping off. r...@opensolaris:~# zpool import pool: tank id: 16342816386332636568 state: FAULTED

Re: [zfs-discuss] Diverse, Dispersed, Distributed, Unscheduled RAID volumes

2008-04-25 Thread Haudy Kazemi
Brandon High wrote: On Fri, Apr 25, 2008 at 4:48 AM, kilamanjaro [EMAIL PROTECTED] wrote: Is ZFS ready today to link a set of dispersed desktop computers (diverse operating systems) into a distributed RAID volume that supports desktops It sounds like you'd want to use something like

Re: [zfs-discuss] incorrect/conflicting suggestion in error message on a faulted pool

2008-04-09 Thread Haudy Kazemi
a FAULTED pool instead of a DEGRADED pool.) Thanks, -hk Haudy Kazemi wrote: Hello, I'm writing to report what I think is an incorrect or conflicting suggestion in the error message displayed on a faulted pool that does not have redundancy (equiv to RAID0?). I ran across this while

Re: [zfs-discuss] ZFS Compression algorithms - Project Proposal

2007-07-09 Thread Haudy Kazemi
On Jul 9 2007, Domingos Soares wrote: Hi, It might be interesting to focus on compression algorithms which are optimized for particular workloads and data types, an Oracle database for example. Yes, I agree. That is what I meant when I said The study might be extended to the analysis of data