> On Nov 7, 2014, at 1:53 PM, Mark Nelson <mark.nel...@inktank.com> wrote:
> 
>> On 11/07/2014 05:01 AM, Luis Pabón wrote:
>> Hi guys,
>> I created a simple test program to visualize the I/O pattern of NetApp’s
>> open source spc-1 workload generator. SPC-1 is an enterprise OLTP type
>> workload created by the Storage Performance Council
>> (http://www.storageperformance.org/results).  Some of the results are
>> published and available here:
>> http://www.storageperformance.org/results/benchmark_results_spc1_active .
>> 
>> NetApp created an open source version of this workload and described it
>> in their publication "A portable, open-source implementation of the
>> SPC-1 workload" (
>> http://www3.lrgl.uqam.ca/csdl/proceedings/iiswc/2005/9461/00/01526014.pdf )
>> 
>> The code is available on GitHub: https://github.com/lpabon/spc1 .  All it
>> does at the moment is capture the pattern; no real I/O is generated. I
>> will be working on a command-line program to enable usage on real block
>> storage systems.  I may either extend fio or create a tool specifically
>> tailored to running this workload.
> 
> Neat!  Integration with fio could be interesting.  We could then use any of
> the engines, including the librbd one (I think there is some kind of gluster
> engine as well?)

Good point.
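For reference, here is a rough sketch of what a librbd-backed fio job might look like if the SPC-1-style access mix were expressed as fio options. The engine options (ioengine=rbd, clientname, pool, rbdname) come from fio's rbd engine; the pool/image names and the read/write mix are placeholders, not SPC-1 specification values.

```ini
; Sketch only: librbd fio job with an illustrative random-mixed workload.
[global]
ioengine=rbd
clientname=admin     ; Ceph client name (placeholder)
pool=rbd             ; RBD pool (placeholder)
rbdname=testimg      ; RBD image (placeholder)
direct=1
runtime=60
time_based

[spc1-like-mix]
rw=randrw
rwmixread=40         ; illustrative mix, not the SPC-1 spec value
bs=4k
iodepth=32
```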

> 
>> 
>> On GitHub, I have an example I/O pattern from a simulation running 50
>> million I/Os using HRRW_V2. The simulation ran with an ASU1 (Data Store)
>> size of 45GB, ASU2 (User Store) size of 45GB, and ASU3 (Log) size of 10GB.
> 
> Out of curiosity have you looked at how fast you can generate IOs before CPU 
> is a bottleneck?

Good question, I will check that once I have the I/O generation tool.
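One way to get a quick upper bound on that, even before the real tool exists, is to time how many I/O descriptors a single thread can synthesize per second with no actual I/O issued. The sketch below is a toy stand-in, not the spc1 library: it picks an ASU, a block-aligned offset, and a read/write flag, using the ASU sizes from the simulation above; the 40% read ratio is a placeholder, not the SPC-1 spec value.

```python
import random
import time

# Toy stand-in for an SPC-1-style descriptor generator (NOT the spc1
# library). ASU sizes match the simulation described in the thread.
ASUS = [
    ("asu1", 45 * 2**30),  # Data Store, 45 GB
    ("asu2", 45 * 2**30),  # User Store, 45 GB
    ("asu3", 10 * 2**30),  # Log, 10 GB
]
BLOCK = 4096

def gen_io(rng):
    """Return one (asu, offset, length, is_read) descriptor."""
    name, size = ASUS[rng.randrange(len(ASUS))]
    offset = rng.randrange(size // BLOCK) * BLOCK  # block-aligned
    is_read = rng.random() < 0.4  # placeholder read ratio
    return (name, offset, BLOCK, is_read)

def measure(n=1_000_000):
    """Generate n descriptors and return the single-thread rate (desc/s)."""
    rng = random.Random(42)
    t0 = time.perf_counter()
    for _ in range(n):
        gen_io(rng)
    return n / (time.perf_counter() - t0)

if __name__ == "__main__":
    # Prints the pure-CPU generation rate; real spc1 code will differ.
    print(f"{measure():,.0f} IO descriptors/sec (single thread)")
```

Anything a real storage system can absorb below that rate would be workload-bound rather than generator-bound; the actual spc1 code with the HRRW logic will of course be slower than this toy loop.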

> 
>> 
>> - Luis
>> 
_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
