Hieu,

If I understand the objective correctly, you are trying to reduce the
IO associated with a desktop "start of day" boot storm.  In your
proposal, you're effectively looking to extend the CloudStack secondary
storage concept to include a locally attached, SSD-based storage
device.  While that seems viable in concept, in practice with XenServer
your proposed flow could cause a bunch of issues.  Some of the
challenges I see include:

- XenServer hosts with multiple independent local storage SRs are very
rare; a quick way to check what local SRs each host has (and of what
type) is sketched after this list.  See this KB article covering how
to create such storage: http://support.citrix.com/article/CTX121313
- By default local storage is LVM based, but to enable thin
provisioning you'll want EXT3.  See this blog for how to convert to
EXT3:
http://thinworldblog.blogspot.com/2011/08/enabling-thin-provisioning-on-existing.html
- It seems like you're planning on using Storage XenMotion to move the
VHD from the golden primary storage to normal primary storage, but
that's going to move the entire VHD chain and it will do so over the
network.  Here's a blog article describing a bit about how it works:
http://blogs.citrix.com/2012/08/24/storage_xenmotion/.  I'm reasonably
certain the design parameters didn't include local->local without
network.
- If someone wants to take a snapshot of the VM, will that snapshot
then go to normal secondary storage or back to the golden master?
- To Punith's point, I *think* VM start will occur post clone, so the
clone will consume network bandwidth and the VM will then start on
local storage.
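
As a quick sanity check before building anything, here's a minimal
sketch (assuming the XenAPI Python bindings and a reachable pool
master; the address and credentials below are placeholders) that lists
each host's local SRs and whether they are lvm (thick) or ext (thin,
VHD-file based):

    import XenAPI

    # Connect to the pool master (address and credentials are placeholders).
    session = XenAPI.Session("https://192.0.2.10")
    session.xenapi.login_with_password("root", "password")
    try:
        for host_ref in session.xenapi.host.get_all():
            host_name = session.xenapi.host.get_name_label(host_ref)
            for pbd_ref in session.xenapi.host.get_PBDs(host_ref):
                sr_ref = session.xenapi.PBD.get_SR(pbd_ref)
                sr = session.xenapi.SR.get_record(sr_ref)
                # Local SRs are not shared; "lvm" is thick, "ext" is thin.
                if not sr["shared"] and sr["type"] in ("lvm", "ext"):
                    print("%s: %s (%s)" % (host_name, sr["name_label"], sr["type"]))
    finally:
        session.xenapi.session.logout()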

The big test I'd like to see first would be creating the golden master
and, from it, creating a few VMs.  Then, once you have those VMs, run
some normal XenServer operations like moving a VM within a pool, moving
that VM across pools, and assigning a home server.  If those pass, then
things might work out, but if they fail you'll need to sort things out
within the XenServer code first.  If these basic tests do work, then
I'd look at the network usage to see if things did indeed get better.
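
For reference, here's a rough sketch of the within-pool pieces of that
test using the XenAPI Python bindings (the VM name, address and
credentials are placeholders, it assumes at least two hosts in the
pool, and cross-pool migration is left out since that goes through
Storage XenMotion):

    import XenAPI

    session = XenAPI.Session("https://192.0.2.10")   # placeholder pool master
    session.xenapi.login_with_password("root", "password")
    try:
        # Assume "golden-master" is the halted VM whose VHD sits on the SSD SR.
        golden = session.xenapi.VM.get_by_name_label("golden-master")[0]

        # Fast-clone a few VMs off the golden master (clone, not a full copy).
        clones = [session.xenapi.VM.clone(golden, "vdi-test-%d" % i) for i in range(3)]

        hosts = session.xenapi.host.get_all()
        vm = clones[0]
        session.xenapi.VM.start(vm, False, False)            # boot it
        session.xenapi.VM.pool_migrate(vm, hosts[1], {})     # move within the pool
        session.xenapi.VM.set_affinity(clones[1], hosts[0])  # assign a home server
    finally:
        session.xenapi.session.logout()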

-tim

On Thu, Jun 5, 2014 at 8:11 AM, Punith S <punit...@cloudbyte.com> wrote:
> hi Hieu,
>
> after going through your "Golden Primary Storage" proposal, my
> understanding is that you are creating an SSD golden PS to hold the
> parent VHD (essentially the template copied from secondary storage)
> and a normal primary storage for the ROOT volumes (child VHDs) of the
> corresponding VMs.
>
> From the flowchart, I have the following questions:
>
> 1. Since you are having a problem with slow VM boot times, will the
> VMs boot on the golden PS, i.e. while cloning?
>      If so, the spawning of the VMs will always be fast.
>
>     But I see you are starting the VM after moving the cloned VHD to the
> ROOT PS and pointing the child VHD to its parent VHD on the GOLDEN PS;
>     hence there will be network traffic between these two primary
> storages, which will obviously degrade the VM's performance for its
> entire lifetime.
>
> 2. What if someone removes the golden primary storage containing the
> parent VHD (template) that all the child VHDs in the root primary
> storage point to?
>    If so, all running VMs will crash immediately, since their child
> VHDs' parent has been removed.
>
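> For what it's worth, the parent/child dependency is easy to inspect on
> the host. A minimal sketch, assuming an ext/VHD-based SR and vhd-util
> on the XenServer host (the path below is made up):
>
>     import subprocess
>
>     # Placeholder path to a child VHD on the normal (ROOT) primary storage.
>     child = "/var/run/sr-mount/<sr-uuid>/<vdi-uuid>.vhd"
>
>     # "vhd-util query -p" prints the parent locator of a VHD; if that parent
>     # lives on the golden PS and is removed, every child like this one breaks.
>     p = subprocess.Popen(["vhd-util", "query", "-n", child, "-p"],
>                          stdout=subprocess.PIPE)
>     parent = p.communicate()[0].strip()
>     print("parent of %s -> %s" % (child, parent))
>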
> thanks
>
>
> On Thu, Jun 5, 2014 at 8:59 AM, Hieu LE <hieul...@gmail.com> wrote:
>
>> Mike, Punith,
>>
>> Please review the "Golden Primary Storage" proposal. [1]
>>
>> Thank you.
>>
>> [1]:
>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Golden+Primary+Storage
>>
>>
>> On Wed, Jun 4, 2014 at 10:32 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com> wrote:
>>
>>> Daan helped out with this. You should be good to go now.
>>>
>>>
>>> On Tue, Jun 3, 2014 at 8:50 PM, Hieu LE <hieul...@gmail.com> wrote:
>>>
>>> > Hi Mike,
>>> >
>>> > Could you please give me edit/create permission on the ASF Jira/Wiki
>>> > Confluence? I cannot add a new Wiki page.
>>> >
>>> > My Jira ID: hieulq
>>> > Wiki: hieulq89
>>> > Review Board: hieulq
>>> >
>>> > Thanks !
>>> >
>>> >
>>> > On Wed, Jun 4, 2014 at 9:17 AM, Mike Tutkowski <
>>> > mike.tutkow...@solidfire.com
>>> > > wrote:
>>> >
>>> > > Hi,
>>> > >
>>> > > Yes, please feel free to add a new Wiki page for your design.
>>> > >
>>> > > Here is a link to applicable design info:
>>> > >
>>> > > https://cwiki.apache.org/confluence/display/CLOUDSTACK/Design
>>> > >
>>> > > Also, feel free to ask more questions and have me review your design.
>>> > >
>>> > > Thanks!
>>> > > Mike
>>> > >
>>> > >
>>> > > On Tue, Jun 3, 2014 at 7:29 PM, Hieu LE <hieul...@gmail.com> wrote:
>>> > >
>>> > > > Hi Mike,
>>> > > >
>>> > > > You are right, performance will decrease over time because write
>>> > > > IOPS will always end up on the slower storage pool.
>>> > > >
>>> > > > In our case, we are using CloudStack integrated into a VDI solution
>>> > > > to provide the pooled VM type [1]. So maybe my approach can bring a
>>> > > > better UX for users, with lower boot times...
>>> > > >
>>> > > > A short summary of the design changes follows:
>>> > > > - A VM will be deployed with golden primary storage if the primary
>>> > > > storage is marked golden and the VM's template is also marked golden.
>>> > > > - Choose the best deploy destination considering both the golden
>>> > > > primary storage and the normal root-volume primary storage; the
>>> > > > chosen host must be able to access both storage pools.
>>> > > > - A new XenServer plug-in for modifying the VHD parent ID (a rough
>>> > > > sketch follows below).
>>> > > >
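>>> > > > A very rough sketch of what such a plug-in might look like (only an
>>> > > > illustration: the plug-in and function names, argument names and
>>> > > > paths are made up, and it simply shells out to vhd-util, which does
>>> > > > no safety checks of its own):
>>> > > >
>>> > > >     #!/usr/bin/env python
>>> > > >     # /etc/xapi.d/plugins/goldenvhd  (hypothetical plug-in name)
>>> > > >     import subprocess
>>> > > >     import XenAPIPlugin
>>> > > >
>>> > > >     def set_parent(session, args):
>>> > > >         # args come from the caller via host.call_plugin(); both
>>> > > >         # paths are supplied by the CloudStack side.
>>> > > >         child = args["childPath"]    # child VHD on the ROOT primary storage
>>> > > >         parent = args["parentPath"]  # golden VHD on the golden (SSD) storage
>>> > > >         p = subprocess.Popen(["vhd-util", "modify", "-n", child, "-p", parent],
>>> > > >                              stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>>> > > >         out, err = p.communicate()
>>> > > >         if p.returncode != 0:
>>> > > >             raise Exception("vhd-util modify failed: %s" % err)
>>> > > >         return "true"
>>> > > >
>>> > > >     if __name__ == "__main__":
>>> > > >         XenAPIPlugin.dispatch({"setParent": set_parent})
>>> > > >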
>>> > > > Is there some place for me to submit my design and code? Can I
>>> > > > write a new proposal in the CS wiki?
>>> > > >
>>> > > > [1]:
>>> > > > http://support.citrix.com/proddocs/topic/xendesktop-rho/cds-choose-scheme-type-rho.html
>>> > > >
>>> > > >
>>> > > > On Mon, Jun 2, 2014 at 9:04 PM, Mike Tutkowski <
>>> > > > mike.tutkow...@solidfire.com
>>> > > > > wrote:
>>> > > >
>>> > > > > It is an interesting idea. If the constraints you face at your
>>> > company
>>> > > > can
>>> > > > > be corrected somewhat by implementing this, then you should go for
>>> > it.
>>> > > > >
>>> > > > > It sounds like writes will be placed on the slower storage pool.
>>> This
>>> > > > means
>>> > > > > as you update OS components, those updates will be placed on the
>>> > slower
>>> > > > > storage pool. As such, your performance is likely to somewhat
>>> > decrease
>>> > > > over
>>> > > > > time (as more and more writes end up on the slower storage pool).
>>> > > > >
>>> > > > > That may be OK for your use case(s), though.
>>> > > > >
>>> > > > > You'll have to update the storage-pool orchestration logic to take
>>> > this
>>> > > > new
>>> > > > > scheme into account.
>>> > > > >
>>> > > > > Also, we'll have to figure out how this ties into storage tagging
>>> (if
>>> > > at
>>> > > > > all).
>>> > > > >
>>> > > > > I'd be happy to review your design and code.
>>> > > > >
>>> > > > >
>>> > > > > On Mon, Jun 2, 2014 at 1:54 AM, Hieu LE <hieul...@gmail.com>
>>> wrote:
>>> > > > >
>>> > > > > > Thanks Mike and Punith for the quick replies.
>>> > > > > >
>>> > > > > > Both solutions you gave here are absolutely correct. But as I
>>> > > > > > mentioned in the first email, I am looking for a better solution
>>> > > > > > for the current infrastructure at my company.
>>> > > > > >
>>> > > > > > Creating a high-IOPS primary storage using storage tags is good,
>>> > > > > > but it wastes a lot of disk capacity. For example, suppose I only
>>> > > > > > have a 1TB SSD and need to deploy 100 VMs from a 100GB template.
>>> > > > > >
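>>> > > > > > (With full clones, that is roughly 100 x 100GB = 10TB of ROOT
>>> > > > > > disks, about ten times the SSD capacity; with a shared golden
>>> > > > > > image, the SSD only has to hold the single 100GB parent VHD, and
>>> > > > > > the copy-on-write child VHDs grow on the normal pool instead.)
>>> > > > > >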
>>> > > > > > So I am thinking about a solution where a high-IOPS primary
>>> > > > > > storage stores only golden images (master images), and the child
>>> > > > > > image of each VM is stored in another normal (NFS, iSCSI...)
>>> > > > > > storage. In this case, with a 1TB SSD primary storage I can store
>>> > > > > > as many golden images as I need.
>>> > > > > >
>>> > > > > > I have also tested this with a 256GB SSD mounted on XenServer
>>> > > > > > 6.2.0, 2TB of 10,000RPM local storage, and 6TB of NFS shared
>>> > > > > > storage over a 1Gb network. The IOPS of VMs with the golden image
>>> > > > > > (master image) on SSD and the child image on NFS increased by more
>>> > > > > > than 30-40% compared with VMs that have both the golden image and
>>> > > > > > the child image on NFS. The boot time of each VM also decreased
>>> > > > > > (because the golden image on SSD only reduces READ IOPS).
>>> > > > > >
>>> > > > > > Do you think this approach is OK?
>>> > > > > >
>>> > > > > >
>>> > > > > > On Mon, Jun 2, 2014 at 12:50 PM, Mike Tutkowski <
>>> > > > > > mike.tutkow...@solidfire.com> wrote:
>>> > > > > >
>>> > > > > > > Thanks, Punith - this is similar to what I was going to say.
>>> > > > > > >
>>> > > > > > > Any time a set of CloudStack volumes share IOPS from a common
>>> > pool,
>>> > > > you
>>> > > > > > > cannot guarantee IOPS to a given CloudStack volume at a given
>>> > time.
>>> > > > > > >
>>> > > > > > > Your choices at present are:
>>> > > > > > >
>>> > > > > > > 1) Use managed storage (where you can create a 1:1 mapping
>>> > between
>>> > > a
>>> > > > > > > CloudStack volume and a volume on a storage system that has
>>> QoS).
>>> > > As
>>> > > > > > Punith
>>> > > > > > > mentioned, this requires that you purchase storage from a
>>> vendor
>>> > > who
>>> > > > > > > provides guaranteed QoS on a volume-by-volume basis AND has this
>>> this
>>> > > > > > integrated
>>> > > > > > > into CloudStack.
>>> > > > > > >
>>> > > > > > > 2) Create primary storage in CloudStack that is not managed,
>>> but
>>> > > has
>>> > > > a
>>> > > > > > high
>>> > > > > > > number of IOPS (ex. using SSDs). You can then storage tag this
>>> > > > primary
>>> > > > > > > storage and create Compute and Disk Offerings that use this
>>> > storage
>>> > > > tag
>>> > > > > > to
>>> > > > > > > make sure their volumes end up on this storage pool (primary
>>> > > > storage).
>>> > > > > > This
>>> > > > > > > will still not guarantee IOPS on a CloudStack volume-by-volume
>>> > > basis,
>>> > > > > but
>>> > > > > > > it will at least place the CloudStack volumes that need a
>>> better
>>> > > > chance
>>> > > > > > of
>>> > > > > > > getting higher IOPS on a storage pool that could provide the
>>> > > > necessary
>>> > > > > > > IOPS. A big downside here is that you want to watch how many
>>> > > > CloudStack
>>> > > > > > > volumes get deployed on this primary storage because you'll
>>> need
>>> > to
>>> > > > > > > essentially over-provision IOPS in this primary storage to
>>> > increase
>>> > > > the
>>> > > > > > > probability that each and every CloudStack volume that uses
>>> this
>>> > > > > primary
>>> > > > > > > storage gets the necessary IOPS (and isn't as likely to suffer
>>> > from
>>> > > > the
>>> > > > > > > Noisy Neighbor Effect). You should be able to tell CloudStack
>>> to
>>> > > only
>>> > > > > > use,
>>> > > > > > > say, 80% (or whatever) of the storage you're providing to it
>>> (so
>>> > as
>>> > > > to
>>> > > > > > > increase your effective IOPS per GB ratio). This
>>> > over-provisioning
>>> > > of
>>> > > > > > IOPS
>>> > > > > > > to control Noisy Neighbors is avoided in option 1. In that
>>> > > situation,
>>> > > > > you
>>> > > > > > > only provision the IOPS and capacity you actually need. It is
>>> a
>>> > > much
>>> > > > > more
>>> > > > > > > sophisticated approach.
>>> > > > > > >
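>>> > > > > > > For what it's worth, the option 2 tagging looks roughly like
>>> > > > > > > this from the API side. A minimal sketch using the third-party
>>> > > > > > > "cs" Python client (the endpoint, keys, offering names, sizes
>>> > > > > > > and the "gold" tag are all placeholders):
>>> > > > > > >
>>> > > > > > >     from cs import CloudStack   # third-party client: pip install cs
>>> > > > > > >
>>> > > > > > >     api = CloudStack(endpoint="http://mgmt.example.com:8080/client/api",
>>> > > > > > >                      key="API_KEY", secret="SECRET_KEY")
>>> > > > > > >
>>> > > > > > >     # Compute offering whose ROOT volumes may only land on primary
>>> > > > > > >     # storage carrying the "gold" storage tag.
>>> > > > > > >     api.createServiceOffering(name="gold-compute",
>>> > > > > > >                               displaytext="2x2GHz, 4GB, gold storage",
>>> > > > > > >                               cpunumber=2, cpuspeed=2000, memory=4096,
>>> > > > > > >                               tags="gold")
>>> > > > > > >
>>> > > > > > >     # Matching disk offering for DATA volumes that should also land
>>> > > > > > >     # on the gold pool.
>>> > > > > > >     api.createDiskOffering(name="gold-data",
>>> > > > > > >                            displaytext="50GB on gold storage",
>>> > > > > > >                            disksize=50, tags="gold")
>>> > > > > > >
>>> > > > > > > (The SSD-backed primary storage itself gets the same "gold" tag
>>> > > > > > > when it is added.)
>>> > > > > > >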
>>> > > > > > > Thanks,
>>> > > > > > > Mike
>>> > > > > > >
>>> > > > > > >
>>> > > > > > > On Sun, Jun 1, 2014 at 11:36 PM, Punith S <
>>> > punit...@cloudbyte.com>
>>> > > > > > wrote:
>>> > > > > > >
>>> > > > > > > > hi Hieu,
>>> > > > > > > >
>>> > > > > > > > Your problem is the bottleneck we storage vendors see in the cloud:
>>> > > > > > > > the VMs in the cloud are not guaranteed IOPS from the primary
>>> > > > > > > > storage. In your case I'm assuming you are running 1000 VMs on a
>>> > > > > > > > Xen cluster where all the VMs' disks lie on the same primary NFS
>>> > > > > > > > storage mounted to the cluster, so you won't get dedicated IOPS for
>>> > > > > > > > each VM, since every VM is sharing the same storage. To solve this
>>> > > > > > > > issue in CloudStack, we third-party vendors have implemented storage
>>> > > > > > > > plugins (namely CloudByte, SolidFire, etc.) to support managed
>>> > > > > > > > storage (dedicated volumes with guaranteed QoS for each VM), where
>>> > > > > > > > we map each root disk (VDI) or data disk of a VM to one NFS or iSCSI
>>> > > > > > > > share coming out of a pool. We are also proposing a new feature in
>>> > > > > > > > 4.5 to change volume IOPS on the fly, so you can increase or
>>> > > > > > > > decrease your root disk IOPS while booting or at peak times. But to
>>> > > > > > > > use these plugins you have to buy our storage solutions.
>>> > > > > > > >
>>> > > > > > > > If not, you can try creating an NFS share out of an SSD pool and
>>> > > > > > > > creating a primary storage in CloudStack from it, named golden
>>> > > > > > > > primary storage, with a specific tag like "gold", and then create a
>>> > > > > > > > compute offering for your template with the storage tag "gold". All
>>> > > > > > > > the VMs you create will then sit on this gold primary storage with
>>> > > > > > > > high IOPS, with other data disks on other primary storage. But even
>>> > > > > > > > here you cannot guarantee QoS at the VM level.
>>> > > > > > > >
>>> > > > > > > > thanks
>>> > > > > > > >
>>> > > > > > > >
>>> > > > > > > > On Mon, Jun 2, 2014 at 10:12 AM, Hieu LE <
>>> hieul...@gmail.com>
>>> > > > wrote:
>>> > > > > > > >
>>> > > > > > > >> Hi all,
>>> > > > > > > >>
>>> > > > > > > >> There are some problems when deploying a large number of VMs at
>>> > > > > > > >> my company with CloudStack. All VMs are deployed from the same
>>> > > > > > > >> template (e.g. Windows 7) and the quantity is approximately
>>> > > > > > > >> ~1000 VMs. The problem is low IOPS and low VM performance (about
>>> > > > > > > >> ~10-11 IOPS, and boot times are very long). My company's storage
>>> > > > > > > >> is SAN/NAS with NFS, and we run XenServer 6.2.0. All XenServer
>>> > > > > > > >> nodes have a standard server HDD RAID.
>>> > > > > > > >>
>>> > > > > > > >> I have found some possible solutions, such as:
>>> > > > > > > >>
>>> > > > > > > >>    - Enable XenServer IntelliCache, with some tweaks in the
>>> > > > > > > >>    CloudStack code to deploy and start VMs in IntelliCache mode.
>>> > > > > > > >>    But this solution will transfer all IOPS from shared storage to
>>> > > > > > > >>    local storage, and hence affects and limits some CloudStack
>>> > > > > > > >>    features.
>>> > > > > > > >>    - Buy some expensive storage solutions and networking to
>>> > > > > > > >>    increase IOPS. Nah..
>>> > > > > > > >>
>>> > > > > > > >> So, I am thinking about a new feature that may increase the IOPS
>>> > > > > > > >> and performance of VMs:
>>> > > > > > > >>
>>> > > > > > > >>    1. Separate the golden image onto a high-IOPS partition: buy a
>>> > > > > > > >>    new SSD, plug it into the XenServer host, and deploy a new VM in
>>> > > > > > > >>    NFS storage WITH its golden image on this new SSD partition.
>>> > > > > > > >>    This can reduce READ IOPS on the shared storage and decrease VM
>>> > > > > > > >>    boot time. (Currently, a VM deployed on XenServer always has its
>>> > > > > > > >>    master image (the golden image, in VMware terms) in the same
>>> > > > > > > >>    storage repository as its differencing image (child image).) We
>>> > > > > > > >>    can do this trick by tweaking the VHD header file with a new
>>> > > > > > > >>    XenServer plug-in.
>>> > > > > > > >>    2. Create a golden primary storage and a VM template that
>>> > > > > > > >>    enables this feature.
>>> > > > > > > >>    3. All VMs deployed from a template with this feature enabled
>>> > > > > > > >>    will have their golden image stored in the golden primary
>>> > > > > > > >>    storage (SSD or some other high-IOPS partition), and their
>>> > > > > > > >>    differencing image (child image) stored in another normal
>>> > > > > > > >>    primary storage.
>>> > > > > > > >>
>>> > > > > > > >> This new feature would not transfer all IOPS from shared storage
>>> > > > > > > >> to local storage (because the high-IOPS partition can itself be
>>> > > > > > > >> another high-IOPS shared storage) and requires less money than
>>> > > > > > > >> buying a new storage solution.
>>> > > > > > > >>
>>> > > > > > > >> What do you think? If possible, may I write a proposal in the
>>> > > > > > > >> CloudStack wiki?
>>> > > > > > > >>
>>> > > > > > > >> BRs.
>>> > > > > > > >>
>>> > > > > > > >> Hieu Lee
>>> > > > > > > >>
>>> > > > > > > >> --
>>> > > > > > > >> -----BEGIN GEEK CODE BLOCK-----
>>> > > > > > > >> Version: 3.1
>>> > > > > > > >> GCS/CM/IT/M/MU d-@? s+(++):+(++) !a C++++(++++)$
>>> > ULC++++(++)$ P
>>> > > > > > > >> L++(+++)$ E
>>> > > > > > > >> !W N* o+ K w O- M V- PS+ PE++ Y+ PGP+ t 5 X R tv+
>>> b+(++)>+++
>>> > DI-
>>> > > > D+
>>> > > > > G
>>> > > > > > > >> e++(+++) h-- r(++)>+++ y-
>>> > > > > > > >> ------END GEEK CODE BLOCK------
>>> > > > > > > >>
>>> > > > > > > >
>>> > > > > > > >
>>> > > > > > > >
>>> > > > > > > > --
>>> > > > > > > > regards,
>>> > > > > > > >
>>> > > > > > > > punith s
>>> > > > > > > > cloudbyte.com
>>> > > > > > > >
>>> > > > > > >
>>> > > > > > >
>>> > > > > > >
>>> > > > > > > --
>>> > > > > > > *Mike Tutkowski*
>>> > > > > > > *Senior CloudStack Developer, SolidFire Inc.*
>>> > > > > > > e: mike.tutkow...@solidfire.com
>>> > > > > > > o: 303.746.7302
>>> > > > > > > Advancing the way the world uses the cloud
>>> > > > > > > <http://solidfire.com/solution/overview/?video=play>*™*
>>> > > > > > >
>>> > > > > >
>>> > > > > >
>>> > > > > >
>>> > > > > > --
>>> > > > > > -----BEGIN GEEK CODE BLOCK-----
>>> > > > > > Version: 3.1
>>> > > > > > GCS/CM/IT/M/MU d-@? s+(++):+(++) !a C++++(++++)$ ULC++++(++)$ P
>>> > > > > L++(+++)$
>>> > > > > > E
>>> > > > > > !W N* o+ K w O- M V- PS+ PE++ Y+ PGP+ t 5 X R tv+ b+(++)>+++ DI-
>>> > D+ G
>>> > > > > > e++(+++) h-- r(++)>+++ y-
>>> > > > > > ------END GEEK CODE BLOCK------
>>> > > > > >
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > > --
>>> > > > > *Mike Tutkowski*
>>> > > > > *Senior CloudStack Developer, SolidFire Inc.*
>>> > > > > e: mike.tutkow...@solidfire.com
>>> > > > > o: 303.746.7302
>>> > > > > Advancing the way the world uses the cloud
>>> > > > > <http://solidfire.com/solution/overview/?video=play>*™*
>>> > > > >
>>> > > >
>>> > > >
>>> > > >
>>> > > > --
>>> > > > -----BEGIN GEEK CODE BLOCK-----
>>> > > > Version: 3.1
>>> > > > GCS/CM/IT/M/MU d-@? s+(++):+(++) !a C++++(++++)$ ULC++++(++)$ P
>>> > > L++(+++)$
>>> > > > E
>>> > > > !W N* o+ K w O- M V- PS+ PE++ Y+ PGP+ t 5 X R tv+ b+(++)>+++ DI- D+
>>> G
>>> > > > e++(+++) h-- r(++)>+++ y-
>>> > > > ------END GEEK CODE BLOCK------
>>> > > >
>>> > >
>>> > >
>>> > >
>>> > > --
>>> > > *Mike Tutkowski*
>>> > > *Senior CloudStack Developer, SolidFire Inc.*
>>> > > e: mike.tutkow...@solidfire.com
>>> > > o: 303.746.7302
>>> > > Advancing the way the world uses the cloud
>>> > > <http://solidfire.com/solution/overview/?video=play>*™*
>>> > >
>>> >
>>> >
>>> >
>>> > --
>>> > -----BEGIN GEEK CODE BLOCK-----
>>> > Version: 3.1
>>> > GCS/CM/IT/M/MU d-@? s+(++):+(++) !a C++++(++++)$ ULC++++(++)$ P
>>> L++(+++)$
>>> > E
>>> > !W N* o+ K w O- M V- PS+ PE++ Y+ PGP+ t 5 X R tv+ b+(++)>+++ DI- D+ G
>>> > e++(+++) h-- r(++)>+++ y-
>>> > ------END GEEK CODE BLOCK------
>>> >
>>>
>>>
>>>
>>> --
>>> *Mike Tutkowski*
>>> *Senior CloudStack Developer, SolidFire Inc.*
>>> e: mike.tutkow...@solidfire.com
>>> o: 303.746.7302
>>> Advancing the way the world uses the cloud
>>> <http://solidfire.com/solution/overview/?video=play>*™*
>>>
>>
>>
>>
>> --
>> -----BEGIN GEEK CODE BLOCK-----
>> Version: 3.1
>> GCS/CM/IT/M/MU d-@? s+(++):+(++) !a C++++(++++)$ ULC++++(++)$ P L++(+++)$
>> E !W N* o+ K w O- M V- PS+ PE++ Y+ PGP+ t 5 X R tv+ b+(++)>+++ DI- D+ G
>> e++(+++) h-- r(++)>+++ y-
>> ------END GEEK CODE BLOCK------
>>
>
>
>
> --
> regards,
>
> punith s
> cloudbyte.com
