Re: [one-users] Support for RBD2

2014-02-19 Thread Mark Gergely
Dear Jaime,

At SZTAKI, we are actively using this patch in a semi-production environment, and 
so far it is working like a charm. I will let you know if any problems arise.
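
For anyone else who would like to try the branch Jaime mentions below, a minimal 
sketch of getting the code (the repository URL and checkout steps are assumptions; 
only the branch name comes from the quoted mail):

  git clone https://github.com/OpenNebula/one.git
  cd one
  git checkout feature-2568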

All the best,
Mark Gergely
MTA SZTAKI LPDS

On 19 Feb 2014, at 11:39, Jaime Melis  wrote:

> Hi,
> 
> we have finished developing support for RBD2:
> http://dev.opennebula.org/issues/2568
> 
> It is mostly based on Bill Campbell's contribution.
> 
> Is there anyone who could test it and give us feedback on the driver?
> 
> The drivers are committed in the 'feature-2568' branch.
> 
> cheers,
> Jaime
> 
> -- 
> Jaime Melis
> Project Engineer
> OpenNebula - Flexible Enterprise Cloud Made Simple
> www.OpenNebula.org | jme...@opennebula.org

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


[one-users] describe terminated instances with econe server

2014-02-05 Thread Mark Gergely
Dear All,

I believe there is a bug in the handling of the 
:describe_with_terminated_instances: econe server configuration variable. The 
variable is evaluated in the include_terminated_instances? method (in 
instance.rb), which returns 
@config[:describe_with_terminated_instances] || DESCRIBE_WITH_TERMINATED_INSTANCES. 
Since the DESCRIBE_WITH_TERMINATED_INSTANCES constant is always true, the 
configuration parameter never affects the return value; it is always true.
I think the constant is redundant and should be removed.
There is also a stray equals sign in the default configuration file: 
:describe_with_terminated_instances: = true
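
A quick way to confirm both issues on an installed system (the paths are 
assumptions for a standard system-wide installation; adjust them for 
self-contained setups):

  # the always-true constant and the check that uses it
  grep -rn "DESCRIBE_WITH_TERMINATED_INSTANCES" /usr/lib/one/ruby/cloud/econe/
  # the stray '=' in the shipped configuration file
  grep -n "describe_with_terminated_instances" /etc/one/econe.conf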

All the best,
Mark Gergely
MTA SZTAKI LPDS
___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] econe-describe-images command in ON 4.4

2014-02-05 Thread Mark Gergely
Dear All,

I suggest adding a configuration variable to the econe-server.conf file that 
controls this behaviour. We really got used to the way it worked previously, 
where every image could be listed or started through the EC2 server regardless 
of how it was uploaded or tagged.
I propose that if the describe_only_ec2_images variable is true, the server 
should behave as in 4.4.1, and if it is false, it should list every image that 
is not a snapshot or an EBS volume.
Tell me what you think; if it looks fine to you, I can prepare a patch.
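
A rough sketch of how the switch could look (the variable name is the one 
proposed above; the file location, default value and restart command are 
assumptions):

  # in /etc/one/econe.conf:
  #   :describe_only_ec2_images: true    # 4.4.1 behaviour, list only images tagged EC2_AMI=YES
  #   :describe_only_ec2_images: false   # list every image that is not a snapshot or an EBS volume

  econe-server stop && econe-server start   # restart so the new value is read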

All the best,
Mark Gergely
MTA SZTAKI LPDS

On 05 Feb 2014, at 15:08, Daniel Molina  wrote:

> Hi Hyunwoo,
> 
> 
> On 5 February 2014 13:20, Hyun Woo Kim  wrote:
> Hi, 
> 
> I have recently deployed ON 4.4 and have been testing EC2 (econe-) commands.
> 
> First, I could run econe-upload successfully.
> The new image can be seen with the oneimage list command,
> but econe-describe-images just shows an empty list.
> (This command works in our ON 3.2 EC2 system.)
> 
> What has changed? Or am I missing a configuration step?
> 
> In OpenNebula 4.4, after using econe-upload, or to make any existing OpenNebula 
> image available through econe, running econe-register is necessary. This command 
> adds the EC2_AMI=YES tag to the image template; the image will then be listed by 
> describe-images and you will be able to use it through the EC2 API.
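> 
> A minimal sketch of that workflow (the image path and id are placeholders):
> 
>   $ econe-upload /var/tmp/ttylinux.img    # creates the image and prints its id
>   $ econe-register <image-id>             # adds EC2_AMI=YES to the image template
>   $ econe-describe-images                 # the image is now listed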
> 
> More information on the one-4.4 changes can be found here:
> http://docs.opennebula.org/stable/release_notes44/compatibility.html#ec2-server
> 
> Cheers
>  
> 
> Thanks very much
> Hyunwoo 
> FermiCloud Project
> 
> 
> 
> 
> 
> 
> -- 
> --
> Daniel Molina
> Project Engineer
> OpenNebula - Flexible Enterprise Cloud Made Simple
> www.OpenNebula.org | dmol...@opennebula.org | @OpenNebula

___
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


Re: [one-users] Possible race condition in iSCSI datastore.

2012-12-06 Thread Mark Gergely
Dear Ruben, Alain,

Our improved iSCSI driver set, which we proposed earlier, should solve this issue. 
As mentioned in the ticket, it makes it possible to start hundreds of 
non-persistent virtual machines simultaneously, with the TM concurrency level 
set to 15.
You can check the details at: http://dev.opennebula.org/issues/1592
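
For context, the root of the race Alain describes below is that the driver first 
reads the highest existing TID and only afterwards creates the target, so two 
concurrent clones can compute the same TID. A minimal sketch of one way to 
serialize that step on the target host with flock (the function name and lock 
file path are illustrative only, not the actual driver code; the tgtadm pipeline 
is the one from Alain's log):

  # Pick the next TID and create the target under a host-wide lock, so that
  # concurrent clone operations cannot grab the same TID.
  create_target_locked() {
      NEW_IQN="$1"
      (
          flock -x 200
          TID=$(sudo tgtadm --lld iscsi --op show --mode target | \
                grep "Target" | tail -n 1 | \
                awk '{split($2,tmp,":"); print tmp[1]+1;}')
          TID=${TID:-1}
          sudo tgtadm --lld iscsi --op new --mode target --tid "$TID" \
              --targetname "$NEW_IQN"
          sudo tgtadm --lld iscsi --op bind --mode target --tid "$TID" -I ALL
      ) 200>/var/lock/one-tgtadm.lock
  }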

All the best,
Mark Gergely
MTA-SZTAKI LPDS

On 2012.12.06., at 20:01, "Ruben S. Montero"  wrote:

> Hi Alain,
> 
> You are totally right, this may be a problem when instantiating
> multiple VMs at the same time. I've filed an issue to look for the
> best way to generate the TID [1].
> 
> We'd be interested in updating the tgtadm_next_tid function in
> scripts_common.sh. Also, if the tgt server is getting overloaded by
> these simultaneous deployments, there are several ways to limit the
> concurrency of the TM (e.g. the -t option in oned.conf).
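> 
> For reference, that option lives in the TM_MAD section of oned.conf, roughly
> like this (the exact driver list depends on the installation):
> 
>   TM_MAD = [
>       executable = "one_tm",
>       arguments  = "-t 15 -d dummy,lvm,shared,qcow2,ssh,vmfs,iscsi" ]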
> 
> THANKS for the feedback!
> 
> Ruben
> 
> [1]  http://dev.opennebula.org/issues/1682
> 
> On Thu, Dec 6, 2012 at 1:52 PM, Alain Pannetrat
>  wrote:
>> Hi all,
>> 
>> I'm new to OpenNebula and this mailing list, so forgive me if I
>> stumble over a topic that may have already been discussed.
>> 
>> I'm currently discovering OpenNebula 3.8.1 with a simple 3-node
>> system: a control node, a compute node and a datastore node
>> (iscsi+lvm).
>> 
>> I have been testing the bulk instantiation of virtual machines in
>> Sunstone, where I initiate the creation of 8 virtual machines in
>> parallel. I have noticed that between 2 and 4 of them fail to
>> instantiate correctly, with the following typical error message:
>> 
>> 08 2012 [TM][I]: Command execution fail:
>> /var/lib/one/remotes/tm/iscsi/clone
>> iqn.2012-02.org.opennebula:san.vg-one.lv-one-26
>> compute.admin.lan:/var/lib/one//datastores/0/111/disk.0 111 101
>> Thu Dec  6 14:40:08 2012 [TM][E]: clone: Command "set -e
>> Thu Dec  6 14:40:08 2012 [TM][I]: set -x
>> Thu Dec  6 14:40:08 2012 [TM][I]:
>> Thu Dec  6 14:40:08 2012 [TM][I]: # get size
>> Thu Dec  6 14:40:08 2012 [TM][I]: SIZE=$(sudo lvs --noheadings -o
>> lv_size "/dev/vg-one/lv-one-26")
>> Thu Dec  6 14:40:08 2012 [TM][I]:
>> Thu Dec  6 14:40:08 2012 [TM][I]: # create lv
>> Thu Dec  6 14:40:08 2012 [TM][I]: sudo lvcreate -L${SIZE} vg-one -n
>> lv-one-26-111
>> Thu Dec  6 14:40:08 2012 [TM][I]:
>> Thu Dec  6 14:40:08 2012 [TM][I]: # clone lv with dd
>> Thu Dec  6 14:40:08 2012 [TM][I]: sudo dd if=/dev/vg-one/lv-one-26
>> of=/dev/vg-one/lv-one-26-111 bs=64k
>> Thu Dec  6 14:40:08 2012 [TM][I]:
>> Thu Dec  6 14:40:08 2012 [TM][I]: # new iscsi target
>> Thu Dec  6 14:40:08 2012 [TM][I]: TID=$(sudo tgtadm --lld iscsi --op
>> show --mode target | grep "Target" | tail -n 1 |
>>  awk '{split($2,tmp,":"); print tmp[1]+1;}')
>> Thu Dec  6 14:40:08 2012 [TM][I]:
>> Thu Dec  6 14:40:08 2012 [TM][I]: sudo tgtadm --lld iscsi --op new
>> --mode target --tid $TID  --targetname
>> iqn.2012-02.org.opennebula:san.vg-one.lv-one-26-111
>> Thu Dec  6 14:40:08 2012 [TM][I]: sudo tgtadm --lld iscsi --op bind
>> --mode target --tid $TID -I ALL
>> Thu Dec  6 14:40:08 2012 [TM][I]: sudo tgtadm --lld iscsi --op new
>> --mode logicalunit --tid $TID  --lun 1 --backing-store
>> /dev/vg-one/lv-one-26-111
>> Thu Dec  6 14:40:08 2012 [TM][I]: sudo tgt-admin --dump |sudo tee
>> /etc/tgt/targets.conf > /dev/null 2>&1" failed: + sudo lvs
>> --noheadings -o lv_size /dev/vg-one/lv-one-26
>> Thu Dec  6 14:40:08 2012 [TM][I]: 131072+0 records in
>> Thu Dec  6 14:40:08 2012 [TM][I]: 131072+0 records out
>> Thu Dec  6 14:40:08 2012 [TM][I]: 8589934592 bytes (8.6 GB) copied,
>> 898.903 s, 9.6 MB/s
>> Thu Dec  6 14:40:08 2012 [TM][I]: tgtadm: this target already exists
>> Thu Dec  6 14:40:08 2012 [TM][E]: Error cloning
>> compute.admin.lan:/dev/vg-one/lv-one-26-111
>> Thu Dec  6 14:40:08 2012 [TM][I]: ExitCode: 22
>> Thu Dec  6 14:40:08 2012 [TM][E]: Error executing image transfer
>> script: Error cloning compute.admin.lan:/dev/vg-one/lv-one-26-111
>> Thu Dec  6 14:40:09 2012 [DiM][I]: New VM state is FAILED
>> 
>> After adding traces to the code, I found that there seems to be a race
>> condition in /var/lib/one/remotes/tm/iscsi/clone, where the following
>> commands get executed:
>> 
>> TID=\$($SUDO $(tgtadm_next_tid))
>> $SUDO $(tgtadm_target_new "\$TID" "$NEW_IQN")
>> 
>> These commands are typically expanded to something like this:
>>