Re: [ceph-users] Which API to use to write and read objects

2014-01-13 Thread Wido den Hollander

On 01/14/2014 08:40 AM, Randy Breunling wrote:

New to Ceph...so I'm on the learning curve here.
I have been through a lot of the documentation but am still confused on
one thing.

What is the API or interface to use if we just want to write and read
objects to Ceph object storage...and don't necessarily care about
compatibility with Amazon S3 or OpenStack Swift?
Do we use librados?


Yes. For librados various language bindings are available:
* Python
* PHP
* Java
* Erlang
* Ruby
* Perl

Be aware that librados does NOT stripe your objects. RBD, CephFS and the 
RADOS Gateway all stripe over 4MB objects (configurable); with librados 
you have to implement striping yourself.
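Wido's caveat can be made concrete: a raw librados client that wants RBD-like behaviour has to do the chunking arithmetic itself before issuing per-object writes. A minimal sketch of that arithmetic follows — the index-based naming implied here is illustrative, not the exact scheme RBD uses:

```python
def stripe_extents(offset, length, stripe_size=4 * 1024 * 1024):
    """Split one logical write into per-object extents of at most
    stripe_size bytes, the way RBD/CephFS/RGW stripe over 4MB RADOS
    objects. Returns a list of (object_index, object_offset, count)."""
    extents = []
    while length > 0:
        idx = offset // stripe_size      # which 4MB object this lands in
        obj_off = offset % stripe_size   # offset within that object
        count = min(length, stripe_size - obj_off)
        extents.append((idx, obj_off, count))
        offset += count
        length -= count
    return extents

# A 6 MiB write starting 1 MiB into the image touches two objects:
MB = 1024 * 1024
print(stripe_extents(1 * MB, 6 * MB))
```

Each extent would then map to one `ioctx.write(object_name_for(idx), data, obj_off)` call against librados.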




Or maybe asked a different way...when would someone want or need to use
the S3- or Swift-compatible API interfaces to Ceph (RADOS)?



For easier integration. Existing applications which already talk S3 can 
simply use Ceph without requiring changes to the application.



Thanks...

--Randy


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on


[ceph-users] Which API to use to write and read objects

2014-01-13 Thread Randy Breunling
New to Ceph...so I'm on the learning curve here.
I have been through a lot of the documentation but am still confused on one
thing.

What is the API or interface to use if we just want to write and read
objects to Ceph object storage...and don't necessarily care about
compatibility with Amazon S3 or OpenStack Swift?
Do we use librados?

Or maybe asked a different way...when would someone want or need to use the
S3- or Swift-compatible API interfaces to Ceph (RADOS)?

Thanks...

--Randy


Re: [ceph-users] crush choose firstn vs. indep

2014-01-13 Thread ZHOU Yuan
Hi Loic, thanks for the education!

I’m also trying to understand the new ‘indep’ mode. Is this new mode
designed for Ceph erasure coding only? It seems that all of the data in a
3-copy system are equivalent, so this new algorithm should also work there?


Sincerely, Yuan


On Mon, Jan 13, 2014 at 7:37 AM, Loic Dachary  wrote:

>
>
> On 12/01/2014 15:55, Dietmar Maurer wrote:
> > From the docs:
> >
> >
> >
> > step [choose|chooseleaf] [firstn|indep]  
> >
> >
> >
> > What exactly is the difference between ‘firstn’ and ‘indep’?
> >
> Hi,
>
> For Ceph releases up to Emperor[1], firstn is used and I'm not aware of a
> use case requiring indep. As part of the effort to implement erasure coded
> pools, firstn[2] and indep[3] were separated in two functions. The firstn
> method is best suited for replicated pools. The indep method tries to
> minimize the position changes in case an OSD becomes unavailable. For
> instance, if indep finds
>
>   [1,2,3,4]
>
> and after a while 3 becomes unavailable, it is very likely to replace it
> with
>
>   [1,2,5,4]
>
> It matters to erasure coded pools because
>
>   [4,5,2,1]
>
> (i.e. the same OSDs but in different positions), implies more I/O. Another
> difference is that in the case of a mapping failure (i.e. unable to find
> the required number of OSDs), firstn will return a short list ( for
> instance [1,2,3] when 4 are required ) and indep will return a list with a
> placeholder at the missing position ( for instance [1,2,CRUSH_ITEM_NONE,4]
> ).
>
> Cheers
>
> [1] implementation in releases up to Emperor
> https://github.com/ceph/ceph/blob/v0.72/src/crush/mapper.c#L295
> [2] firstn https://github.com/ceph/ceph/blob/v0.74/src/crush/mapper.c#L295
> [3] indep https://github.com/ceph/ceph/blob/v0.74/src/crush/mapper.c#L459
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
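The positional difference Loic describes can be illustrated with a toy model — this is not CRUSH itself, just the selection semantics of the two modes, with hypothetical helper names:

```python
CRUSH_ITEM_NONE = None  # stand-in for the real CRUSH_ITEM_NONE sentinel

def remap_firstn(mapping, up):
    """firstn-style: drop unavailable OSDs and append replacements at the
    end, so surviving OSDs can shift position in the result."""
    result = [osd for osd in mapping if osd in up]
    for osd in up:
        if len(result) == len(mapping):
            break
        if osd not in result:
            result.append(osd)
    return result

def remap_indep(mapping, up):
    """indep-style: replace each unavailable OSD in place; surviving OSDs
    keep their positions, so fewer erasure-code shards have to move. With
    no spare available, a placeholder is left at the missing position."""
    spares = [osd for osd in up if osd not in mapping]
    return [osd if osd in up
            else (spares.pop(0) if spares else CRUSH_ITEM_NONE)
            for osd in mapping]

up = [1, 2, 4, 5]                      # OSD 3 has failed; OSD 5 is free
print(remap_firstn([1, 2, 3, 4], up))  # -> [1, 2, 4, 5]: OSD 4 moved position
print(remap_indep([1, 2, 3, 4], up))   # -> [1, 2, 5, 4]: only slot 2 changed
```

With no spare OSD at all, the indep-style result keeps the placeholder, matching the `[1,2,CRUSH_ITEM_NONE,4]` example above, while the firstn-style result simply comes back short.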


Re: [ceph-users] Inconsistent pgs after update to 0.73 - > 0.74

2014-01-13 Thread Mark Kirkwood

On 10/01/14 17:16, Mark Kirkwood wrote:

On 10/01/14 16:18, David Zafman wrote:
With pool size of 1 the scrub can still do some consistency 
checking.  These are things like missing attributes, on-disk size 
doesn’t match attribute size, non-clone without a head, expected 
clone.  You could check the osd logs to see what they were.


The pg below only had 1 object in error and it was detected after 
2013-12-13 15:38:13.283741 which was the last clean scrub.





Right - this is what I'm seeing in the log (it is not immediately clear 
why there is a problem, and why it needs to be seemingly trivially fixed 
by a repair):


2014-01-10 15:53:52.654065 7f17ff94f700  0 log [ERR] : 2.3f scrub stat 
mismatch, got 44/44 objects, 0/0 clones, 44/0 dirty, 0/0 whiteouts, 
180355088/180355088 bytes.
2014-01-10 15:53:52.654079 7f17ff94f700  0 log [ERR] : 2.3f scrub 1 
errors





I think I see why this message (and hence the error) is appearing in the 
log. In commit 43465d (merge pull request #950 from wip-pg-stat) extra 
checks were added to osd/ReplicatedPG.cc for dirty and whiteout. In the 
log I have a dirty stat mismatch - so that explains why I didn't see any 
errors until I updated to this year's code (the merge was Jan 1).  This 
still leaves the question of why the stat counts were wrong to start with...


Regards

Mark


Re: [ceph-users] admin user for the object gateway?

2014-01-13 Thread Blair Nilsson
ok, it was a bug, and it was fixed :)

Running a newer version of Emperor, and all is well with the world.


On Fri, Jan 10, 2014 at 1:54 PM, Blair Nilsson wrote:

> Emperor
>
> Mostly our evaluation is going pretty well, but things like this are
> driving us crazy. Less crazy than when we were evaluating Swift, however :)
>
> The part I am most worried about is that we have no idea how to even start
> finding out what is happening here. The admin interface has very little
> documentation, and when it is obviously working (since we can get some
> answers out of it) but has just decided it doesn't want to let us see
> some things, it is a bit of a problem.
>
>
> On Wed, Jan 8, 2014 at 4:29 PM, Wido den Hollander  wrote:
>
>> On 01/08/2014 04:25 AM, Blair Nilsson wrote:
>>
>>> nope, doesn't work...
>>>
>>> I have an admin user... with the right caps.
>>>
>>>
>>>
>>> { "user_id": "admin2",
>>>"display_name": "Admin 2admin",
>>>"email": "",
>>>"suspended": 0,
>>>"max_buckets": 1000,
>>>"auid": 0,
>>>"subusers": [],
>>>"keys": [
>>>  { "user": "admin2",
>>>"access_key": "1DNQ2FK80XQZJMB14W1C",
>>>"secret_key": "BJDKNhMnCc4Cib+3QIdSGMR4yOE0YVJVS9HCuAmW"},
>>>  { "user": "admin2",
>>>"access_key": "KXH0BM1IQ9CP24IB9IP9",
>>>"secret_key": "wbtya+dX505X7zdfKKh926nbbRtBnLW8ghHAQo9j"}],
>>>"swift_keys": [],
>>>"caps": [
>>>  { "type": "buckets",
>>>"perm": "*"},
>>>  { "type": "usage",
>>>"perm": "*"},
>>>  { "type": "users",
>>>"perm": "*"}],
>>>"op_mask": "read, write, delete",
>>>"default_placement": "",
>>>"placement_tags": [],
>>>"bucket_quota": { "enabled": false,
>>>"max_size_kb": -1,
>>>"max_objects": -1}}
>>>
>>> but
>>>
>>> ./s3curl.pl  --id=admin --
>>>
>>> http://162.243.33.180/admin/user
>>>
>>> gives me
>>>
>>> {"Code":"AccessDenied"}
>>>
>>> however... I CAN use
>>>
>>> ./s3curl.pl  --id=admin --
>>>
>>> http://162.243.33.180/admin/bucket
>>>
>>> and it gives me.
>>>
>>> ["files.wyaeld.com","private.wyaeld.com"]
>>>
>>>
>>> which are the 2 buckets in the system.
>>>
>>> Any ideas on what is going on?
>>>
>>>
>> So which Ceph version do you use? I've been running into the same
>> problem.
>>
>> I could however query for a user, but a PUT request to create a user
>> would always give me AccessDenied.
>>
>> I'm running 0.67.5 Dumpling.
>>
>> Wido
>>
>>
>>> On Fri, Dec 20, 2013 at 7:47 PM, JuanJose Galvez
>>> mailto:juanjose.gal...@inktank.com>>
>>> wrote:
>>>
>>> On 12/19/2013 2:02 PM, Blair Nilsson wrote:
>>>
 How do find or create a user that can use the admin operations for
 the object gateway?

 The manual says "Some operations require that the user holds
 special administrative capabilities."

 But I can't find if there is a pre setup user with these, or how
 to create one myself.

>>> You would need to create the user. As an example I just created the
>>> following on my cluster:
>>>
>>> radosgw-admin user create --uid=admin --display-name="JuanJose
>>> Galvez" --caps="usage=read, write; users=read, write; buckets=read,
>>> write;"
>>>
>>> You'll notice in the output that it has the following capabilities
>>> which normal users do not have:
>>>
>>>"caps": [
>>>  { "type": "buckets",
>>>"perm": "*"},
>>>  { "type": "usage",
>>>"perm": "*"},
>>>  { "type": "users",
>>>"perm": "*"}],
>>>
>>> I hope that helps. If you need more information on the API and what
>>> caps are needed for which functions that is found over here:
>>> http://ceph.com/docs/master/radosgw/adminops/
>>>





 ___
 ceph-users mailing list
 ceph-users@lists.ceph.com  
 http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

>>>
>>>
>>> --
>>> JuanJose "JJ" Galvez
>>> Professional Services
>>> Inktank Storage, Inc.
>>> LinkedIn:http://www.linkedin.com/in/jjgalvez
>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com 
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>>
>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>
>> --
>> Wido den Hollander
>> 42on B.V.
>>
>> Phone: +31 (0)20 700 9902
>> Skype: contact42on
>>
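For anyone reproducing the requests in this thread without s3curl: s3curl is just doing AWS signature-version-2 signing of the `/admin/...` resource. A hedged sketch of building the same Authorization header in Python — the keys are placeholders, and which caps each admin endpoint needs is in the adminops documentation linked above:

```python
import base64
import hashlib
import hmac
from email.utils import formatdate

def sign_v2(method, resource, date, secret_key):
    """AWS signature-version-2 for a request with no Content-MD5 or
    Content-Type headers, the same string-to-sign s3curl builds:
    METHOD \n MD5 \n Content-Type \n Date \n CanonicalizedResource."""
    string_to_sign = f"{method}\n\n\n{date}\n{resource}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Placeholder credentials; with real ones you would send e.g.
#   GET /admin/user?uid=admin2&format=json HTTP/1.1
#   Date: <date>
#   Authorization: AWS <access_key>:<signature>
access_key, secret_key = "ACCESSKEY", "SECRETKEY"
date = formatdate(usegmt=True)
sig = sign_v2("GET", "/admin/user", date, secret_key)
print(f"Authorization: AWS {access_key}:{sig}")
```

Note that for v2 signing the query string (`?uid=...`) is not part of the canonicalized resource, so a signature mismatch shows up as the same `AccessDenied` the thread is chasing — worth ruling out before suspecting caps.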

[ceph-users] Howto investigate and restore "zombie" images

2014-01-13 Thread Guilhem Bonnefille
Hi,

As it is my really first message here, here is a short description of
me^Wmy project.

I'm a total newbie. I'm currently playing with Ceph in order to discover
its features. My project is to use it in a sort of friendnet based on small
nodes connected via ADSL (xDSL) links.

As a really first experience, I started a cluster with 3 nodes and a mon on
a single machine. Specs are ridiculously small: 256 MB of RAM, 2 OSDs on 8
GB USB keys and 1 OSD on an 80 GB LVM partition.

My last experiment was a big failure. While trying to format an RBD
partition (mapped on the same machine) I encountered a kernel panic. I
tried twice, creating dedicated images, and the same thing occurred: a
kernel panic.

The real issue is not the data loss, but the current situation: I have a
pool with 3~4 images that I can neither delete nor map.

test:~# rados ls --pool icwt
rb.0.1071.74b0dc51.
...
partage.rbd
...
rbd_directory
rb.0.139d.2ae8944a.00c0
rb.0.139d.2ae8944a.00a0


# rbd info -p icwt partage --secret /etc/ceph/client.admin
rbd: error opening image partage: (95) Operation not supported
2014-01-13 22:19:43.136065 b66c4740 -1 librbd: Error listing snapshots:
(95) Operation not supported

# rbd map icwt/partage --secret /etc/ceph/client.admin
rbd: add failed: (34) Numerical result out of range

test:~# rbd rm icwt/partage --secret /etc/ceph/client.admin
2014-01-13 22:22:03.458105 b679f740 -1 librbd: Error listing snapshots:
(95) Operation not supported
Removing image: 0% complete...failed.
rbd: delete error: (95) Operation not supported
2014-01-13 22:22:03.503661 b679f740 -1 librbd: error getting id of image


I imagine it is something really trivial, but I do not understand what's
going wrong. Can someone give me a tip or something to read?

Thanks in advance.
-- 
Guilhem BONNEFILLE
-=- JID: gu...@im.apinc.org MSN: guilhem_bonnefi...@hotmail.com
-=- mailto:guilhem.bonnefi...@gmail.com
-=- http://nathguil.free.fr/
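The `partage.rbd` and `rb.0.*` names in the `rados ls` output suggest format-1 images, which are stored as a `<name>.rbd` header object plus `rb.0.<id>.*` data objects. When `rbd rm` refuses, one common last resort is deleting those objects directly with `rados`. The sketch below is a hypothetical dry-run helper that only prints the commands it would run — flipping `DRY_RUN` irreversibly deletes data, so verify the block-name prefix against `rados ls` first:

```python
import shlex
import subprocess

DRY_RUN = True  # flip only once you are sure of the object prefix

def cleanup_v1_image(pool, image, block_prefix):
    """Print (or, if DRY_RUN is False, run) the rados commands that
    remove a format-1 RBD image's header and data objects.
    block_prefix is e.g. 'rb.0.1071.74b0dc51' from 'rados ls'."""
    cmds = [
        # the image header object
        ["rados", "-p", pool, "rm", f"{image}.rbd"],
        # every data object starting with the block-name prefix
        ["sh", "-c",
         f"rados -p {pool} ls | grep '^{block_prefix}\\.' | "
         f"xargs -n1 rados -p {pool} rm"],
    ]
    for cmd in cmds:
        print(" ".join(shlex.quote(c) for c in cmd))
        if not DRY_RUN:
            subprocess.check_call(cmd)

cleanup_v1_image("icwt", "partage", "rb.0.1071.74b0dc51")
```

The image may also still be listed in `rbd_directory` afterwards; the safer first step is always to ask on the list (as here) or try a newer rbd client before hand-deleting anything.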


[ceph-users] mon not binding to public interface

2014-01-13 Thread Jeff Bachtel
I've got a cluster with 3 mons, all of which are binding solely to a 
cluster network IP, and neither to 0.0.0.0:6789 nor a public IP. I 
hadn't noticed the problem until now because it makes little difference 
in how I normally use Ceph (rbd and radosgw), but now that I'm trying to 
use cephfs it's obviously suboptimal.


[global]
  auth cluster required = cephx
  auth service required = cephx
  auth client required = cephx
  keyring = /etc/ceph/keyring
  cluster network = 10.100.10.0/24
  public network = 10.100.0.0/21
  public addr = 10.100.0.150
  cluster addr = 10.100.10.1
 
  fsid = de10594a-0737-4f34-a926-58dc9254f95f


[mon]
  cluster network = 10.100.10.0/24
  public network = 10.100.0.0/21
  mon data = /var/lib/ceph/mon/mon.$id

[mon.controller1]
  host = controller1
  mon addr = 10.100.10.1:6789
  public addr = 10.100.0.150
  cluster addr = 10.100.10.1
  cluster network = 10.100.10.0/24
  public network = 10.100.0.0/21

And then with /usr/bin/ceph-mon -i controller1 --debug_ms 12 --pid-file 
/var/run/ceph/mon.controller1.pid -c /etc/ceph/ceph.conf I get in logs


2014-01-13 14:19:13.578458 7f195e6d97a0  0 ceph version 0.72.2 
(a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-mon, pid 7559
2014-01-13 14:19:13.641639 7f195e6d97a0 10 -- :/0 rank.bind 10.100.10.1:6789/0
2014-01-13 14:19:13.641668 7f195e6d97a0 10 accepter.accepter.bind
2014-01-13 14:19:13.642773 7f195e6d97a0 10 accepter.accepter.bind bound to 
10.100.10.1:6789/0
2014-01-13 14:19:13.642800 7f195e6d97a0  1 -- 10.100.10.1:6789/0 learned my 
addr 10.100.10.1:6789/0
2014-01-13 14:19:13.642808 7f195e6d97a0  1 accepter.accepter.bind my_inst.addr 
is 10.100.10.1:6789/0 need_addr=0

With no mention of public addr (10.100.2.1) or public network 
(10.100.0.0/21) found. mds (on this host) and osd (on other hosts) bind 
to 0.0.0.0 and a public IP, respectively.


At this point public/cluster addr/network are WAY overspecified in 
ceph.conf, but the problem appeared with far less specification.


Any ideas? Thanks,

Jeff


Re: [ceph-users] radosgw swift java jars

2014-01-13 Thread raj kumar
Thanks.  There is also jclouds (https://github.com/jclouds/jclouds).

I would like to know if anybody has used it.


On Mon, Jan 13, 2014 at 9:08 PM, Chmouel Boudjnah wrote:

> fyi: there is this http://javaswift.org/ as well.
>
> Chmouel.
>
>
> On Mon, Jan 6, 2014 at 7:05 PM, raj kumar wrote:
>
>> Hi, I could not find all the necessary jars required to run the java
>> program.  Is there any place to get all the jars for both Swift and S3? Thanks.
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>


[ceph-users] Peering or connections problem

2014-01-13 Thread Mordur Ingolfsson
Hi List,

I have a Ceph setup consisting of 3 nodes: 1 mon and 2 OSDs.  It seems
that both my OSDs are in but down. The OSD processes on the OSD nodes
exist and are listening, and I am able to successfully telnet to and
from all nodes on the respective ports. Still, my PGs are all stuck in
this status:

pg 2.2 is stuck unclean since forever, current state creating, last
acting []


Here is my ceph.config:

http://pastebin.com/SpQA38Em

and here is what 'ceph report'  has to say.

http://pastebin.com/3gPJhpnH


This is what the osd logs show:

2014-01-13 17:23:32.638235 7f98486a7700  5 osd.0 16 tick
2014-01-13 17:23:32.638270 7f98486a7700 10 osd.0 16 do_waiters -- start
2014-01-13 17:23:32.638273 7f98486a7700 10 osd.0 16 do_waiters -- finish
2014-01-13 17:23:32.657880 7f9836e63700 20 osd.0 16 update_osd_stat
osd_stat(1057 MB used, 29646 MB avail, 30704 MB total, peers []/[] op
hist [])
2014-01-13 17:23:32.657935 7f9836e63700  5 osd.0 16 heartbeat:
osd_stat(1057 MB used, 29646 MB avail, 30704 MB total, peers []/[] op
hist [])
2014-01-13 17:23:33.638437 7f98486a7700  5 osd.0 16 tick
2014-01-13 17:23:33.638475 7f98486a7700 10 osd.0 16 do_waiters -- start
2014-01-13 17:23:33.638479 7f98486a7700 10 osd.0 16 do_waiters -- finish
2014-01-13 17:23:33.758194 7f9836e63700 20 osd.0 16 update_osd_stat
osd_stat(1057 MB used, 29646 MB avail, 30704 MB total, peers []/[] op
hist [])
2014-01-13 17:23:33.758257 7f9836e63700  5 osd.0 16 heartbeat:
osd_stat(1057 MB used, 29646 MB avail, 30704 MB total, peers []/[] op
hist [])
2014-01-13 17:23:34.638658 7f98486a7700  5 osd.0 16 tick
2014-01-13 17:23:34.638692 7f98486a7700 10 osd.0 16 do_waiters -- start
2014-01-13 17:23:34.638694 7f98486a7700 10 osd.0 16 do_waiters -- finish
2014-01-13 17:23:35.638936 7f98486a7700  5 osd.0 16 tick
.
.
.


and this is what the mon log says:

2014-01-13 17:25:21.670754 7f10474b4700 11 mon.ceph0@0(leader) e1 tick
2014-01-13 17:25:21.670792 7f10474b4700 10 mon.ceph0@0(leader).pg v8 v8:
192 pgs: 192 creating; 0 bytes data, 0 kB used, 0 kB / 0 kB avail
2014-01-13 17:25:21.670821 7f10474b4700 10 mon.ceph0@0(leader).mds e1
e1: 0/0/1 up
2014-01-13 17:25:21.670831 7f10474b4700 10 mon.ceph0@0(leader).osd e7
e7: 2 osds: 0 up, 2 in
2014-01-13 17:25:21.670839 7f10474b4700 20 mon.ceph0@0(leader).osd e7
osd.0 laggy halflife 3600 decay_k -0.000192541 down for 5.000466 decay
0.999038
2014-01-13 17:25:21.670876 7f10474b4700 10 mon.ceph0@0(leader).osd e7
tick entire containing rack subtree for osd.0 is down; resetting timer
2014-01-13 17:25:21.670881 7f10474b4700 20 mon.ceph0@0(leader).osd e7
osd.1 laggy halflife 3600 decay_k -0.000192541 down for 5.000466 decay
0.999038
2014-01-13 17:25:21.670890 7f10474b4700 10 mon.ceph0@0(leader).osd e7
tick entire containing rack subtree for osd.1 is down; resetting timer
2014-01-13 17:25:21.670895 7f10474b4700  1
mon.ceph0@0(leader).paxos(paxos active c 1..260) is_readable
now=2014-01-13 17:25:21.670896 lease_expire=0.00 has v0 lc 260
2014-01-13 17:25:21.670917 7f10474b4700  1
mon.ceph0@0(leader).paxos(paxos active c 1..260) is_readable
now=2014-01-13 17:25:21.670918 lease_expire=0.00 has v0 lc 260
2014-01-13 17:25:21.670927 7f10474b4700  1
mon.ceph0@0(leader).paxos(paxos active c 1..260) is_readable
now=2014-01-13 17:25:21.670928 lease_expire=0.00 has v0 lc 260
2014-01-13 17:25:21.670934 7f10474b4700 10 mon.ceph0@0(leader).log v36 log
2014-01-13 17:25:21.670939 7f10474b4700 10 mon.ceph0@0(leader).auth v207
auth
2014-01-13 17:25:21.670951 7f10474b4700 20 mon.ceph0@0(leader) e1
sync_trim_providers

This is what a 'ps aux| grep ceph | grep ceph'  yields on  each
respective node:

mon.0

root  6567  0.0  0.4 156984 13084 ?Sl   04:41   0:07
/usr/bin/ceph-mon -i ceph0 --pid-file /var/run/ceph/mon.ceph0.pid -c
/etc/ceph/ceph.conf

osd.0
root  3435  0.0  0.6 488344 20140 ?Ssl  04:41   0:26
/usr/bin/ceph-osd -i 0 --pid-file /var/run/ceph/osd.0.pid -c
/etc/ceph/ceph.conf

osd.1
root  2926  0.0  0.6 487080 18912 ?Ssl  04:41   0:29
/usr/bin/ceph-osd -i 1 --pid-file /var/run/ceph/osd.1.pid -c
/etc/ceph/ceph.conf

This is what 'netstat  -tapn | grep -i listen | grep ceph'  yields on
each respective node:

mon.0
tcp0  0 192.168.10.200:6789 0.0.0.0:*  
LISTEN  6567/ceph-mon  

osd.0
tcp0  0 10.10.10.201:6800   0.0.0.0:*  
LISTEN  3435/ceph-osd  
tcp0  0 192.168.10.201:6800 0.0.0.0:*  
LISTEN  3435/ceph-osd  
tcp0  0 192.168.10.201:6801 0.0.0.0:*  
LISTEN  3435/ceph-osd  
tcp0  0 10.10.10.201:6801   0.0.0.0:*  
LISTEN  3435/ceph-osd  
tcp0  0 192.168.10.201:6802 0.0.0.0:*  
LISTEN  3435/ceph-osd

osd.1
tcp0  0 10.10.10.202:6800   0.0.0.0:*  
LISTEN  2926/ceph-osd  
tcp0  0 192.168.10.202:6800 0.0.0.0:*

Re: [ceph-users] radosgw swift java jars

2014-01-13 Thread Chmouel Boudjnah
fyi: there is this http://javaswift.org/ as well.

Chmouel.


On Mon, Jan 6, 2014 at 7:05 PM, raj kumar  wrote:

> Hi, I could not find all the necessary jars required to run the java
> program.  Is there any place to get all the jars for both Swift and S3? Thanks.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


[ceph-users] Problem in getting response from radosgw using swift api with Range header specified

2014-01-13 Thread dalchand choudhary
I am facing a problem when requesting the Ceph radosgw using the Swift API.
The connection is getting closed after reading 512 KB from the stream. This
problem only occurs if I send a GET object request with a Range header.
Here is the request and response:

Request--->

GET http://rgw.mydomain.com/swift/v1/user_1/test.webm HTTP/1.1
X-Auth-Token: ***
Range: 0-

Response--->

HTTP/1.1 206 Partial Content
Date: Mon, 13 Jan 2014 05:58:34 GMT
Server: Apache/2.2.22 (Ubuntu)
Content-Range: bytes 0-0/11760217
Accept-Ranges: bytes
Last-Modified: Wed, 08 Jan 2014 10:03:33 GMT
etag: 836fe9e58b2961c8ff34d7d8fe0fc530
Content-Length: 11760217
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: video/webm

Also I can see the Content-Range header is incorrect. Is it a known issue
or am I missing something?

I'm using HttpClient 4.3.1 (Apache) for making requests and the error it
throws is :

Exception in thread "main" org.apache.http.ConnectionClosedException:
Premature end of Content-Length delimited message body (expected:
11760217; received: 524288
at 
org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:180)
at 
org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137)
at TestSwiftAPI.main(TestSwiftAPI.java:51)
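One thing worth checking on the client side: the HTTP specification requires the `bytes=` unit in the request header (`Range: bytes=0-`), whereas the request above sends `Range: 0-`, which servers may ignore or mishandle. A small sketch for building a valid header and sanity-checking the Content-Range that comes back — `range_header`/`parse_content_range` are hypothetical helpers, not part of any client library:

```python
import re

def range_header(start, end=None):
    """Build a syntactically valid HTTP Range header value."""
    return f"bytes={start}-{'' if end is None else end}"

def parse_content_range(value):
    """Parse 'bytes first-last/total' into a (first, last, total) tuple;
    total is None when the server reports '*' (length unknown)."""
    m = re.fullmatch(r"bytes (\d+)-(\d+)/(\d+|\*)", value)
    if not m:
        raise ValueError(f"malformed Content-Range: {value!r}")
    first, last = int(m.group(1)), int(m.group(2))
    total = None if m.group(3) == "*" else int(m.group(3))
    return first, last, total

print(range_header(0))  # the well-formed version of the request above
first, last, total = parse_content_range("bytes 0-0/11760217")
# the response in this thread claims a 1-byte range yet an 11760217-byte
# Content-Length, which is exactly the inconsistency the client trips on:
print(total, "bytes total,", last - first + 1, "byte(s) in range")
```

A check like this makes the mismatch visible before the stream read fails: `Content-Range: bytes 0-0/11760217` advertises one byte while `Content-Length: 11760217` advertises the whole object.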


-- 
Dal Chand Choudhary
Member of Technical Team
LT Research
Bangalore, India.


Re: [ceph-users] ceph start error

2014-01-13 Thread Alfredo Deza
On Sun, Jan 12, 2014 at 8:55 AM, You, Rong  wrote:
> Hi,
>
> When I use the ceph.conf as follows:
>
> [mon]
>
> mon data = /var/lib/ceph/mon/ceph-$id
>
> [mon.test-a]
>
> host = test-a
>
> mon addr = 10.239.82.163:6789
>
> [mon.test-b]
>
> host = test-b
>
> mon addr = 10.239.82.95:6789
>
> #MDS#
>
> [mds]
>
> keyring = /etc/ceph/keyring.$name
>
>
>
>
>
> OSD
>
> [osd]
>
>osd data = /mnt/osd/osd$id
>
> osd journal size = 0
>
> filestore xattr use omap = true
>
> keyring = /etc/ceph/keyring.$name
>
> osd mkfs type = xfs
>
> osd mount options xfs = rw
>
> osd mkfs options xfs = -f
>
>
>
> Then execute command:
>
> mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring.admin --mkfs
>
> It will returns:
>
> temp dir is /tmp/mkcephfs.ZqK0QIxgiF
>
> preparing monmap in /tmp/mkcephfs.ZqK0QIxgiF/monmap
>
> /usr/bin/monmaptool --create --clobber --add test-a 10.239.82.163:6789 --add
> test-b 10.239.82.95:6789 --print /tmp/mkcephfs.ZqK0QIxgiF/monmap
>
> /usr/bin/monmaptool: monmap file /tmp/mkcephfs.ZqK0QIxgiF/monmap
>
> /usr/bin/monmaptool: generated fsid fc99b196-b136-4a3b-89c5-59dcca57d8ab
>
> epoch 0
>
> fsid fc99b196-b136-4a3b-89c5-59dcca57d8ab
>
> last_changed 2014-01-13 05:54:36.615483
>
> created 2014-01-13 05:54:36.615483
>
> 0: 10.239.82.95:6789/0 mon.test-b
>
> 1: 10.239.82.163:6789/0 mon.test-a
>
> /usr/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.ZqK0QIxgiF/monmap (2
> monitors)
>
> WARNING: mkcephfs is now deprecated in favour of ceph-deploy. Please see:
> http://github.com/ceph/ceph-deploy
>
> Building generic osdmap from /tmp/mkcephfs.ZqK0QIxgiF/conf
>
> /usr/bin/osdmaptool: osdmap file '/tmp/mkcephfs.ZqK0QIxgiF/osdmap'
>
> /usr/bin/osdmaptool: writing epoch 1 to /tmp/mkcephfs.ZqK0QIxgiF/osdmap
>
> Generating admin key at /tmp/mkcephfs.ZqK0QIxgiF/keyring.admin
>
> creating /tmp/mkcephfs.ZqK0QIxgiF/keyring.admin
>
> Building initial monitor keyring
>
> cat: /tmp/mkcephfs.ZqK0QIxgiF/key.*: No such file or directory
>
> WARNING: mkcephfs is now deprecated in favour of ceph-deploy. Please see:
> http://github.com/ceph/ceph-deploy
>
>
>
>
>
> Then run the command: service ceph -a start
>
> It also hang on during ceph-create-keys
>
>
>
> === mon.test-a ===
>
> Starting Ceph mon.test-a on test-a...
>
> failed: 'ulimit -n 131072;  /usr/bin/ceph-mon -i test-a --pid-file
> /var/run/ceph/mon.test-a.pid -c /etc/ceph/ceph.conf '
>
> Starting ceph-create-keys on test-a...
>
> === mon.test-b ===
>
> Starting Ceph mon.test-b on test-b...
>
> failed: 'ssh test-b ulimit -n 131072;  /usr/bin/ceph-mon -i test-b
> --pid-file /var/run/ceph/mon.test-b.pid -c /etc/ceph/ceph.conf '
>
> Starting ceph-create-keys on test-b...
>
>
>
> In addition, all the commands execute on test-a. Am I doing something wrong
> in the configuration, or is it something else?
>
> From: Ирек Фасихов [mailto:malm...@gmail.com]
> Sent: Saturday, January 11, 2014 10:29 PM
>
>
> To: You, Rong
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] ceph start error
>
>
>
> You need to add [mon.NAME] in the configuration file.
>
>
>
> Example:
>
> [mon.kvm01]
>
> mon_addr = 192.168.100.1:6789
>
> host = kvm01
>
>
>
> [mon.kvm03]
>
> mon_addr = 192.168.100.3:6789
>
> host = kvm03
>
>
>
> [mon.kvm02]
>
> mon_addr = 192.168.100.2:6789
>
> host = kvm02
>
>
>
>
>
> 2014/1/11 You, Rong 
>
> But I have put the ceph.conf in the directory /etc/ceph/. How can I resolve
> it?
>
>
>
> From: Ирек Фасихов [mailto:malm...@gmail.com]
> Sent: Saturday, January 11, 2014 10:24 PM
> To: You, Rong
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] ceph start error
>
>
>
> global_init: unable to open config file from search list /etc/ceph/ceph.conf
>
>
>
> 2014/1/11 You, Rong 
>
> Hi,
>
>  I encounter a problem when starting up the ceph cluster.
>
>  When run the command: service ceph -a start,
>
> The process always hangs. The error result is:
>
>
>
> [root@test-c ceph]# service ceph -a start
>
> === mon.0 ===
>
> Starting Ceph mon.0 on test-c...
>
> failed: 'ulimit -n 131072;  /usr/bin/ceph-mon -i 0 --pid-file
> /var/run/ceph/mon.0.pid -c /etc/ceph/ceph.conf '
>
> Starting ceph-create-keys on test-c...
>
> === mon.1 ===
>
> Starting Ceph mon.1 on test-d...
>
> global_init: unable to open config file from search list /etc/ceph/ceph.conf
>
> failed: 'ssh test-d ulimit -n 131072;  /usr/bin/ceph-mon -i 1 --pid-file
> /var/run/ceph/mon.1.pid -c /etc/ceph/ceph.conf '
>
> Starting ceph-create-keys on test-d...
>
>
>
> The process hangs and waits. Does anybody know the reason for this? And how
> can I solve such a problem?

You are using a deprecated tool (mkcephfs) that is no longer supported
and is known to have issues that are not being fixed. This is also visible
in the log output you pasted.

If you could try with ceph-deploy (the supported tool) we might be
able to help you out if something
is not working right.



>
>
>
>

Re: [ceph-users] 3 node setup with pools size=3

2014-01-13 Thread Wolfgang Hennerbichler


On 01/13/2014 12:39 PM, Dietmar Maurer wrote:
> I am still playing around with a small setup using 3 Nodes, each running
> 4 OSDs (=12 OSDs).
> 
>  
> 
> When using a pool size of 3, I get the following behavior when one OSD
> fails:
> * the affected PGs get marked active+degraded
> 
> * there is no data movement/backfill

Works as designed, if you have the default crush map in place (all
replicas must be on DIFFERENT hosts). You need to tweak your crush map
in this case, but be aware that this can have serious effects (think of
all your data residing on 3 disks on a single host).
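The tweak Wolfgang alludes to is the leaf type in the replication rule: the default rule separates replicas by host (`step chooseleaf firstn 0 type host`), so with size=3 on 3 hosts there is no other host to backfill onto. A rule that only separates by OSD would look roughly like the sketch below (rule name and ruleset number are arbitrary; and, as noted above, this allows all three replicas to land on one host):

```
rule replicated_osd {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type osd
    step emit
}
```

Compile it into the decompiled CRUSH map, set it on the pool, and the degraded PGs should be able to re-replicate onto the surviving OSDs of the same host.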

Wolfgang


-- 
http://www.wogri.com


[ceph-users] 3 node setup with pools size=3

2014-01-13 Thread Dietmar Maurer
I am still playing around with a small setup using 3 Nodes, each running 4 OSDs 
(=12 OSDs).

When using a pool size of 3, I get the following behavior when one OSD fails:

* the affected PGs get marked active+degraded
* there is no data movement/backfill

Note: using 'ceph osd crush tunables optimal'

2014-01-13 12:04:30.983293 mon.0 10.10.10.201:6789/0 278 : [INF] osd.0 marked 
itself down
2014-01-13 12:04:31.071366 mon.0 10.10.10.201:6789/0 279 : [INF] osdmap e1109: 
12 osds: 11 up, 12 in
2014-01-13 12:04:31.208211 mon.0 10.10.10.201:6789/0 280 : [INF] pgmap v40630: 
512 pgs: 458 active+clean, 54 stale+active+clean; 0 bytes data, 43026 MB used, 
44648 GB / 44690 GB avail
2014-01-13 12:04:32.184767 mon.0 10.10.10.201:6789/0 281 : [INF] osdmap e1110: 
12 osds: 11 up, 12 in
2014-01-13 12:04:32.274588 mon.0 10.10.10.201:6789/0 282 : [INF] pgmap v40631: 
512 pgs: 458 active+clean, 54 stale+active+clean; 0 bytes data, 43026 MB used, 
44648 GB / 44690 GB avail
2014-01-13 12:04:35.869090 mon.0 10.10.10.201:6789/0 283 : [INF] pgmap v40632: 
512 pgs: 458 active+clean, 54 stale+active+clean; 0 bytes data, 35358 MB used, 
44655 GB / 44690 GB avail
2014-01-13 12:04:36.918869 mon.0 10.10.10.201:6789/0 284 : [INF] pgmap v40633: 
512 pgs: 406 active+clean, 21 stale+active+clean, 85 active+degraded; 0 bytes 
data, 14550 MB used, 44676 GB / 44690 GB avail
2014-01-13 12:04:38.013886 mon.0 10.10.10.201:6789/0 285 : [INF] pgmap v40634: 
512 pgs: 381 active+clean, 131 active+degraded; 0 bytes data, 4903 MB used, 
44685 GB / 44690 GB avail
2014-01-13 12:06:35.971039 mon.0 10.10.10.201:6789/0 286 : [INF] pgmap v40635: 
512 pgs: 381 active+clean, 131 active+degraded; 0 bytes data, 4903 MB used, 
44685 GB / 44690 GB avail
2014-01-13 12:06:37.054701 mon.0 10.10.10.201:6789/0 287 : [INF] pgmap v40636: 
512 pgs: 381 active+clean, 131 active+degraded; 0 bytes data, 4903 MB used, 
44685 GB / 44690 GB avail
2014-01-13 12:09:35.336782 mon.0 10.10.10.201:6789/0 288 : [INF] osd.0 out 
(down for 304.265855)
2014-01-13 12:09:35.367765 mon.0 10.10.10.201:6789/0 289 : [INF] osdmap e: 
12 osds: 11 up, 11 in
2014-01-13 12:09:35.441982 mon.0 10.10.10.201:6789/0 290 : [INF] pgmap v40637: 
512 pgs: 381 active+clean, 131 active+degraded; 0 bytes data, 836 MB used, 
40965 GB / 40966 GB avail
2014-01-13 12:09:36.481087 mon.0 10.10.10.201:6789/0 291 : [INF] osdmap e1112: 
12 osds: 11 up, 11 in
2014-01-13 12:09:36.26 mon.0 10.10.10.201:6789/0 292 : [INF] pgmap v40638: 
512 pgs: 381 active+clean, 131 active+degraded; 0 bytes data, 836 MB used, 
40965 GB / 40966 GB avail
2014-01-13 12:09:37.582968 mon.0 10.10.10.201:6789/0 293 : [INF] osdmap e1113: 
12 osds: 11 up, 11 in
2014-01-13 12:09:37.677104 mon.0 10.10.10.201:6789/0 294 : [INF] pgmap v40639: 
512 pgs: 381 active+clean, 131 active+degraded; 0 bytes data, 836 MB used, 
40965 GB / 40966 GB avail
2014-01-13 12:09:40.898194 mon.0 10.10.10.201:6789/0 295 : [INF] pgmap v40640: 
512 pgs: 392 active+clean, 120 active+degraded; 0 bytes data, 837 MB used, 
40965 GB / 40966 GB avail
2014-01-13 12:09:41.940257 mon.0 10.10.10.201:6789/0 296 : [INF] pgmap v40641: 
512 pgs: 419 active+clean, 3 active+remapped, 90 active+degraded; 0 bytes data, 
838 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:09:43.044860 mon.0 10.10.10.201:6789/0 297 : [INF] pgmap v40642: 
512 pgs: 453 active+clean, 13 active+remapped, 46 active+degraded; 0 bytes 
data, 839 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:11:40.923450 mon.0 10.10.10.201:6789/0 298 : [INF] pgmap v40643: 
512 pgs: 453 active+clean, 13 active+remapped, 46 active+degraded; 0 bytes 
data, 839 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:11:42.007022 mon.0 10.10.10.201:6789/0 299 : [INF] pgmap v40644: 
512 pgs: 453 active+clean, 13 active+remapped, 46 active+degraded; 0 bytes 
data, 838 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:11:43.082319 mon.0 10.10.10.201:6789/0 300 : [INF] pgmap v40645: 
512 pgs: 453 active+clean, 13 active+remapped, 46 active+degraded; 0 bytes 
data, 838 MB used, 40965 GB / 40966 GB avail

The PGs now remain in a degraded state - but why? (The data should move to
the remaining OSDs.)

The failed osd.0 is still in the crush map. If I remove osd.0 I get the
'active+clean' state.
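A minimal sketch of that removal, assuming the standard ceph CLI against a live cluster (osd.0 as in the log above; the exact commands are not quoted in this thread):

```shell
# Take the failed OSD out of the CRUSH map so CRUSH stops mapping PGs
# to it and recovery can drive the degraded PGs back to active+clean.
ceph osd crush remove osd.0
# Remove its authentication key, then the OSD entry itself.
ceph auth del osd.0
ceph osd rm 0
```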

2014-01-13 12:18:48.663686 mon.0 10.10.10.201:6789/0 303 : [INF] osdmap e1115: 
11 osds: 11 up, 11 in
2014-01-13 12:18:48.793016 mon.0 10.10.10.201:6789/0 304 : [INF] pgmap v40647: 
512 pgs: 453 active+clean, 13 active+remapped, 46 active+degraded; 0 bytes 
data, 838 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:18:49.686309 mon.0 10.10.10.201:6789/0 305 : [INF] osdmap e1116: 
11 osds: 11 up, 11 in
2014-01-13 12:18:49.759685 mon.0 10.10.10.201:6789/0 306 : [INF] pgmap v40648: 
512 pgs: 453 active+clean, 13 active+remapped, 46 active+degraded; 0 bytes 
data, 838 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:18:52.995909 mon.0 10.10.10.201:6789/0 307 : [INF] pgmap v40649: 
512 pgs: 463 active+clean, 13 active+remapped, 36 active+degraded; 0 bytes 
data, 

Re: [ceph-users] radosgw setting public ACLs fails.

2014-01-13 Thread Wido den Hollander

On 11/27/2013 10:43 PM, Yehuda Sadeh wrote:

I just pushed a fix for review for the s3cmd --setacl issue. It should
land in a stable release soonish.



So this is commit 14cf4caff58cc2c535101d48c53afd54d8632104 right?

It says:

Fixes: #6892
Backport: dumpling, emperor

But it never got backported to Dumpling 0.67.5. I have now manually cherry-
picked it on a system of mine and am building.


The issue: http://tracker.ceph.com/issues/6892

Wido


Thanks,
Yehuda

On Wed, Nov 27, 2013 at 10:12 AM, Shain Miley  wrote:

Derek,
That's great...I am hopeful it makes it into the next release too...it will
solve several issues we are having trying to work around radosgw bucket and
object permissions when there are multiple users writing files to our buckets.

And with the 's3cmd setacl' failing...at this point I don't see too many other 
alternatives for us.

Thanks again,

Shain

Shain Miley | Manager of Systems and Infrastructure, Digital Media | 
smi...@npr.org | 202.513.3649


From: Derek Yarnell [de...@umiacs.umd.edu]
Sent: Wednesday, November 27, 2013 11:21 AM
To: Shain Miley
Cc: de...@umiacs.umd.edu; ceph-users
Subject: Re: [ceph-users] radosgw setting public ACLs fails.

On 11/26/13, 3:31 PM, Shain Miley wrote:

Micha,

Did you ever figure out a work around for this issue?

I also had plans of using s3cmd to put, and recursively set, ACLs on a nightly
basis...however we are getting the 403 errors as well during our testing.

I was just wondering if you were able to find another solution.


Hi,

There is code[1] in the master branch (I am not sure, but I hope it will
make it into the next stable release; it is not in 0.72.x) that allows
you to defer to the bucket ACLs.  defer_to_bucket_acls is the configurable,
which allows for two different modes.  Recurse just propagates the
specific bucket ACLs to all the keys; it does fall through to the key
ACL if the bucket ACL doesn't apply.  Full_control allows someone with
FULL_CONTROL at the bucket level to do whatever they want to the keys
(including replacing the whole ACL), and again falls through to the key ACL.

Note this breaks AWS S3 compatibility, which is why it is a configurable.
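If the patch lands, enabling it would presumably be a ceph.conf fragment along these lines - the option name `rgw defer to bucket acls` and the section name are assumptions inferred from the configurable above, not confirmed syntax:

```ini
[client.radosgw.gateway]
# assumed option name derived from defer_to_bucket_acls;
# value is one of the two modes described above
rgw defer to bucket acls = recurse   # or: full_control
```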

[1] - https://github.com/ceph/ceph/pull/672

Thanks,
derek

--
Derek T. Yarnell
University of Maryland
Institute for Advanced Computer Studies


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on


Re: [ceph-users] How to increase filesizelimit ?

2014-01-13 Thread Markus Goldberg

Hi Sage,
hopefully two last questions:
How do I delete an existing MDS? The Ceph manual says: "Coming soon"
I tried:
  root@bd-a:/# ceph-deploy mds destroy bd-0 bd-1 bd-2
  [ceph_deploy.cli][INFO  ] Invoked (1.3.4): /usr/bin/ceph-deploy mds 
destroy bd-0 bd-1 bd-2

  [ceph_deploy.mds][ERROR ] subcommand destroy not implemented
  root@bd-a:/#

I think the existing data is lost when destroying the MDS.
Is it possible to create a second MDS beside the existing one and copy the
data from one to the other internally, or do I have to save the data to
an external repository?


Thank you,
  Markus
Am 09.01.2014 13:21, schrieb Sage Weil:

I take it back: it is encoded in the MDSMap, so it can be changed for an
existing file system, except for the fact that the monitor doesn't yet
have a command to do it.  I can put a patch together to do that.  In the
meantime, you *can* set the limit on cluster creation by adding

  mds max file size = 100

(or whatever) to your ceph.conf before creating the monitors.

sage


On Thu, 9 Jan 2014, Markus Goldberg wrote:


Hi Sage,
that sounds good.

Thank you very much,
   Markus
Am 09.01.2014 13:10, schrieb Sage Weil:

   Hi Markus,

   There is a compile time limit of 1 tb per file in cephfs.  We
   can increase that pretty easily.  I need to check whether it can
   be safely switched to a configurable...

   sage



   Markus Goldberg  wrote:

Hi,
I want to use my Ceph cluster for a backup repository holding virtual tapes.
When I copy files from my existing backup system to Ceph, all files are
cut at 1TB. The biggest files are around 5TB for now.
So I'm afraid that the actual file size limit is set to 1TB.
How can I increase this limit?
Can this be done without losing existing data?

Thank you very much,
Markus




Markus Goldberg   Universität Hildesheim
Rechenzentrum
Tel +49 5121 88392822 Marienburger Platz 22, D-31141 Hildesheim, Germany
Fax +49 5121 88392823 email goldb...@uni-hildesheim.de








--
Sent from Kaiten Mail. Please excuse my brevity.



--
MfG,
   Markus Goldberg

--
Markus Goldberg   Universität Hildesheim
   Rechenzentrum
Tel +49 5121 88392822 Marienburger Platz 22, D-31141 Hildesheim, Germany
Fax +49 5121 88392823 email goldb...@uni-hildesheim.de
--




--
MfG,
  Markus Goldberg

--
Markus Goldberg   Universität Hildesheim
  Rechenzentrum
Tel +49 5121 88392822 Marienburger Platz 22, D-31141 Hildesheim, Germany
Fax +49 5121 88392823 email goldb...@uni-hildesheim.de
--




[ceph-users] FW: doubt if it's a defect in ceph_rest_api.py or i did anything wrong...

2014-01-13 Thread Gao, Wei M
Hi, All

 

I ran ceph-rest-api and sent a GET request to /api/v0.1, but got an
internal server error. The log of ceph-rest-api is attached inline below:

2014-01-13 07:41:14,128 __main__ INFO: http://10.239.149.9:5050/api/v0.1
from 10.239.206.107 Mozilla/5.0 (Windows NT 6.2; WOW64; rv:26.0)
Gecko/20100101 Firefox/26.0

2014-01-13 07:41:14,145 __main__ ERROR: Exception on /api/v0.1 [GET]

Traceback (most recent call last):

  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1504, in
wsgi_app

response = self.full_dispatch_request()

  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1264, in
full_dispatch_request

rv = self.handle_user_exception(e)

  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1262, in
full_dispatch_request

rv = self.dispatch_request()

  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1248, in
dispatch_request

return self.view_functions[rule.endpoint](**req.view_args)

  File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 418, in
handler

helptext = show_human_help(prefix)

  File "/usr/lib/python2.7/dist-packages/ceph_rest_api.py", line 292, in
show_human_help

line.append(permmap[cmdsig['perm']])

KeyError: u'w'

 

For now, I got around the issue by changing the following line in
ceph_rest_api.py:

permmap = {'r':'GET', 'rw':'PUT'}

to:

permmap = {'r':'GET', 'rw':'PUT', 'w':'PUT'}
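A standalone sketch of the failure and the workaround - the `permmap`/`cmdsig` names mirror ceph_rest_api.py, but this snippet is self-contained and not the actual module code:

```python
# show_human_help() maps a command's permission string to an HTTP verb.
# A write-only command ('w') is missing from the original map, so the
# dict lookup raises KeyError, which Flask turns into a 500 error.
cmdsig = {'perm': 'w'}  # e.g. a command signature that only writes

permmap = {'r': 'GET', 'rw': 'PUT'}  # original mapping
try:
    method = permmap[cmdsig['perm']]
except KeyError:
    method = None  # this is the crash seen in the traceback

# Workaround: also map the write-only permission to PUT.
permmap = {'r': 'GET', 'rw': 'PUT', 'w': 'PUT'}
fixed = permmap[cmdsig['perm']]

print(method, fixed)
```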

 

I want to check whether this is a defect in ceph_rest_api.py or whether I did
something wrong.

 

Best Regards

Wei





Re: [ceph-users] radosgw swift java jars

2014-01-13 Thread raj kumar
Thank you.


On Thu, Jan 9, 2014 at 12:51 PM, Wido den Hollander  wrote:

> On 01/09/2014 06:37 AM, raj kumar wrote:
>
>> Thank you for your time.  Someone may already have used a suitable Java
>> Swift API for CS.  I would like to hear from them.
>>
>>
> This is not a part of the Ceph project, so I'm not able to help you with
> that.
>
> You should contact the rackerlabs people about this.
>
> Wido
>
>  I tried using, https://github.com/rackerlabs/java-cloudfiles. Since it
>> requires lot of other deps, the java program fails. I'm getting,
>>
>> java -classpath lib/*:apache-log4j-1.2.16/log4j-1.2.16.jar:. MyExample
>> log4j:WARN No appenders could be found for logger
>> (com.rackspacecloud.client.cloudfiles.FilesUtil).
>> log4j:WARN Please initialize the log4j system properly.
>> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
>> more info.
>> Exception in thread "main" java.lang.IllegalArgumentException: timeout
>> can't be negative
>>  at java.net.Socket.setSoTimeout(Socket.java:1091)
>>  at
>> org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(
>> PlainSocketFactory.java:116)
>>  at
>> org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(
>> DefaultClientConnectionOperator.java:178)
>>  at
>> org.apache.http.impl.conn.AbstractPoolEntry.open(
>> AbstractPoolEntry.java:144)
>>  at
>> org.apache.http.impl.conn.AbstractPooledConnAdapter.open(
>> AbstractPooledConnAdapter.java:131)
>>  at
>> org.apache.http.impl.client.DefaultRequestDirector.tryConnect(
>> DefaultRequestDirector.java:610)
>>  at
>> org.apache.http.impl.client.DefaultRequestDirector.execute(
>> DefaultRequestDirector.java:445)
>>  at
>> org.apache.http.impl.client.AbstractHttpClient.doExecute(
>> AbstractHttpClient.java:863)
>>  at
>> org.apache.http.impl.client.CloseableHttpClient.execute(
>> CloseableHttpClient.java:82)
>>  at
>> org.apache.http.impl.client.CloseableHttpClient.execute(
>> CloseableHttpClient.java:106)
>>  at
>> org.apache.http.impl.client.CloseableHttpClient.execute(
>> CloseableHttpClient.java:57)
>>  at
>> com.rackspacecloud.client.cloudfiles.FilesClient.login(
>> FilesClient.java:288)
>>  at MyExample.main(MyExample.java:22)
>>
>> and java program is:
>>
>> import java.io.*;
>> import java.io.File;
>> import java.util.List;
>> import java.util.Map;
>> import com.rackspacecloud.client.cloudfiles.FilesClient;
>> import com.rackspacecloud.client.cloudfiles.FilesConstants;
>> import com.rackspacecloud.client.cloudfiles.FilesContainer;
>> import com.rackspacecloud.client.cloudfiles.
>> FilesContainerExistsException;
>> import com.rackspacecloud.client.cloudfiles.FilesObject;
>> import com.rackspacecloud.client.cloudfiles.FilesObjectMetaData;
>> import org.apache.http.*;
>>
>> public class MyExample {
>>public static void main(String a[]){
>>
>>String username = "raj:swift";
>>String password = "PJpKiW4XVjC534czbbgexOS6/TEpso6p306ML9KZ";
>>String authUrl  = "http://192.168.211.71/auth";;
>>
>>try {
>>  FilesClient client = new FilesClient(username, password,
>> authUrl);
>>  if (!client.login()) {
>>  throw new RuntimeException("Failed to log in");
>>  }
>>
>>  client.createContainer("my-new-javacontainer");
>>
>>  File file = new File("foo.txt");
>>  String mimeType = FilesConstants.getMimetype("txt");
>>  client.storeObject("my-new-javacontainer", file, mimeType);
>>
>>  List<FilesObject> objects =
>> client.listObjects("my-new-javacontainer");
>>  for (FilesObject object : objects) {
>>  System.out.println("  " + object.getName());
>>  }
>>} catch(HttpException | IOException e) {
>>  System.out.println("error");
>>}
>>
>>
>>   }
>> }
>>
>>
>>
>> On Tue, Jan 7, 2014 at 12:45 PM, raj kumar > > wrote:
>>
>> I meant could not find required jar files to run java swift program.
>>
>>
>> On Mon, Jan 6, 2014 at 11:35 PM, raj kumar > > wrote:
>>
>> Hi, could find all necessary jars required to run the java
>> program.  Is there any place to get all jars for both swift and
>> s3? Thanks.
>>
>>
>>
>>
>
> --
> Wido den Hollander
> 42on B.V.
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
>