Wed, Jul 13, 2016 at 12:18 PM, Shrinand Javadekar
wrote:
> Hi,
>
> I am trying to understand the mechanism used by Swift to determine
> which storage node to send a GET request to.
>
> I have a single node setup with 4 disks: r1, r2, r3 and r4. For a
> given container and ob
Hi,
I am trying to understand the mechanism used by Swift to determine
which storage node to send a GET request to.
I have a single node setup with 4 disks: r1, r2, r3 and r4. For a
given container and object-name, the swift-get-nodes -a output shows
the following:
Server:Port Device 127.0.0.1:6
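For reference, the node choice comes from the ring: Swift hashes the object path with md5 and uses the top bits of the digest to pick a partition, which the ring maps to devices. A minimal sketch of that mapping follows; `HASH_SUFFIX`, `PART_POWER`, and the `part2dev` table are made-up illustrative values, not read from any real ring file:

```python
import hashlib
import struct

# Illustrative values: a real deployment reads the hash suffix from
# swift.conf and the part power from the ring builder file.
HASH_SUFFIX = b"changeme"
PART_POWER = 8  # ring built with 2**8 partitions

def partition_for(account, container, obj):
    """Map an object path to a ring partition the way Swift does:
    md5 the path (plus the cluster suffix), keep the top PART_POWER bits."""
    path = ("/%s/%s/%s" % (account, container, obj)).encode("utf-8")
    digest = hashlib.md5(path + HASH_SUFFIX).digest()
    # first 4 bytes as a big-endian int, shifted down to PART_POWER bits
    return struct.unpack(">I", digest[:4])[0] >> (32 - PART_POWER)

# toy partition->device table (a real ring stores one row per replica)
part2dev = {p: "r%d" % (p % 4 + 1) for p in range(2 ** PART_POWER)}

part = partition_for("AUTH_test", "cont", "obj")
print(part, part2dev[part])
```

Because the mapping is deterministic, swift-get-nodes can recompute the same partition and device list for any /account/container/object path.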
I was able to reproduce the issue with some manual intervention on the
same 1 node setup.
1. Using swift-get-nodes, I found the exact order of nodes in which
Swift was going to attempt to write an object.
2. Then I manually unmounted the primary and first handoff disk.
3. Then I wrote the object u
>
> I think in a four device single node single replica setup I'd probably just
> run request_node_count = 4 and call it a day.
I'll give this a shot right away.
But there are two questions that remain unanswered.
1. Why is there a discrepancy in the way writes vs reads are handled?
Isn't reques
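My mental model of what request_node_count changes on the read path is sketched below; the function name and device lists are illustrative only, not Swift's actual API:

```python
# Simplified sketch: on a GET, the proxy walks primaries first, then
# handoffs, and gives up after request_node_count candidates. With
# request_node_count = 4 on a 4-device ring, every device is checked.
def nodes_to_try(primaries, handoffs, request_node_count):
    candidates = list(primaries) + list(handoffs)
    return candidates[:request_node_count]

primaries = ["r1"]           # single replica: one primary device
handoffs = ["r2", "r3", "r4"]
print(nodes_to_try(primaries, handoffs, 4))
```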
Here's my test setup:
- Single node
- Single replica
- 4 disks: /srv/node/r1, r2, r3 and r4.
- Backed by SSDs
Unfortunately, I don't have the logs from when the object was first
written. But I can definitely say that it returned 201. This was done
using an application (not manually). We also logged that
d
>> path.
Yes, I was running on a single-replica system. The object was *only*
found on the second handoff node (expected, I guess, because num
replicas is 1). The original PUT request returned SUCCESS. I'd try to
read the object iff the original PUT succeeded.
On Tue, May 24, 2016 at
Thanks for the detailed explanation...
>>
>>
>> 1. So when the replicator catches up, it will move the object back to
>> the correct location. Is that right?
>
>
> The read path will find the object on any primary or any handoff location.
> The replicator *will* copy the data files to the primary
wrote:
> On 24/05/16 11:20, Clay Gerrard wrote:
>>
>>
>> On Mon, May 23, 2016 at 1:49 PM, Shrinand Javadekar
>> <shrin...@maginatics.com> wrote:
>>
>>
>> If objects are placed on different devices than the computed ones,
>> t
Mark Kirkwood
wrote:
> On 21/05/16 05:27, Shrinand Javadekar wrote:
>>
>> Hi,
>>
>> I am troubleshooting a test setup where Swift returned a 201 for
>> objects that were put in it but later when I tried to read it, I got
>> back 404s.
>>
>> The sys
Hi,
I am troubleshooting a test setup where Swift returned a 201 for
objects that were put in it but later when I tried to read it, I got
back 404s.
The system has been under load. I see lots of connection errors,
lock-timeouts, etc. However, I am not sure whether Swift should ever
be returning a 404.
Hi,
Based on the "Doubling Performance in Swift with No Code Changes" talk
at the OpenStack Summit, I decided to give running Swift on PyPy a
shot. I configured a VM with Swift (largely based on the steps
mentioned in [1], although I did have to change a few things). Swift
was up and running. The
Hi,
I am trying to test the latest hummingbird code. However, I ran into
issues even while getting the basics to work.
The last commit I have is this one:
commit adcd49a481cc4f4752e2e43ec5e5724687f44945
Merge: 7c9fc6d 9d3d2dc
Author: Jenkins
Date: Tue Oct 20 20:41:20 2015 +
Merge "go
Missed out an important detail:
My HTTP request generator, Swift proxy server and Swift object server
are all on the same machine. Network latency itself shouldn't be high.
On Mon, Sep 21, 2015 at 9:57 PM, Shrinand Javadekar
wrote:
> Hi,
>
> I am trying to dig deeper into where
Hi,
I am trying to dig deeper into where time is spent during a PUT
request. I added a timer (datetime.datetime) at the start and end of
proxy/controllers/obj.py:PUT(). I am seeing the time reported here to
be in the range of 100ms - 800ms. I believe this also includes the
time required to do the
Hi,
I'm seeing the following errors in my syslog about account.recon not
being present. Any ideas why this file did not get created in the
first place?
Sep 17 23:11:43 machine-name object-server: message repeated 3 times:
[ Error reading recon cache file: #012Traceback (most recent call
last):#01
nswers.launchpad.net/swift/+question/156307
>
> http://stackoverflow.com/questions/28379809/how-are-hash-collisions-handled
>
>
> Anthony.
>
> -----Original Message-----
> From: Shrinand Javadekar [mailto:shrin...@maginatics.com]
> Sent: Wednesday, August 26, 2015 1:37 P
Actually, I'm confused now. I used to think that Swift does HTTP
deletes by synchronously truncating the object file and renaming it
with a .ts extension. But the current code simply creates a new file
with the request timestamp and a .ts extension.
On Wed, Aug 26, 2015 at 1:37 PM, Shr
Hi,
I have a question about how object deletes are handled with md5
collisions. I looked at the code and here's my understanding of how
things will work.
If I have two objects that have the same md5 hash, they will go to the
same hash directory. Say, they go to
/srv/node/r1/object/1024/eef/deadbe
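To make my understanding concrete: within one hash directory, files are named by request timestamp, and the newest one wins, with a .ts file meaning "deleted". A toy sketch of that resolution rule (my own illustration, not Swift's actual code):

```python
# Sketch of last-timestamp-wins inside a single hash directory.
# Filenames are "<x-timestamp>.data" or "<x-timestamp>.ts" (tombstone).
def resolve(filenames):
    newest = max(filenames, key=lambda f: float(f.rsplit(".", 1)[0]))
    return "deleted" if newest.endswith(".ts") else newest

files = ["1409350864.11000.data", "1409351000.50000.ts"]
print(resolve(files))  # the tombstone is newer, so: deleted
```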
maybe you can try more than 100 partitions
> and see if that is still an issue. Without looking at your ring options etc.,
> it is hard to make a call.
>
> Inviato da iPhone
>
>> Il giorno 10/ago/2015, alle ore 12:49, Shrinand Javadekar
>> ha scritto:
>>
>>
Hi,
I have a Swift setup with 8 disks of 3TB each. I went by the suggested
config of having over 100 partitions per disk. However, with this I'm
seeing that performance is really slow.
I reduced the number of partitions to about 8 per disk and the
performance has gone up by almost 5x.
I realize
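The arithmetic behind "partitions per disk" is simple; this sketch shows how the ring's part power translates to the numbers described above:

```python
# With part power p, the ring has 2**p partitions; each is stored
# `replicas` times, spread over the disks.
def parts_per_disk(part_power, replicas, disks):
    return 2 ** part_power * replicas / disks

# 8 disks, 1 replica: part power 10 gives 128 partitions per disk
# (the >100 suggestion), part power 6 gives the reduced 8 per disk.
print(parts_per_disk(10, 1, 8), parts_per_disk(6, 1, 8))
```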
>> http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfsprogs.git;a=blob;f=mkfs/xfs_mkfs.c;h=5084d755;hb=HEAD#l688
>>
>> Might be a good idea to do some benchmarking with different AG numbers?
>
>
> Could be useful, but we should first get Swift to not dump everything in the
> same AG. Otherwise, th
I was able to make the code change to create the tmp directory in the
3-byte hash directory and fix the unit tests to get this to work. I
will file a bug to get a discussion started on this, in case there are
people not following this thread.
On Wed, Apr 29, 2015 at 4:08 PM, Shrinand Javadekar
Hi,
I have been investigating a pretty serious Swift performance problem
for a while now. I have a single node Swift instance with 16 cores,
64GB memory and 8 MDs of 3TB each. I only write 256KB objects into
this Swift instance with high concurrency; 256 parallel object PUTs.
Also, I was sharding
Hi,
I observe that while placing data, the object server creates a
directory structure:
/srv/node/r0/objects/<partition>/<3 byte hash suffix>/<hash>/<timestamp>.data.
Is there a reason for the <hash> directory to be created? Couldn't
this just have been
/srv/node/r0/objects/<partition>/<3 byte hash suffix>/<hash>.data?
I am seeing a situation w
ed with 8, but
didn't see too much difference. Analysis done using sysdig suggests
that CPU is the bottleneck; not disk.
I'll take a deeper look at this with htop and see what's happening.
-Shri
P.S. "tanstaafl": Knew the phrase; but learnt the acronym just now...
Lea
On Fri, Apr 3, 2015 at 3:23 AM, wrote:
> Are you using SSL (https)?
Nope. SSL is disabled.
>
>
> On Thu, 2 Apr 2015, Shrinand Javadekar wrote:
>
>> Top shows the CPUs pegged at ~100%. Writes are done by a tool built
>> in-house which is similar in function
lso how are you doing the object writes to benchmark it? Are you using dd?
>
> On 3 April 2015 at 09:50, Yogesh Girikumar wrote:
>>
>> What does top say?
>>
>> On 3 April 2015 at 02:34, Shrinand Javadekar
>> wrote:
>>>
>>> Hi,
>>>
&g
Hi,
I have a single node Swift instance. It has 16 cpus, 8 disks and 64GB
memory. As part of testing, I am doing 256 object writes in parallel
for ~10 mins. Each object is also 256K bytes in size.
While my experiment is running, I see that the CPU utilization of the
box is always ~100%. I am tryi
Hi,
I am seeing two types of errors by the object-replicator. I am running
Swift with a replication factor of 2. Every PUT request will be
required to wait till both copies of the data are written. Therefore,
I'd expect the replicator to not be doing too much work :-).
However, I see these rsync
>
> Is there some sample code for how to drop the buffer cache in python.
> Presumably this will be for each file and not the entire buffer cache.
>
> The tests I ran were in a VM. I can run it on hardware with spinning
> disks underneath to get more accurate numbers.
I ran these tests on a physic
Thanks for the reply Sam. Some comments below.
> Maybe(TM). It depends on your workload. When you make inodes bigger, fewer
> of them fit in the kernel's buffer cache, possibly resulting in more work.
> On the other hand, when you make them smaller, then you always get xattrs
> spilled to extents.
Hi,
I wrote a small microbenchmark for measuring the performance of
extended attributes in XFS. In the experiment, I wrote 100K files,
each with extended attributes. In one experiment, XFS was formatted
with the default inode size of 256 bytes. In the other experiment, it
was formatted with an ino
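For anyone who wants to reproduce this, here is a minimal sketch of the kind of microbenchmark described: write N files and attach one extended attribute to each, timing the whole loop. The attribute name and sizes are arbitrary choices, and os.setxattr is Linux-only; a real run would use 100K files rather than the small N here:

```python
import os
import tempfile
import time

def bench_xattrs(n=100, attr_bytes=64):
    """Write n small files, each with one user xattr, return elapsed seconds."""
    payload = b"x" * attr_bytes
    with tempfile.TemporaryDirectory() as d:
        start = time.monotonic()
        for i in range(n):
            path = os.path.join(d, "f%05d" % i)
            with open(path, "wb") as f:
                f.write(b"data")
            try:
                os.setxattr(path, "user.swift.metadata", payload)
            except OSError:
                pass  # filesystem without user.* xattr support
        return time.monotonic() - start

print("%.4fs" % bench_xattrs())
```

Running this once against a 256-byte-inode XFS and once against a larger inode size is the comparison the original experiment made.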
Hi,
I am seeing several LockTimeout errors on the account-server. I have a
single node Swift instance with 8 disks. I have a single account and
it is using 128 containers.
Mar 19 23:15:53 machine-name account-server: ERROR __call__ error with
PUT /r7/118/AUTH_pepumr/mag-1426694897-vwmsyb-110 : Lo
g solved by tombstones at the object layer
> - except this is for container replication/consistency - which is needed for
> the container api requests and updates to the account layer.
>
> -Clay
>
> On Tue, Mar 10, 2015 at 11:14 AM, Shrinand Javadekar
> wrote:
>>
>> Hi,
Hi,
I see that Swift creates an index on the object table on the columns
(deleted, name) in the container server. Which part of Swift queries
the database for names where 'deleted = true'?
I thought when an object is deleted, Swift synchronously truncates the
file and renames it with a .ts extens
domain).
>
> There is a "knob" in Swift you can turn to configure this choice: overload.
> See the docs linked above for info on it. Also, The swift-ring-builder also
> now includes a "dispersion" command so you can see if your cluster is set up
> to have overweight
Hi,
I have a question about using unevenly sized disks in a Swift cluster
configured to do 2x replication.
Let's say I start with two disks of 10GB each and configure the rings
so that both the disks have the same weight (say 10). In this case,
one replica of each object will be on each device.
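To put numbers on the weight question: ignoring replica-dispersion constraints, each device is expected to hold weight_i / total_weight of the data. (With exactly two devices and two replicas, of course, each device must hold one full replica regardless of weight.) A sketch of that arithmetic:

```python
# Expected fraction of partitions per device, proportional to weight.
def expected_share(weights):
    total = sum(weights.values())
    return {dev: w / total for dev, w in weights.items()}

# two 10GB disks, then adding a third 20GB disk weighted proportionally
print(expected_share({"d1": 10, "d2": 10}))
print(expected_share({"d1": 10, "d2": 10, "d3": 20}))
```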
Hi,
I am exploring an option where the physical hardware on which Swift
will be installed can have an nvram. Has anyone explored putting the
Swift container and account dbs on nvram? Any good/bad experiences
that you can share?
The physical server does not have SSDs and therefore the option is to
Thanks Clay!
On Fri, Dec 5, 2014 at 1:02 PM, Clay Gerrard wrote:
> On Fri, Dec 5, 2014 at 11:47 AM, Shrinand Javadekar
> wrote:
>>
>>
>> If it is less than N, the swift-drive-audit tool could potentially
>> unmount an already recovered drive.
>>
>>
Hi,
The OpenStack Swift admin guide talks about the swift-drive-audit tool
for detecting failed drives and unmounting them. It says that this
tool should be set up to run as a periodic cron job.
I have a question about configuring this correctly so as to:
1) Detect failures and unmount the failed drive
Hi,
I had a discussion about container updates in swift-on-file on the IRC
channel a few days ago [1].
Turns out that the current swift-on-file code does update the
container db after PUTs. The previous version, called gluster-swift
wasn't updating the container db after PUTs.
I want to try disa
Hi,
I have a single node Swift cluster with a replica count of 1. I
recently found that writes to this instance were becoming incredibly
slow due to two types of timeouts:
1) container-server timeout when trying to lock the db.
2) object-server timeout when connecting to the container-server
(sav
[Un]Modified-Since header (see
> http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html).
>
> --John
>
>
>
>
>
> On Aug 8, 2014, at 1:18 PM, Shrinand Javadekar
> wrote:
>
>> Hi,
>>
>> I have a question regarding the way object overwrites work in
Hi,
I have a question regarding the way object overwrites work in the
absence of versioning. I couldn't find this info in the documentation.
Consider the case when I have an object O already present in the Swift
cluster. There are N replicas of this object. When a new PUT request
that overwrites
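As I understand it, the overwrite case resolves by last-write-wins: every PUT carries an X-Timestamp, and the copy with the newest timestamp wins on read and during replication. A toy illustration of that rule (my own sketch, not Swift's code):

```python
# Each replica of an object is a (x_timestamp, body) pair; the newest
# timestamp wins, so a successful overwrite eventually supersedes all
# older replicas once replication catches up.
def winner(versions):
    return max(versions, key=lambda v: v[0])[1]

replicas = [(1409350864.11, b"old"), (1409350999.72, b"new")]
print(winner(replicas))
```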
(Speakers: Shrinand Javadekar )
https://www.openstack.org/vote-paris/Presentation/openstack-swift-as-a-high-throughput-scalable-secure-file-system-backend
As always, we appreciate your support!
-Shri
On Sat, Aug 2, 2014 at 9:25 PM, Gary Kotton wrote:
> Hi,
> Feel free to share. :)
> Thank
Thanks for the responses Sam, John!
On Wed, Jul 30, 2014 at 11:11 AM, John Dickinson wrote:
>
> On Jul 30, 2014, at 10:57 AM, Samuel Merritt wrote:
>
>> On 7/30/14, 10:18 AM, Shrinand Javadekar wrote:
>>> Hi,
>>>
>>> Swift v1 allowed for geo replicati
Hi,
Swift v1 allowed for geo replication using read and write affinity
rules. Now, Swift v2 allows setting storage policies (which can affect
replication) per container. I wanted to know if/how these two
intersect. Some of the following are straight-forward questions, just
wanted to get a confirma
> Hope this helps clear things up.
This does. Thanks for the detailed explanation.
-Shri
>
> --John
>
>
>
>
>
>
> On Jul 22, 2014, at 10:05 AM, Shrinand Javadekar
> wrote:
>
>> This is confusing. So does this mean semantic versioning applies to
>&
t; itself.
> Anne
>
>
> On Tue, Jul 22, 2014 at 12:46 AM, Shrinand Javadekar
> wrote:
>>
>> Hi,
>>
>> Swift has been following the semantic versioning scheme. The fact that
>> the product version changed from v1.x to v2.0 should suggest that the
>>
Hi,
Swift has been following the semantic versioning scheme. The fact that
the product version changed from v1.x to v2.0 should suggest that the
Swift APIs changed in this release.
I see that storage policies has been the biggest change in this
release. Has that impacted the APIs?
Is there a doc
Hugo,
What's the goal of this exercise? And what exactly is the kind of
analysis you have in mind?
Thanks in advance.
-Shri
On Sun, Jul 20, 2014 at 10:28 PM, Kuo Hugo wrote:
> Hi Folks,
>
> I'd like to investigate all third-party clients for OpenStack Swift.
> The requirement is the client too
Thanks for your inputs Edward and Pete. I'll set sysctl net.ipv4.tcp_tw_reuse.
On Fri, Jul 11, 2014 at 8:05 AM, Pete Zaitcev wrote:
> On Tue, 8 Jul 2014 16:26:10 -0700
> Shrinand Javadekar wrote:
>
>> I see that these servers do not use a persistent http connection
>&
Any ideas folks?
On Tue, Jul 8, 2014 at 4:26 PM, Shrinand Javadekar
wrote:
> Hi,
>
> I have a question about the http connections made between the various
> swift server processes. Particularly between the swift proxy server
> and the swift object server.
>
> I see that the
Hi,
I have a question about the http connections made between the various
swift server processes. Particularly between the swift proxy server
and the swift object server.
I see that these servers do not use a persistent http connection
between them. So every blob get/put/delete request will creat
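The cost difference is the connection lifetime: a new TCP handshake per request versus one long-lived socket. A stdlib-only sketch of the two patterns (the host would be whatever backend the proxy targets; nothing here is Swift's actual connection code):

```python
import http.client

def one_shot(host, path):
    """New connection per request: pays a TCP handshake every time."""
    conn = http.client.HTTPConnection(host)
    try:
        conn.request("GET", path)
        return conn.getresponse().status
    finally:
        conn.close()  # socket torn down after a single request

def reused(conn, path):
    """Same HTTPConnection reused: one handshake amortized over many requests."""
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection can be reused
    return resp.status
```

With many small blob operations, the per-request handshake (and the TIME_WAIT sockets it leaves behind) is exactly what tcp_tw_reuse-style tuning tries to mitigate.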
e network latency
> will have impact on your object PUT request.
>
> -Edward
>
>
> On 2014-06-25 at 1:14 AM, Shrinand Javadekar wrote:
eploying your own front-end?
>
>
> *Adam Lawson*
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 702
> Int'l: +1-302-268-6914 ext. 702
> Cell: +1-916-990-1226
>
>
>
> On Tue, J
p to you to choose a service
>> discovery method:
>>
>> Geo-DNS
>> Anycast IP address
>> Unique DNS name per location
>> etc
>>
>> Michael
>>
>>
>>
>>
>>
>> On Mon, Jun 23, 2014 at 9:29 PM, Shrinand Javadekar
>&g
t; not consistent to tell client
> it succeed.
>
> -Edward Zhang
>
>
> On 2014-06-24 at 3:12 PM, Shrinand Javadekar wrote:
Hi,
I have a single node swift cluster. I measured the time taken to
complete a PUT request that originated from three different client
machines. Each client was writing a single 256K byte object.
Note that the time measured was only the time taken on the Swift
cluster itself. I started the timer
Geo-DNS for Keystone servers and each Keystone server returns the local
> Swift endpoint.
> 3. Let user to switch which region of Swift endpoint would they like to use.
>
>
> Hope it help
>
>
> 2014-06-24 8:38 GMT+08:00 Shrinand Javadekar :
>>
>> Hi,
>>
>
Hi,
I am trying to understand the notion of "regions" in Swift. To start
with, it's kinda confusing that the notion of "region" in Keystone is
not exactly the same as that of Swift. So I could authenticate with
Keystone, get a Swift endpoint for a region (Keystone's notion of a
region) and write/r
d.net/swift/+spec/tracing-tool>
> *
> http://www.slideshare.net/zhanghare/swift-distributed-tracing-method-and-tools-v2*
> <http://www.slideshare.net/zhanghare/swift-distributed-tracing-method-and-tools-v2>
>
> Any feedback is welcomed!
>
> -Edward Zhang
>
>
Hi,
I am looking at the Swift codebase and stumbled upon something interesting.
Several functions in the Swift code base have a "@timing_stats()"
decorator. Does this provide a way of profiling a Swift cluster? Is
it possible to get stats for individual GET/PUT requests in Swift? If
so, it'll b
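For intuition, here is a sketch of what a timing_stats-style decorator does: time each call and accumulate per-function counters, the kind of numbers a statsd client would emit. This is simplified; the real decorator reports to statsd rather than a local dict:

```python
import functools
import time

STATS = {}  # function name -> (call count, total seconds)

def timing_stats(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        finally:
            count, total = STATS.get(fn.__name__, (0, 0.0))
            STATS[fn.__name__] = (count + 1, total + time.monotonic() - start)
    return wrapper

@timing_stats
def GET(req):
    return 200  # stand-in for a real handler

GET(None); GET(None)
print(STATS["GET"])
```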
controller
> and drives actually honor the barriers, and things just start to get messy.
> :)
>
> --
> Chuck
>
>
> On Mon, Apr 21, 2014 at 1:13 PM, Shrinand Javadekar
> wrote:
>>
>> Hi,
>>
>> I notice that the recommended way of deploying Swift
Hi,
I notice that the recommended way of deploying Swift is to use XFS on
the storage nodes. This XFS volume is mounted using the "nobarrier"
option.
If I'm not wrong, Swift does an fsync after every put to make sure
that the object is written to disk. But in the absence of barriers
this isn't g
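To make the durability concern concrete, here is a minimal sketch of a durable write: fsync the file, then fsync the containing directory so the new entry is durable too. The worry with "nobarrier" is that the drive's write cache may still reorder these flushes unless the controller is battery-backed:

```python
import os
import tempfile

def durable_write(dirname, name, data):
    path = os.path.join(dirname, name)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)      # flush file data and metadata to the device
    finally:
        os.close(fd)
    dfd = os.open(dirname, os.O_RDONLY)
    try:
        os.fsync(dfd)     # flush the directory entry as well
    finally:
        os.close(dfd)
    return path

with tempfile.TemporaryDirectory() as d:
    p = durable_write(d, "obj.data", b"hello")
    print(os.path.getsize(p))
```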
Can you make sure that memcached is running on your machine? If not,
make sure you install and run it.
Also, I believe it is required that memcached be started first before
starting swift. You might have to do:
$ sudo service memcached stop
$ swift-init all stop
$ sudo service memcached start
$ s
I get a "500 Internal Error" message and stack.sh fails :(.
On Tue, Apr 8, 2014 at 5:26 PM, John Griffith
wrote:
> Use enabled_services in your local.conf file, something like:
>
> ENABLED_SERVICES=g-api,g-reg,key
>
> Might work
>
>
>
> On Tue, Apr 8
Hi,
I want to run devstack with just Glance (and Keystone because Glance
requires Keystone I guess). My localrc is pasted below. However, when
stack.sh completes, I don't see glance running. I looked at the
catalog returned by keystone and the only service reported by keystone
is the "identity ser
Hi,
I recently came across an object store that could disable container
listing and thereby give better performance. By disable, it meant that
a call to list the entries in a container would simply return an empty
status code of 200 without any object names.
I guess this is possible only if the e
. With
sharding objects across containers, I am getting ~50MB/s (800
objects/second of 64KB each).
-Shri
[1] http://rackerlabs.github.io/swift-ppc/
On Mon, Mar 3, 2014 at 10:36 AM, Shrinand Javadekar
wrote:
> One of the options that a colleague of mine came up with is related to
>
. can I try and put many objects into a single directory
by making N = 1. This will reduce the amount of work done when a
single object is written.
What do you think?
-Shri
On Sat, Mar 1, 2014 at 2:25 PM, Shrinand Javadekar
wrote:
> Hi,
>
> I have single node Swift instance running in a VM.
Hi,
I have single node Swift instance running in a VM. It has: 4 cores, 16
GB memory and 300GB SSD disk
I want to get the best possible throughput from this Swift instance
when, say 100 clients are writing data concurrently. Are there any
recommendations to achieve this?
So far, I've tried the f
se will be
> 1.13. We have a release of Swift near the end of the six-month OpenStack
> cycle that is included in the integrated release. For example, OpenStack
> Havana included Swift 1.10.
>
> I hope this clears things up.
>
> --John
>
>
>
> On Feb 20, 2014, at
Hi,
I know that Swift releases do not necessarily coincide with major
Openstack releases (like Grizzly, Havana, etc.).
1) Does Swift have a fixed release cadence?
2) Also, I see that the current releases are all versioned as 1.XY.
Does this mean that these are minor releases and some major Swift
I've already filed a bug based on the conversation on the irc channel:
https://bugs.launchpad.net/keystone/+bug/1273831
On Tue, Jan 28, 2014 at 12:57 PM, Adam Young wrote:
> On 01/27/2014 01:30 PM, Shrinand Javadekar wrote:
>>
>> Hi,
>>
>> I am seeing a diff
Hi,
I am seeing a difference in the values returned by Keystone when a
user is authenticated. These differences are in the endpoints section
of the serviceCatalog.
In one instances, I see the returned value has an "id":
"serviceCatalog": [
{
"endpoints": [
{
doesn't seem like it should work.
>
> -Clay
>
>
> On Thu, Jan 23, 2014 at 4:12 PM, Shrinand Javadekar <
> shrin...@maginatics.com> wrote:
>
>> Yes, swauth.
>>
>>
>> On Thu, Jan 23, 2014 at 1:52 PM, Clay Gerrard wrote:
>>
&g
Yes, swauth.
On Thu, Jan 23, 2014 at 1:52 PM, Clay Gerrard wrote:
> Is SwiftAuth... like Swauth?
>
> https://github.com/gholt/swauth/search?q=SwiftAuth&ref=cmdform
>
> or something else???
>
>
> On Thu, Jan 23, 2014 at 10:44 AM, Shrinand Javadekar <
> shrin
Hi,
I am trying to debug a swift auth problem. There are two swift clusters
using SwiftAuth for authentication.
On one cluster, when the client wants to authenticate, I see a GET request
being sent to:
http://swift.domain.com/v1.0/v1.0
along with the user-name and password. It receives a 200 OK
seem
helpful.
On Mon, Jan 13, 2014 at 10:46 AM, Pete Zaitcev wrote:
> On Fri, 10 Jan 2014 15:25:02 -0800
> Shrinand Javadekar wrote:
>
> > I see that the proxy-server already has a "workers" config option.
> However,
> > looks like that is the # of threads in one pr
Hi,
This question is specific to Openstack Swift. I am trying to understand
just how much is the proxy server a bottleneck when multiple clients are
concurrently trying to write to a swift cluster. Has anyone done
experiments to measure this? It'll be great to see some results.
I see that the pro
only focus on one test that fail.
>
> Cheers,
>
> Fabien Boucher
> OpenStack Engineer
> eNovance SaS - 10 rue de la Victoire 75009 Paris - France
>
> - Original Message -----
> From: "Paul E Luse"
> To: "Shrinand Javadekar" ,
> openst
I thought this must have been asked before, but couldn't find any
reference to it. So here it goes:
I have cloned the git repository for Swift locally (on MacOS). I
wanted to play with some code and see if it breaks any unit tests.
When I run tox -e py27, the entire test suite is executed. What's the
Why don't you use the swift command-line client? That's easier to work
with than using curl.
http://swiftstack.com/docs/integration/python-swiftclient.html
On Thu, Dec 5, 2013 at 3:58 AM, pragya jain wrote:
> hello all,
>
> I had installed swift using link
> http://docs.openstack.org/developer/s
I had similar questions earlier. You might find [1] useful.
As John mentioned, under highly concurrent workloads the container
itself can become a bottleneck and sharding across containers can help
speed things up. Also, as containers grow in size, the sqlite database
keeping information about the
>
> Jonathan Lu
>
>
> On 2013/11/21 3:05, Shrinand Javadekar wrote:
>>
>> TempURLs are generated for a specific object (file). So if your
>> filename is "foo", generate the tempurl as:
>>
>> $ swift-temp-url PUT 300 /v1/AUTH_blah/container/fo
TempURLs are generated for a specific object (file). So if your
filename is "foo", generate the tempurl as:
$ swift-temp-url PUT 300 /v1/AUTH_blah/container/foo
With the generated URL, only the file "foo" will be allowed to be PUT.
-Shri
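Under the hood, swift-temp-url computes an HMAC-SHA1 over "METHOD\nexpires\npath" with the account's temp-url key; the key here is a placeholder:

```python
import hmac
import time
from hashlib import sha1

def make_temp_url(method, seconds, path, key):
    expires = int(time.time() + seconds)
    body = "%s\n%s\n%s" % (method, expires, path)
    sig = hmac.new(key, body.encode("utf-8"), sha1).hexdigest()
    return "%s?temp_url_sig=%s&temp_url_expires=%s" % (path, sig, expires)

print(make_temp_url("PUT", 300, "/v1/AUTH_blah/container/foo", b"secret"))
```

Because the path is inside the signed body, the signature is only valid for that one object, which is why a URL generated for "foo" can't be used to PUT anything else.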
On Wed, Nov 20, 2013 at 1:31 AM, Jonathan Lu wrote:
> Hi,
before setting up
>> this.
>>
>> Instructions in the following link may help you to setup havana
>> cloud-archive then upgrade your swift to havana.
>>
>> http://docs.openstack.org/havana/install-guide/install/apt/content/basics-packages.html
>>
>
Are there any ways I can try debugging/troubleshooting this?
-Shri
On Fri, Oct 25, 2013 at 10:19 PM, Shrinand Javadekar <
shrin...@maginatics.com> wrote:
> Hi,
>
> My attempt to upgrade my 3 node swift installation from v1.9.1 to v1.10.0
> fails without any errors :(. I downl
Hi,
My attempt to upgrade my 3 node swift installation from v1.9.1 to v1.10.0
fails without any errors :(. I downloaded the tar ball from
https://launchpad.net/swift/havana/1.10.0/+download/swift-1.10.0.tar.gz and
then followed these steps:
1. Stopped the services running on my storage nodes usin
go up to 10M or so.
Will keep you'll posted.
-Shri
On Wed, Oct 9, 2013 at 11:11 PM, Shrinand Javadekar > wrote:
>
>> Thanks Chuck.
>>
>> In order to really measure this, I ran some tests on Rackspace; i.e. I
>> got a VM on Rackspace and that VM was talking to a
te:
> On 10/9/13 8:28 PM, Shrinand Javadekar wrote:
>
>> Hi,
>>
>> Objects in a swift container can be deleted by either explicitly
>> deleting them or by setting an expiry timestamp on them. Is there a
>> performance difference between the two? For example, when
issue if you are not putting objects in at a high concurrency.
>
> --
> Chuck
>
>
> On Sun, Sep 1, 2013 at 9:39 PM, Shrinand Javadekar <
> shrin...@maginatics.com> wrote:
>
>> Hi,
>>
>> There have been several articles which talk about keepin
t-object-expirer daemon running?
>
> Mvh / Best regards
> Morten Møller Riis
> Gigahost ApS
> m...@gigahost.dk
>
>
>
>
> On Oct 10, 2013, at 2:28 PM, Shrinand Javadekar
> wrote:
>
> Hi,
>
> Objects in a swift container can be deleted by either explicitly del
Hi,
Objects in a swift container can be deleted by either explicitly deleting
them or by setting an expiry timestamp on them. Is there a performance
difference between the two? For example, when I want to delete an object,
instead of deleting it, can I simply set the X-Delete-After attribute of
tha
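The relationship between the two expiry headers is simple arithmetic: X-Delete-After is relative seconds, which the server converts into an absolute X-Delete-At timestamp that the object-expirer daemon later acts on. A sketch of that conversion:

```python
import time

def delete_at_from_after(delete_after, now=None):
    """X-Delete-After (relative seconds) -> X-Delete-At (epoch seconds)."""
    now = time.time() if now is None else now
    return int(now + delete_after)

headers = {"X-Delete-After": "86400"}  # expire in one day
headers["X-Delete-At"] = str(delete_at_from_after(int(headers["X-Delete-After"])))
print(headers)
```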
Hi,
There have been several articles which talk about keeping the number of
objects in a container to about 1M. Beyond that sqlite starts becoming the
bottleneck. I am going to make sure we abide by this number.
However, has anyone measured whether putting objects among multiple
containers right
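One common way to do this sharding client-side is to pick the container by hashing the object name, so no single container database grows toward the problematic row counts. A sketch (the naming scheme is my own, not from any Swift tooling):

```python
import hashlib

def shard_container(base, obj_name, shards=16):
    """Deterministically spread objects over `shards` containers."""
    h = int(hashlib.md5(obj_name.encode("utf-8")).hexdigest(), 16)
    return "%s_%02d" % (base, h % shards)

print(shard_container("photos", "IMG_0001.jpg"))
```

Since the mapping is deterministic, readers can recompute the container name from the object name alone, at the cost of container listings being split across shards.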