Hello,
If you followed the rsyslog instructions in the SAIO, then the proxy logs
will be in /var/log/swift/proxy.log and proxy.error. If not, they will
be in either /var/log/syslog or /var/log/messages, depending on your
server distro.
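A quick way to check which case you are in (assuming the default log
locations, and that the proxy logs under the program name proxy-server):

tail -f /var/log/swift/proxy.log /var/log/swift/proxy.error
# or, without the SAIO rsyslog setup:
grep proxy-server /var/log/syslog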
--
Chuck
On Tue, Jul 16, 2013 at 4:57 AM, CHABANI
Swift stores object metadata in the xattrs of the file on disk and XFS
stores xattrs in the inodes. When swift was first developed, there were
performance issues with using the default inode size in XFS, which led to us
recommending a larger inode size when creating XFS filesystems.
In the
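For reference, the deployment docs at the time suggested something along
these lines when creating and mounting the filesystem (the device and
mount point below are just placeholders):

mkfs.xfs -i size=1024 -f /dev/sdb1
mount -o noatime,nodiratime,nobarrier,logbufs=8 /dev/sdb1 /srv/node/sdb1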
Hey Mark,
On Wed, May 22, 2013 at 8:59 AM, Mark Brown ntdeveloper2...@yahoo.com wrote:
Thank you for the responses Chuck.
As part of a rebalance, the replicator, I would assume, copies the object
from the old partition to the new partition, and then deletes it from the
old partition. Is that
On Wed, May 22, 2013 at 7:54 PM, Mark Brown ntdeveloper2...@yahoo.com wrote:
Thanks Chuck.
Just one more question about rebalancing. Have there been measurements on
how much it affects performance when a rebalance is in progress? I would
assume it's an operation that puts some load on the
The important lines are the FINAL lines (the others are just there to
print status), so you did 1000 PUTs at 9.1 PUTs per second average,
52.9 per second for GET, and 7.6 for DEL.
--
Chuck
On Tue, Apr 16, 2013 at 3:28 PM, Sujay M sujay@gmail.com wrote:
Hi all,
Can you please let me know
to the TC I will continue to support these
ideals. I deeply care for Openstack and its future success, so please
consider me for this position.
Thanks,
--
Chuck Thier
@creiht
Hi Giuseppe,
The first thing you can do is use the swift-get-nodes utility to find
out where those objects would normally be located. In your case it
will look something like:
swift-get-nodes /etc/swift/object.ring.gz AUTH_ACCOUNTHASH images
8ab06434-5152-4563-b122-f293fd9af465
Of course
Howdy,
The scripts are generated when setup.py is run (either as `setup.py
install` or `setup.py develop`).
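For context, here is a minimal sketch of how such scripts get declared
(a hypothetical project, not Swift's actual setup.py): setuptools either
copies the files listed in scripts= into the bin directory, rewriting the
shebang line, or generates small wrappers from console_scripts entry points.

from setuptools import setup

setup(
    name='example',
    # files copied into the install prefix's bin/, shebang rewritten
    scripts=['bin/example-cli'],
    # or, equivalently, generated wrapper scripts:
    # entry_points={'console_scripts': ['example-cli = example.cli:main']},
)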
--
Chuck
On Mon, Feb 11, 2013 at 11:02 AM, Kun Huang academicgar...@gmail.com wrote:
Hi, swift developers
I found the script /usr/local/bin/swift is:
#!/usr/bin/python -u
#
Hi John,
It would be difficult to recommend a specific drive, because things
change so often. New drives are being introduced all the time.
Manufacturers buy their competition and cancel their awesome products.
So the short answer is that you really need to test the drives out in
your
.
Maybe worth pasting our config over here?
Thanks in advance.
alejandro
On 12 Jan 2013 02:01, Chuck Thier cth...@gmail.com wrote:
Looking at this from a different perspective. Having 2500 partitions
per drive shouldn't be an absolutely horrible thing either. Do you
know how many objects
Hi Leander,
The following assumes that the cluster isn't in production yet:
1. Stop all services on all machines
2. Format and remount all storage devices
3. Re-create rings with the correct partition size (see the sketch below)
4. Push new rings out to all servers
5. Start services back up and test.
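For step 3, re-creating the rings looks roughly like this (the part power,
replica count, min_part_hours, IP, port, and weight below are only
placeholders; repeat the add for every device, and do the same for the
account and container builders):

swift-ring-builder object.builder create 18 3 1
swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 100
swift-ring-builder object.builder rebalance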
--
Chuck
Hey Leander,
Can you post what performance you are getting? If they are all
sharing the same GigE network, you might also check that the links
aren't being saturated, as it is pretty easy to saturate pushing 200k
files around.
--
Chuck
On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert
on those values to get to the end result.
Currently I'm resetting swift with a node size of 64, since 90% of the files
are less than 70KB in size. I think that might help.
On Mon, Jan 14, 2013 at 4:34 PM, Chuck Thier cth...@gmail.com wrote:
Hey Leander,
Can you post what performance you
you recommend
another approach?
On Mon, Jan 14, 2013 at 4:43 PM, Chuck Thier cth...@gmail.com wrote:
Using swift stat probably isn't the best way to determine cluster
performance, as those stats are updated async, and could be delayed
quite a bit as you are heavily loading the cluster
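swift-bench gives a more direct number. A minimal config sketch, assuming
the SAIO tempauth test credentials (adjust auth/user/key and the sizes for
your cluster), run with `swift-bench swift-bench.conf`:

[bench]
auth = http://127.0.0.1:8080/auth/v1.0
user = test:tester
key = testing
concurrency = 10
object_size = 70000
num_objects = 1000
num_gets = 1000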
On Mon, Jan 14, 2013 at 11:01 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
I currently have 4 machines running 10 clients each uploading 1/40th of the
data. More than 40 simultaneous clients start to severely affect
Keystone's ability to handle these operations.
You might also
On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
Also, I'm unable to run the swift-bench with keystone.
Hrm... That was supposed to be fixed with this bug:
https://bugs.launchpad.net/swift/+bug/1011727
My keystone dev instance isn't working at the moment,
by the way.
On Mon, Jan 14, 2013 at 5:18 PM, Chuck Thier cth...@gmail.com wrote:
On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
Also, I'm unable to run the swift-bench with keystone.
Hrm... That was supposed to be fixed with this bug:
https://bugs.launchpad.net/swift/+bug/1011727
are producing any logs. I'm not sure what
to do :S
On Mon, Jan 14, 2013 at 6:50 PM, Chuck Thier cth...@gmail.com wrote:
You would have to look at the proxy log to see if a request is being
made. The results from the swift command line are just the calls that
the client makes. The server
Alejandro Comisario
#melicloud CloudBuilders
Arias 3751, Piso 7 (C1430CRG)
Ciudad de Buenos Aires - Argentina
Cel: +549(11) 15-3770-1857
Tel : +54(11) 4640-8443
On Mon, Jan 14, 2013 at 1:23 PM, Chuck Thier cth...@gmail.com wrote:
Hi Alejandro,
I really doubt that partition size
and
distribution to higher level of Swift.
Chuck Thier cth...@gmail.com
Sent by: openstack-bounces+zhuadl=cn.ibm
There are a couple of things to think about when using RAID (or more
specifically parity RAID) with swift.
The first has already been identified in that the workload for swift
is very write heavy with small random IO, which is very bad for most
parity RAID. In our testing, under heavy workloads,
The metadata for objects is stored at the object level, not in the
container dbs. Reporting metadata information for container listings
would require the server to HEAD every object in the container, which
would cause too much work on the backend.
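To illustrate with hypothetical names: a HEAD on the object returns its
X-Object-Meta-* headers, while a container GET listing only includes the
name, bytes, hash, content type, and last-modified time for each object:

curl -I -H "X-Auth-Token: $TOKEN" http://127.0.0.1:8080/v1/AUTH_test/photos/img001.jpg
curl -H "X-Auth-Token: $TOKEN" "http://127.0.0.1:8080/v1/AUTH_test/photos?format=json"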
--
Chuck
On Wed, Dec 12, 2012 at 7:01 AM,
Top posting to give some general history and some thoughts.
Snapshots, as implemented currently in cinder, are derived from the
EBS definition of a snapshot. This is more of a consistent block
level backup of a volume. New volumes can be created from any given
snapshot. This is *not* usually
Hi Javier,
On Tue, Nov 6, 2012 at 5:07 AM, Javier Fontan jfon...@opennebula.org wrote:
Hello,
We recently had interest from some of our enterprise users to use
Swift Object Store as the backend for the VM images. I have been
researching on a possible integration with OpenNebula but I have
Hey Vish,
First, thanks for bringing this up for discussion. Coincidentally a
similar discussion had come up with our teams, but I had pushed it
aside at the time due to time constraints. It is a tricky problem to
solve generally for all hypervisors. See my comments inline:
On Mon, Aug 13,
We currently have a large deployment that is based on nova-volume as it is
in trunk today, and just ripping it out will be quite painful. For us,
option #2 is the only suitable option.
We need a smooth migration path, and time to successfully migrate to Cinder.
Since there is no clear migration
On Wed, Jun 20, 2012 at 12:16 PM, Jay Pipes jaypi...@gmail.com wrote:
On 06/20/2012 11:52 AM, Lars Kellogg-Stedman wrote:
A strategy we are making in Nova (WIP) is to allow instance
termination no matter what. Perhaps a similar strategy could be
adopted for volumes too? Thanks,
The
Hey Chmouel,
The first easy step would be to by default not start the aux services
(like replication). And if someone wants to test those, they can run
them manually (similar to how we do dev with the SAIO).
--
Chuck
On Mon, May 14, 2012 at 10:17 AM, Chmouel Boudjnah chmo...@chmouel.com
Hi Sally,
I don't know if we have the code for the original rings, but gholt has
a good series of blog posts that hits on several of the different
stages we went through when designing the ring in swift:
http://www.tlohg.com/p/building-consistent-hashing-ring.html
--
Chuck
2012/4/19 Sally Cong
Some general notes for consistency and swift (all of the below assumes
3 replicas):
Objects:
When swift PUTs an object, it attempts to write to all 3 replicas
and only returns success if 2 or more replicas were written
successfully. When a new object is created, it has a fairly strong
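Roughly, the success rule described above looks like this (a sketch, not
Swift's actual code):

# True when enough replica writes succeeded to report the PUT as a success
def have_quorum(successful_writes, replica_count=3):
    return successful_writes >= replica_count // 2 + 1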
Hi Fabrice,
The design of Swift has always assumed that the backend services are
running on a secured, private network. If this is not going to be the
case, or you would like to provide more security on that network, a
lot more work needs to be done than just rsync. That said, I don't
think it
Howdy,
In general Nginx is really good, and we like it a lot, but it has one
design flaw that causes it to not work well with swift. Nginx spools
all requests, so if you are getting a lot of large (say 5GB) uploads, it
can be problematic. In our testing a while ago, Pound proved to have
the best
Hi Mark,
I just wanted to clarify to the reasoning why we use POST for metadata
modification in Swift. In general I totally agree that PUT/POST
should be used for creation (PUT when you know the identification of
the representation, POST when you do not). And PUT should be used
when modifying
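For example, updating metadata on an existing object looks like this
(hypothetical names; note that an object POST replaces the existing user
metadata rather than merging with it):

curl -X POST -H "X-Auth-Token: $TOKEN" -H "X-Object-Meta-Reviewed: yes" http://127.0.0.1:8080/v1/AUTH_test/photos/img001.jpg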
Hi,
Each container server sqlite db is replicated to 3 of your container
nodes. Container replication (which operates a bit differently than
object replication) ensures that they stay in sync. The container
nodes can be run either on the same nodes as your storage nodes, or on
separate nodes.
Taking a bit of a step back, it seems to me that the biggest thing
that prevents us from using a pure github workflow is the absolute
requirement of a gated trunk. Perhaps a better question to ask is
whether or not this should be an absolute requirement. For me, it is
a nice-to-have, but shouldn't
Hi Caitlin,
Right now the best source of what S3 features are available through
the S3 compatibility layer are here:
http://swift.openstack.org/misc.html#module-swift.common.middleware.swift3
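For reference, enabling it is mostly a matter of adding the middleware to
the proxy pipeline, roughly like this (a proxy-server.conf sketch, assuming
the in-tree swift3 middleware of that era):

[pipeline:main]
pipeline = healthcheck cache swift3 tempauth proxy-server

[filter:swift3]
use = egg:swift#swift3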
--
Chuck
On Fri, Sep 2, 2011 at 2:59 PM, Caitlin Bestler
caitlin.best...@nexenta.com wrote:
Joshua
.
That said, with the prereqs met, both can probably be used to mount a new
volume.
Reasonable?
On Jul 20, 2011, at 5:27 PM, Chuck Thier cth...@gmail.com wrote:
Yeah, I think you are illustrating how this generates much confusion :)
To try to be more specific, the base functionality should
, and leads to
much better grammar...
On Thu, Jul 21, 2011 at 12:19 PM, Chuck Thier cth...@gmail.com wrote:
Hey Andi,
Perhaps it would be better to re-frame the question.
What should the base functionality of the Openstack API for
backup/snapshot functionality be?
I'm looking at it from
I would like to see one-way CHAP support added to Nova Volume. Not a
whole lot more to add, but would be interested in any feedback.
Blueprint: https://blueprints.launchpad.net/nova/+spec/isci-chap
Spec: http://etherpad.openstack.org/iscsi-chap
--
Chuck
are kind of interchangeable. This is quite
confusing, perhaps we should refer to them as:
partial-snapshot
whole-snapshot
or something along those lines that conveys that one is a differencing image
and one is a copy of the entire object?
On Jul 20, 2011, at 12:01 PM, Chuck Thier wrote
volume service.
I believe that this new direction will ensure a bright future for storage
in Nova, and look forward to continuing to work with everyone in making this
possible.
Sincerely,
Chuck Thier (@creiht)
Lunr Team Lead
this mean in terms of APIs? Will there be a separate Volume
API? Will volumes be embedded in the compute API?
-jOrGe W.
On Jul 8, 2011, at 10:40 AM, Chuck Thier wrote:
Openstack Community,
Through the last few months the Lunr team has learned many things. This
week, it has become clear to us
as nova
API extensions, and that still makes sense, but will there be a separate,
independent block service and API?
Erik
From: Chuck Thier cth...@gmail.com
Date: Fri, 8 Jul 2011 13:15:56 -0500
To: Jorge Williams jorge.willi...@rackspace.com
Cc: openstack@lists.launchpad.net openstack
for these pointers (and maybe other results from the design summit,
which alas I missed) ?
a.
On Tue, Jun 14, 2011 at 2:05 PM, Chuck Thier cth...@gmail.com wrote:
Hi Andi,
There was the initial blue print at:
https://blueprints.launchpad.net/nova/+spec/integrate-block-storage
And the notes
Hey Soren,
We've asked similar questions before :)
Ever since the packaging was pulled out of the source tree, we have been
mostly out of the packaging loop. Since then most of the packaging details
have been handled by monty and you.
We build our own packages for production, so we have mostly
Hi Thomas,
The swift-init thing is just a useful tool that we use to manage the
services in dev, and while at one time we had init scripts, our ops guys
just started using the swift-init tool out of convenience.
That said, it should be easy to create other init scripts. The format for
starting
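For reference, swift-init just wraps starting and stopping the individual
daemons, e.g.:

swift-init proxy start
swift-init object-replicator stop
swift-init all restart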
On Mon, May 2, 2011 at 2:45 PM, Eric Windisch e...@cloudscaling.com wrote:
On May 2, 2011, at 12:50 PM, FUJITA Tomonori wrote:
Hello,
Chuck told me at the conference that lunr team are still working on
the reference iSCSI target driver design and a possible design might
exploit
We have no current plans to make an iSCSI target for swift. Not only would
there be performance issues, but also consistency issues among other things.
For Lunr, swift will only be a target for backups from block devices.
I think some of this confusion stems from the confusion around snapshots,
service. It is also undecided if this should be a publicly available
api, or just used by backend services.
The exports endpoint is the biggest change that we are proposing, so
we would like to solicit feedback on this idea.
--
Chuck Thier (@creiht
be at separate
endpoints?
In other words am I creating a volume with a PUT /
provider.com/high-perf-volumes/account/volumes/
or just a /provider.com/account/volumes/ and an X-High-Perf header?
Vish
On Apr 22, 2011, at 2:40 PM, Chuck Thier wrote:
One of the first steps needed to help decouple volumes
to the Openstack community?
Are we able to discuss this at next week's design summit?
We are working on getting the following done before the design summit
next week:
* Choose a project name - DONE (Lunr)
* Identify a project lead - DONE (Chuck Thier @creiht)
* Set up a project
installation would be to follow the
all in one instructions (http://swift.openstack.org/development_saio.html).
This will allow you to also run the suite of functional tests.
--
Chuck
On Tue, Apr 5, 2011 at 4:18 AM, Thomas Goirand z...@debian.org wrote:
On 04/05/2011 05:27 AM, Chuck Thier wrote
Hi Jon,
I'm not familiar with the C# bindings, but the fact that you can do
some operations sounds promising. A 503 return code from the server means
that something wrong happened server side, so you might check the server
logs to see if they provide any useful information. Another useful test
I also worked on swift. Can you have a look? I'm not so sure what I did
is fully correct yet, because I didn't succeed in running everything
fully. It seems that swift doesn't like using device-mapper as
partitions, is that correct? Which leads me to reinstall my test server
from
Now that I have had a chance to look into this a little more, I realize
that I had missed the connection of talking about the volume functionality
missing in the Openstack Rest API (versus the EC2 api that was already
there).
Justin: Sorry I'm a bit late to the game on this, but is there a
as an extension short term seems like a good
idea. Whatever it takes to get something in there, so we can get our
hands dirty and experiment some more.
Adam
On Wed, Mar 23, 2011 at 3:53 AM, Chuck Thier cth...@gmail.com wrote:
Hi Adam,
We have just begun an R&D effort for building a scalable block
The problem with this logic is that you are optimizing wrong. In a token
based auth system, the tokens are valid generally for a period of time (24
hours normally with Rackspace auth), and it is a best practice to cache
this. Saying that you are reducing HTTP requests for 1 request that has to
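A minimal sketch of the client-side caching being described (a hypothetical
helper, not any particular auth library):

import time

class CachedToken:
    """Re-auth only when the cached token is close to expiring."""
    def __init__(self, fetch_token, ttl=23 * 60 * 60):
        self._fetch = fetch_token   # callable that performs the actual auth request
        self._ttl = ttl             # refresh a bit before the 24 hour expiry
        self._token = None
        self._expires = 0.0

    def get(self):
        if self._token is None or time.time() >= self._expires:
            self._token = self._fetch()
            self._expires = time.time() + self._ttl
        return self._token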