> DELETE FROM table WHERE id = 1
>
>
> This one would not decrement if the record was already deleted...
>
> I know this adds a bit of overhead on the database, but it is far simpler
> than changing the callback ordering...
>
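The proposal above can be sketched in plain Ruby; delete_record below is a stand-in for issuing the DELETE and reading the affected row count (all names are hypothetical):

```ruby
# Toy sketch of the proposal: decrement the counter cache only when the
# DELETE actually removed a row. delete_record stands in for executing
# "DELETE FROM table WHERE id = ?" and returning the affected row count.
ROWS = { "comments" => [1, 2] }

def delete_record(table, id)
  ROWS[table].delete(id) ? 1 : 0   # 0 affected rows if already deleted
end

def destroy_with_counter(table, id, counter)
  affected = delete_record(table, id)
  counter -= 1 if affected == 1    # skip the decrement on a no-op delete
  counter
end

counter = 2
counter = destroy_with_counter("comments", 1, counter)  # row deleted: 1
counter = destroy_with_counter("comments", 1, counter)  # already gone: still 1
```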
> On Tue, Oct 16, 2012 at 7:15 PM, Brian Durand
UTC-7, Michael Koziarski wrote:
>
> I don't think it's reasonable to force pessimistic locks on every single
> destroy call; my point is more that in your case it's a workaround.
>
> --
> Cheers,
>
> Koz
>
> On Wednesday, 17 October 2012 at 7:06 AM,
that don't support row locking, but if you're
using such a database you'd likely have other issues in a high-concurrency
situation like the one needed to produce this issue.
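The row-lock workaround being discussed can be modeled in plain Ruby; a Mutex stands in for the database row lock (a toy, not ActiveRecord's locking API):

```ruby
# Two threads race to "destroy" the same record. The Mutex plays the
# role of a pessimistic row lock, so only the thread whose delete
# actually succeeded decrements the counter cache.
records = { 1 => true }
counter = 2                      # parent's counter cache value
lock = Mutex.new

destroy = lambda do |id|
  lock.synchronize do            # stands in for SELECT ... FOR UPDATE
    counter -= 1 if records.delete(id)   # only the first delete wins
  end
end

threads = 2.times.map { Thread.new { destroy.call(1) } }
threads.each(&:join)
# counter ends at 1, not 0: the losing thread skipped the decrement
```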
On Saturday, October 13, 2012 10:47:15 AM UTC-7, Brian Durand wrote:
>
> I've put a stand-alone script
e a bug. If we can reproduce and
> attach that to an issue it could help the discussion.
>
> --
> Richard Schneeman
> http://heroku.com
> @schneems <http://twitter.com/schneems>
>
> On Friday, October 12, 2012 at 7:53 AM, Brian Durand wrote:
>
> I've
I've been looking into a consistency problem with association counter caches
where the counter cache value in the database is not consistent with the
actual number of records in the association. What I've found is that it comes
from a concurrency issue where two processes try to destroy the same record
I opened a pull request on my changes:
https://github.com/rails/rails/pull/7800.
I updated the code a bit and fixed the tests so it is slightly different
than the branch referenced above. It should now be backward compatible with
older cache entries as well.
As for the Array idea, that would c
nt someone from implementing an optimized memcache backend if they
have special needs for performance.
On Saturday, September 22, 2012 1:55:54 PM UTC-7, Michael Koziarski wrote:
>
> On Sunday, 23 September 2012 at 12:32 AM, Xavier Noria wrote:
>
> On Fri, Sep 21, 2012 at 8
The Entry model does provide one other feature which I think also warrants
keeping it around and which is even potentially more valuable than the
race_condition_ttl. It allows the cache to store nil values.
If you have a statement like this (the finder call is illustrative):
record = Rails.cache.fetch("my_cache_key") {
  Model.find_by_id(id)  # may legitimately return nil
}
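The reason nil support matters can be shown with a toy cache (not the ActiveSupport implementation): wrapping values in an entry object distinguishes a cached nil from a miss.

```ruby
# Without entry wrapping, a stored nil looks like a miss and the
# expensive block re-runs on every fetch. Wrapping fixes that.
Entry = Struct.new(:value)

class ToyCache
  def initialize
    @store = {}
  end

  def fetch(key)
    entry = @store[key]
    return entry.value if entry    # hit, even when the cached value is nil
    value = yield                  # miss: run the block and cache the result
    @store[key] = Entry.new(value)
    value
  end
end

calls = 0
cache = ToyCache.new
2.times { cache.fetch("missing_record") { calls += 1; nil } }
# calls == 1: the second fetch hit the cached nil instead of re-querying
```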
I've also just found a use case where I'd like to unsubscribe the default
subscribers. I'd like to turn on INFO level logging for my application to
get some better visibility into the running production application.
However, the application gets a ton of traffic (hundreds of requests/second)
and w
I Googled around quite a bit looking for information and could only
find some offhand references from April that maybe moving to GitHub
would be a good idea. Do you think someone could add a post to the
Rails blog explaining the move so it can be publicized a little more?
So what's the plan for the tickets that are in Lighthouse? Is there
any plan to migrate them to GitHub or will they have to be re-created
manually?
--
You received this message because you are subscribed to the Google Groups "Ruby
on Rails: Core" group.
To post to this group, send email to rubyo
I have submitted a patch that fixes an issue with deserializing errors
returned from the server for ActiveResource records. The code for
deserializing looks for the humanized name in the error messages to
determine which errors belong to which attribute. However, when
attributes start with the same
I've created a patch to add a new session store that is backed by
ActiveSupport::Cache::Store. It has the same functionality as the
existing MemCacheStore except it is far more flexible in that it can
be backed by any ActiveSupport::Cache::Store (e.g. DalliStore,
RedisStore, MongoStore, etc).
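Later Rails versions ship a session store with this shape (ActionDispatch::Session::CacheStore). A minimal sketch of enabling it, where the cookie name and TTL are example values:

```ruby
# config/initializers/session_store.rb
# Sketch: route session storage through Rails.cache, so any
# ActiveSupport::Cache::Store backend (Dalli, Redis, ...) can hold it.
Rails.application.config.session_store :cache_store,
  :key          => "_myapp_session",   # cookie name; example value
  :expire_after => 2.weeks             # optional session TTL
```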
Ligh
> > I would advocate
> > that Rails should use something like the lumberjack gem (http://
> > github.com/bdurand/lumberjack) in place of BufferedLogger. This sort
> > of architecture would provide Rails with a standard, supported logging
> > interface, and allow for better log formats and a simpler
> I was under the possibly mistaken impression that with the exception
> of our log initialization code, we only relied on the public API for
> the Logger class from ruby's stdlib. I've used both Logger.new and
> syslog logger without any issues; what did you have to monkeypatch?
> That definitel
> The short version is that logging showed up in hello world benchmarks
> and as a result some optimisation, of questionable real world utility,
> took place. Buffering up a bunch of writes into a single write does
> make a difference though and it seems mostly harmless.
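The buffering trade-off above can be sketched in plain Ruby (a toy, not BufferedLogger's actual code):

```ruby
# Collect log lines and emit them with a single write call per flush,
# which is the optimisation being discussed: fewer, larger writes.
require "stringio"

class BufferedLog
  def initialize(io, flush_every)
    @io = io
    @buffer = []
    @flush_every = flush_every
  end

  def add(line)
    @buffer << line
    flush if @buffer.size >= @flush_every   # auto-flush once full
  end

  def flush
    return if @buffer.empty?
    @io.write(@buffer.join("\n") + "\n")    # one write for many lines
    @buffer.clear
  end
end

out = StringIO.new
log = BufferedLog.new(out, 3)
4.times { |i| log.add("line #{i}") }
log.flush
# out.string now holds all four lines, written in just two flushes
```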
I've been working quite e
I have added three new Lighthouse tickets.
1. Fix ActiveSupport::Cache::FileStore#cleanup to not be completely
broken (https://rails.lighthouseapp.com/projects/8994/tickets/6308)
2. Update Rails Guide to accurately describe the cache store changes
in 3.0 (https://rails.lighthouseapp.com/projects/
I've submitted a couple of patches in Lighthouse.
Provide NoStore implementation of ActiveSupport::Cache::Store
This patch provides a NoStore implementation of
ActiveSupport::Cache::Store suitable for use in development and test
environments where the code needs to use the cache interface, but
a
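A store with this behavior exists in later Rails as ActiveSupport::Cache::NullStore; a minimal sketch of wiring it up in a test environment:

```ruby
# config/environments/test.rb
# NullStore accepts the full cache API but never retains anything,
# which is exactly what development and test environments want.
config.cache_store = :null_store
```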
I found a couple of incompatibilities between Arel and ActiveRecord
where ActiveRecord handles a database structure but Arel doesn't so
that calling MyModel.first works fine, but MyModel.find does not.
The first problem is with non-standard SQL column types like PostGIS
geometry columns on a table
In Rails 3 the only way to get at content captured with the
content_for method in a view is to call yield from within a view.
However, sometimes it is useful to get to this data from within a
helper (for instance to provide a default value). Calling yield from
within the helper method won't work,
I updated the patch to remove the deprecation of
ActiveSupport::Cache.expand_cache_key. This method still uses some
Rails-specific variables, which really isn't ideal. Otherwise the only
deprecations are SynchronizedMemoryStore, CompressedMemCacheStore,
and :expires_in on FileStore#read. The two cla
> This I'm not so sold on, expires in is a memcached implementation
> specific feature and adding it to all the other cache stores simply
> seems to add overhead for very little gain. No one is seriously going
> to be using MemoryStore or FileStore in production and wanting to use
> :expires_in.
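Whatever the store, the per-entry :expires_in semantics are simple to model; a toy plain-Ruby sketch (illustrative only, not the ActiveSupport implementation):

```ruby
# Each write records an optional deadline; a read past the deadline
# behaves exactly like a miss.
class ToyStore
  def initialize
    @data = {}
  end

  def write(key, value, expires_in = nil, now = Time.now)
    deadline = expires_in ? now + expires_in : nil
    @data[key] = [value, deadline]
  end

  def read(key, now = Time.now)
    value, deadline = @data[key]
    return nil if deadline && now >= deadline   # expired entry: miss
    value
  end
end

store = ToyStore.new
t0 = Time.now
store.write("stats", 42, 300, t0)        # expires in 300 seconds
fresh   = store.read("stats", t0 + 60)   # still live
expired = store.read("stats", t0 + 600)  # past the deadline
```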
Lighthouse ticket:
https://rails.lighthouseapp.com/projects/8994-ruby-on-rails/tickets/4452
I have recently been working on some gems that utilize
ActiveSupport::Cache and ran into some issues with the different
implementations handling the same functionality differently. One of
the issues was th
ActiveRecord has a bug and a couple of inconsistencies when rolling
back multistatement transactions. There is a patch to fix the issues
at
https://rails.lighthouseapp.com/projects/8994-ruby-on-rails/tickets/2991-after-transaction-patch
I don't think the implementation of core functionality should be based
on how MySQL handles something.
:limit, :offset, and :order are only optional parameters that I'm
exposing in case the developer needs them. If you want to iterate over
the entire data set you can certainly do so and these opti
If you need to process a million records, you may want to process them
in a specific order, like most recently updated first, since those may be
the most important and it may take a while to finish the batch.
Processing them in primary key order may well mean that the most
important records are proce
I created a patch to allow :order clauses, discussed in this topic:
http://groups.google.com/group/rubyonrails-core/browse_thread/thread/66df9e4241fdc0a7
Created a patch which allows find_batches to use :order, :limit,
and :offset for better control over batch processing. Also supports
models which don't use integer primary keys.
http://rails.lighthouseapp.com/projects/8994-ruby-on-rails/tickets/2137-allow-find_batches-to-use-order-limit-and-offset#
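The batching semantics described in the ticket can be sketched over an in-memory array (a stand-in, not the ActiveRecord implementation; names are illustrative):

```ruby
# Iterate over records in batches while honoring order, limit and
# offset, the way the patched find_in_batches is described to.
def each_batch(records, batch_size, order = :asc, limit = nil, offset = 0)
  sorted = records.sort
  sorted.reverse! if order == :desc        # e.g. most recently updated first
  sliced = sorted.drop(offset)             # skip the first `offset` records
  sliced = sliced.take(limit) if limit     # stop after `limit` records
  sliced.each_slice(batch_size) { |batch| yield batch }
end

batches = []
each_batch((1..10).to_a, 3, :desc, 7, 1) { |b| batches << b }
# batches == [[9, 8, 7], [6, 5, 4], [3]]
```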