Github user nickva commented on the issue:
https://github.com/apache/couchdb-snappy/pull/9
To check for memory leaks, I ran the snappy compression/decompression test in
a loop:
```
rebar shell
==> couchdb-snappy (shell)
Erlang/OTP 20 [erts-9.3.3.1] [source] [64-
```
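The same kind of loop check can be sketched in Python; this is an illustration, not the actual Erlang test. `zlib` stands in for snappy (only because it ships with the standard library), and `tracemalloc` reports the net allocation delta:

```python
import tracemalloc
import zlib

def roundtrip(data: bytes) -> bytes:
    # Stand-in for a snappy compress + decompress roundtrip; zlib is
    # used only because it is in the Python standard library.
    return zlib.decompress(zlib.compress(data))

def leak_check(iterations: int = 10_000) -> int:
    """Run the roundtrip in a loop; return net bytes still allocated."""
    payload = b"some compressible payload " * 64
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        assert roundtrip(payload) == payload
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before
```

On a leaking implementation the returned delta keeps growing as `iterations` increases; a flat delta suggests no leak, at least none visible to the Python allocator.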
Github user nickva commented on the issue:
https://github.com/apache/couchdb-ibrowse/pull/1
Noticed there was another ibrowse commit after that which fixed an issue
related to IPv6 being the new default:
https://github.com/cmullaparthi/ibrowse/commit
Github user nickva commented on the issue:
https://github.com/apache/couchdb-snappy/pull/7
Yap, merged it via ASF. Still not sure why the build failed. I had tried
compiling with rebar3 locally and that worked as well. A different C compiler
or libc version... who knows...
---
Github user nickva closed the pull request at:
https://github.com/apache/couchdb-snappy/pull/8
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-snappy/pull/7
Yap, all Travis builds apparently fail now. Here is one for a change to the
README file:
https://travis-ci.org/apache/couchdb-snappy/builds/395127559?utm_source=github_status&utm_medium=notification
---
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-snappy/pull/8
Test commit
TEST DON'T MERGE
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/cloudant/couchdb-snappy test-branch
Alternatively yo
Github user nickva commented on the issue:
https://github.com/apache/couchdb-snappy/pull/7
Trying to use rebar3 maybe and it can't build the nif properly?
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-snappy/pull/7
Not sure why Travis doesn't work, but local tests do pass with 21:
```
make check
rebar compile
==> snappy (compile)
rebar eunit
==> snappy (eunit)
C
```
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-snappy/pull/7
Build with Erlang 21
Issue #1396
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/cloudant/couchdb-snappy allow-erlang-21
Alternatively you
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-config/pull/18
Use callback directive for config_listener behaviour
This knocks out a few dialyzer errors such as:
`Callback info about the config_listener behaviour is not available`
It is
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-khash/pull/9
Handle deprecated random module
Use a compile-time check for platform versions, then a macro conditional in a
separate rand module.
Removed redundant beam file compile rule from
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-khash/pull/8
Fix iterator expiry test
This test started failing with the Erlang 20.0 release. The reason is that
opaque NIF resources stopped being identified as empty binaries in Erlang, so
previously
Github user nickva commented on the issue:
https://github.com/apache/couchdb-khash/pull/7
Will do! Good idea. I just noticed it was wonky
---
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-khash/pull/7
Replace deprecated random module
Replaced with crypto:rand_uniform functions.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/apache/couchdb
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-config/pull/16
Add longer timeouts for operations which could write to disk
It turns out that 5 seconds is often not enough in a severely throttled test
environment, and simple operations like config:set
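The underlying idea (give disk-touching calls a generous timeout rather than the 5-second default) can be sketched like this; the helper name and the 30-second value are illustrative, not from the PR:

```python
import concurrent.futures

# Hypothetical value: generous enough for a severely throttled
# environment; the old 5-second default was the problem described above.
DISK_OP_TIMEOUT = 30.0

def call_with_timeout(fn, *args, timeout=DISK_OP_TIMEOUT):
    """Run fn in a worker thread and raise TimeoutError if it overruns.

    Models the PR's idea: operations that may write to disk (like
    config:set persisting to an .ini file) get a longer timeout so slow
    test environments do not produce spurious failures.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(fn, *args).result(timeout=timeout)
```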
Github user nickva commented on the issue:
https://github.com/apache/couchdb-ets-lru/pull/5
+1
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/236
Closing, moved to monorepo PR https://github.com/apache/couchdb/pull/539
---
Github user nickva closed the pull request at:
https://github.com/apache/couchdb-couch/pull/236
---
Github user nickva closed the pull request at:
https://github.com/apache/couchdb-couch-replicator/pull/64
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch-replicator/pull/64
Another, cleaned up version of this PR was issued after the monorepo merge.
https://github.com/apache/couchdb/pull/470
Keep this one open for a bit longer just in
Github user nickva commented on the issue:
https://github.com/apache/couchdb-chttpd/pull/158
Closing this PR. Another one was issued after the monorepo merge.
---
Github user nickva closed the pull request at:
https://github.com/apache/couchdb-chttpd/pull/158
---
Github user nickva closed the pull request at:
https://github.com/apache/couchdb-couch/pull/238
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/238
Closing this one. A new PR was issued after the monorepo merge
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch-mrview/pull/73
+1
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/239
I ended up not using `update_counter` because update_counter/4 is not
available in R16B, which CouchDB still supports. I tried simply replacing
update_element with update_counter/3, but that
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch-mrview/pull/69
Allow limiting maximum document body size
This is a companion commit to this one:
https://github.com/apache/couchdb-couch/pull/235
COUCHDB-2992
You can merge this pull
Github user nickva commented on a diff in the pull request:
https://github.com/apache/couchdb-couch/pull/239#discussion_r106270691
--- Diff: src/couch_lru.erl ---
@@ -16,48 +16,57 @@
-include_lib("couch/include/couch_db.hrl").
new() ->
-{g
Github user nickva commented on a diff in the pull request:
https://github.com/apache/couchdb-couch/pull/239#discussion_r106269371
--- Diff: src/couch_lru.erl ---
@@ -16,48 +16,57 @@
-include_lib("couch/include/couch_db.hrl").
new() ->
-{g
Github user nickva commented on a diff in the pull request:
https://github.com/apache/couchdb-couch/pull/239#discussion_r106262038
--- Diff: src/couch_lru.erl ---
@@ -16,48 +16,57 @@
-include_lib("couch/include/couch_db.hrl").
new() ->
-{g
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/239
@davisp @eiri
I tried a 5-minute eprof run at the default 500 max_dbs_open. The cluster had
20 continuous 1-to-n replications.
```
master + ETS lru (this PR
```
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch/pull/239
An ETS based couch_lru
The interface is the same as the previous couch_lru.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/cloudant/couchdb
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/237
Tried it locally. Looks good.
+1 after tests are fixed.
We'd want to performance-test this at some point.
---
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-chttpd/pull/158
63012 scheduler
This is part of a set of PRs to merge the new scheduling replicator
Main PR is this: apache/couchdb-couch-replicator#64
Top level PR to gather and help test
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch/pull/238
Add _replication_start_time to the doc field validation section.
This is part of a set of PRs to merge the new scheduling replicator
Main PR is this: https://github.com/apache/couchdb
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch-replicator/pull/64
63012 scheduler
Pull request to merge scheduling replicator work to ASF master.
This repository has most of the changes. The feature overall consists of
updates to 3
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/236
And it would be non-portable (on Windows at least)
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/236
@davisp Python has a way to check file descriptor limits; I am not sure
Erlang has it:
```
In [9]: resource.getrlimit(resource.RLIMIT_NOFILE)
Out[9]: (65536, 65536)
```
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/236
Updated to use the hibernate `wakeup` trick and to drop the silly timer as
well
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/236
@eiri the eviction policy would be used when we are bumping against the
limit. This could happen under a minute time-frame. A minute could be for
example 1 hour (maybe it should be?). Then if
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/236
@eiri the LRU would be used to remove entries and make room immediately if we
are bumping against the limit; specifically, it would remove non-sys-db
shards first.
The idle-based removal is to clean
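The eviction preference described above (make room immediately, non-sys shards first) could be sketched as follows; class and method names are hypothetical, and this is not the actual couch_lru:

```python
from collections import OrderedDict

class ShardLRU:
    """Toy LRU over open shards that evicts non-sys shards first."""

    def __init__(self, max_open: int):
        self.max_open = max_open
        self.entries = OrderedDict()  # shard name -> is_sys_db, oldest first

    def touch(self, name: str, is_sys_db: bool = False):
        """Mark a shard as recently used, evicting if over the limit."""
        if name in self.entries:
            self.entries.move_to_end(name)
        else:
            self.entries[name] = is_sys_db
        while len(self.entries) > self.max_open:
            self._evict()

    def _evict(self):
        # Prefer the least recently used non-sys shard; if only sys
        # shards remain, fall back to the oldest entry overall.
        victim = next((n for n, sys in self.entries.items() if not sys), None)
        if victim is None:
            self.entries.popitem(last=False)
        else:
            del self.entries[victim]
```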
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/236
The hibernate thing can be simplified then. I'll give that a try.
On _users I see it with a few replications running, and I see replication
sources and the _users shard appear in the ets
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/236
Ah, makes sense; we don't do that on delete, so I was wondering about that.
You're right, I will update the code.
Also, what do you think of not using hibernate and instead just a gc
Github user nickva commented on a diff in the pull request:
https://github.com/apache/couchdb-couch/pull/236#discussion_r105918717
--- Diff: src/couch_db_updater.erl ---
@@ -1454,3 +1468,44 @@ default_security_object(_DbName) ->
"
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/235
@sagelywizard it should be a bit better now, but I ended up with another PR
for fabric as well.
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/235
Related PRs:
* https://github.com/apache/couchdb-fabric/pull/91
* https://github.com/apache/couchdb-chttpd/pull/157
---
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-fabric/pull/91
Allow limiting maximum document body size
Update doc function to check and validate document body sizes
Main implementation is in PR:
https://github.com/apache/couchdb-couch
Github user nickva commented on the issue:
https://github.com/apache/couchdb-fabric/pull/91
Related to: https://github.com/apache/couchdb-couch/pull/235
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/235
@sagelywizard I added a validating version of the function. The other code
like cleanup_index_files can still call the old function and not crash on size
limit change.
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/236
I updated the code to both bump down the open dbs count on close and remove
entries from the LRU. The last bit was shamelessly stolen from Eric.
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/236
@eiri Nice find, thank you. Yap, I see that when we delete a non-sys_db we
forget to handle the LRU properly.
---
Github user nickva commented on a diff in the pull request:
https://github.com/apache/couchdb-couch/pull/236#discussion_r105822402
--- Diff: src/couch_server.erl ---
@@ -173,6 +174,15 @@ hash_admin_passwords(Persist) ->
config:set("admins"
Github user nickva commented on a diff in the pull request:
https://github.com/apache/couchdb-couch/pull/236#discussion_r105822026
--- Diff: src/couch_db_updater.erl ---
@@ -1454,3 +1468,28 @@ default_security_object(_DbName) ->
"
Github user nickva commented on a diff in the pull request:
https://github.com/apache/couchdb-couch/pull/235#discussion_r105816177
--- Diff: src/couch_doc.erl ---
@@ -125,7 +125,14 @@
doc_to_json_obj(#doc{id=Id,deleted=Del,body=Body,revs={Start, RevIds
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/236
Since the first (dirty) is_idle check was moved to `couch_db_updater`, I
wonder if it is even worth doing this additional read in couch_server:
```
close_db_if_idle(DbName) ->
c
```
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/236
@davisp @sagelywizard
Used the process dictionary to fix two issues:
1) Lack of configuration. Configuration is now read from
`couchdb.idle_check_timeout` with a default of 6
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/236
@sagelywizard we are doing a check first to see if the shard is idle, to
hopefully avoid backing up couch_server with messages.
Also, because this will close idle shards, couch_server should
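The check-first pattern (a cheap, possibly stale idle read before bothering the serialized server) might look like this toy model; it is an illustration, not couch_server's actual code:

```python
import threading

class ToyCouchServer:
    """Toy model: callers read a cheap, possibly stale idleness flag
    before sending anything, and the server re-checks authoritatively,
    so busy shards never flood it with close requests."""

    def __init__(self):
        self.lock = threading.Lock()
        self.ref_counts = {}  # shard name -> number of active clients

    def is_idle_dirty(self, name: str) -> bool:
        # Unsynchronized read: may be stale, used only to skip messages.
        return self.ref_counts.get(name, 0) == 0

    def close_if_idle(self, name: str) -> bool:
        if not self.is_idle_dirty(name):
            return False  # fast path: no message sent to the server
        with self.lock:  # authoritative re-check under the server's lock
            if self.ref_counts.get(name, 0) == 0:
                self.ref_counts.pop(name, None)
                return True
            return False
```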
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch/pull/236
Close idle dbs
Previously idle dbs, especially sys_dbs like _replicator shards, once opened
for scanning would stay open forever. In a large cluster with many
_replicator shards
Github user nickva commented on the issue:
https://github.com/apache/couchdb-chttpd/pull/157
Tests are failing because it needs changes from
https://github.com/apache/couchdb-couch/pull/235
---
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-chttpd/pull/157
Allow limiting maximum document body size
This is the HTTP layer and some tests. The actual checking is done in couch
application's from_json_obj/1 function.
If a document i
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch/pull/235
Allow limiting maximum document body size
Configuration is via `couchdb.max_document_size`. In the past that
was implemented as a maximum HTTP request body size, and this finally
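A sketch of the distinction: checking the size of the encoded document body itself rather than the whole HTTP request. The function, exception name, and default limit below are assumptions for illustration:

```python
import json

MAX_DOCUMENT_SIZE = 8_000_000  # hypothetical couchdb.max_document_size value

class DocumentTooLarge(Exception):
    pass

def validate_doc_size(doc: dict, limit: int = MAX_DOCUMENT_SIZE) -> dict:
    """Reject a document whose encoded body exceeds the configured limit.

    The size is measured on the encoded document body, not on the whole
    HTTP request, which is what the old behaviour effectively checked.
    """
    encoded = json.dumps(doc).encode("utf-8")
    if len(encoded) > limit:
        raise DocumentTooLarge(
            f"document is {len(encoded)} bytes, limit is {limit}")
    return doc
```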
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch-replicator/pull/63
Prevent change feeds from being stuck
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/cloudant/couchdb-couch-replicator
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch-replicator/pull/62
+1
Tested it; it seems to work well.
The log shows:
```
ignoring empty shards/8000-9fff/blah/_replicator.1489084820
```
---
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-documentation/pull/105
Add documentation for the new `max_http_request_size` parameter
COUCHDB-2992
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
Github user nickva commented on the issue:
https://github.com/apache/couchdb-documentation/pull/91
+1
Thanks, Robert
I will rebase and merge. I had to indent the shell line a bit so it renders
appropriately.
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-documentation/pull/104
+1
Thank you, @flimzy !
I'll rebase on latest master and merge.
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-documentation/pull/98
+1
Thank you, @Lars
I'll rebase on master, remove an extra empty line to keep the linter happy,
and then merge your change
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-documentation/pull/95
+1 Thank you for your contribution!
I'll rebase on latest master, fix the tests and merge it
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-documentation/pull/102
+1
Thank you for your contribution.
I'll rebase and merge. `make check` fails because some lines are longer than
80 chars. I'll fix that du
Github user nickva commented on the issue:
https://github.com/apache/couchdb-documentation/pull/99
+1
Very nice! Much appreciated
I'll merge it
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-documentation/pull/101
+1
Thank you, @satabin
---
Github user nickva commented on a diff in the pull request:
https://github.com/apache/couchdb-chttpd/pull/156#discussion_r104803991
--- Diff: src/chttpd.erl ---
@@ -630,7 +630,7 @@ body(#httpd{mochi_req=MochiReq, req_body=ReqBody}) ->
undefi
Github user nickva commented on a diff in the pull request:
https://github.com/apache/couchdb-chttpd/pull/156#discussion_r104803095
--- Diff: src/chttpd.erl ---
@@ -630,7 +630,7 @@ body(#httpd{mochi_req=MochiReq, req_body=ReqBody}) ->
undefi
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/233
Related PRs:
* https://github.com/apache/couchdb-couch-replicator/pull/61
* https://github.com/apache/couchdb-chttpd/pull/156
---
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch-replicator/pull/61
Fix unit test after renaming max_document_size config parameter
`couchdb.max_document_size` was renamed to `httpd.max_http_request_size`.
The unit test was testing how
Github user nickva commented on the issue:
https://github.com/apache/couchdb-chttpd/pull/156
Related PR for couch: https://github.com/apache/couchdb-couch/pull/233
---
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch/pull/233
Rename max_document_size to max_http_request_size
`max_document_size` is implemented as `max_http_request_size`. There was no
real check for document size. In some cases the implementation
Github user nickva commented on the issue:
https://github.com/apache/couchdb-chttpd/pull/156
I think for simplicity I'll try `httpd`, since the setting is used by both
the clustered and un-clustered interfaces, and have 2 more PRs for couch and
the replicator (it is used in a test
Github user nickva commented on the issue:
https://github.com/apache/couchdb-chttpd/pull/156
Wonder if we should use the httpd config section, since
`max_http_request_size` relates more to the HTTP layer than to db core.
---
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch-replicator/pull/60
Remove unused mp_parse_doc function from replicator
It was left accidentally when merging Cloudant's dbcore work.
COUCHDB-2992
You can merge this pull request into
Github user nickva commented on the issue:
https://github.com/apache/couchdb-chttpd/pull/156
Hmm I don't see where we are using this additional mp_parse_doc function
from replicator:
https://github.com/apache/couchdb-couch-replicator/blob/maste
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch/pull/231
Fix `badarith` error in couch_db:get_db_info call
When folding we account for a previous `null`, `undefined`, or a number.
However, btree:size/1 returns 0, `nil`, or a number
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch-replicator/pull/59
Make sure to log db as well as doc in replicator logs.
COUCHDB-3316
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch-replicator/pull/58
+1
---
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch-replicator/pull/57
Restore adding some jittered sleep to the shard scanning code.
Otherwise a large cluster will flood the replicator manager with potentially
hundreds of thousands of `{resume, Shard
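Jittered resume can be sketched as follows; the function name, delay values, and message shape are illustrative stand-ins for the Erlang code:

```python
import random
import time

def jittered_resume(shards, base_delay=0.5, jitter=0.5):
    """Yield resume messages with a randomized delay before each one.

    Spreading the messages over time means hundreds of thousands of
    shards do not hit the replicator manager in the same instant.
    """
    for shard in shards:
        time.sleep(base_delay + random.uniform(0.0, jitter))
        yield ("resume", shard)
```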
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/230
It seems functionally there is a difference between the current code path
and the new one:
If we don't add callbacks in case the db is found, we don't update `Options`
to inclu
Github user nickva commented on the issue:
https://github.com/apache/couchdb-fabric/pull/89
Can now PUT approximately a 1Gb attachment. Could never do that before:
```
./attach_large.py --size=10 --mintime=120
...
> sent data in 0.000 sec res: N
```
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-fabric/pull/89
Prevent attachment upload from timing out during update_docs fabric call
Currently if an attachment was large enough or the connection was slow
enough
such that it took more than
Github user nickva commented on the issue:
https://github.com/apache/couchdb-fabric/pull/88
_bulk_get and open_revs=all work
+1
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-chttpd/pull/155
Note: I couldn't reproduce this error locally. However, I have seen a similar
failure when working on a couch PR, resulting from config being in the path
of basic doc operations. The fi
Github user nickva commented on the issue:
https://github.com/apache/couchdb-chttpd/pull/155
+1
```
chttpd_error_info_tests: error_info_test (module
'chttpd_error_info_tests')...[0.010 s] ok
===
All
```
Github user nickva commented on the issue:
https://github.com/apache/couchdb-documentation/pull/100
+1
Thank you!
I'll merge the commit and update master
---
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch-replicator/pull/56
Use string formatting to shorten document ID during logging.
Previously an explicit lists:sublist call was used, but the value was never
used anywhere besides the log message
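The change amounts to truncating only at the formatting step. A Python analogue, with all names and the length limit hypothetical:

```python
MAX_LOG_ID_LENGTH = 30  # hypothetical display limit for document IDs

def fmt_doc_id(doc_id: str, limit: int = MAX_LOG_ID_LENGTH) -> str:
    """Truncate a document ID only when formatting it, so no shortened
    copy is materialized except inside the log message itself."""
    if len(doc_id) <= limit:
        return doc_id
    return doc_id[:limit] + "..."

def log_line(db: str, doc_id: str) -> str:
    return f"Replication error on {db} doc {fmt_doc_id(doc_id)}"
```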
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/226
@rnewson @kxepal PR to use infinity for the replicator as well:
https://github.com/apache/couchdb-couch-replicator/pull/55
---
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch-replicator/pull/55
Switch replicator max_document_id_length config to use infinity
Default value switched to be `infinity` instead of 0
COUCHDB-3291
You can merge this pull request into a Git
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/226
Will do. Maybe we should even have it as a config function
get_integer_or_infinity? But that's for another PR...
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch/pull/226
Before setting the limit (python code):
```
d1 = {"_id":"2"*21}
d.update([d1])
>>> [(True, u'2'
```
GitHub user nickva opened a pull request:
https://github.com/apache/couchdb-couch/pull/226
Allow limiting length of document ID
Previously it was not possible to define a maximum document ID size. That
meant a large document ID would hit various limitations and corner cases. For
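A minimal sketch of such a check, with an `infinity` default as in the related replicator config PR; the function and error names are made up for illustration:

```python
INFINITY = float("inf")  # mirrors the `infinity` default in the config

def check_doc_id(doc_id: str, max_length=INFINITY) -> str:
    """Reject document IDs longer than the configured maximum length."""
    if len(doc_id) > max_length:
        raise ValueError(
            f"document id too long: {len(doc_id)} > {max_length}")
    return doc_id
```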
Github user nickva commented on the issue:
https://github.com/apache/couchdb-couch-replicator/pull/54
Thanks @iilyak !
---
Github user nickva commented on the issue:
https://github.com/apache/couchdb-fabric/pull/86
@tonysun83 Yap, it will still end up retrying in open_revs. It is not ideal,
but for now we can keep it like that.
---