[jira] [Updated] (MESOS-5999) Re-evaluate libevent SSL socket EOF semantics in libprocess

2016-08-05 Thread Greg Mann (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Mann updated MESOS-5999:
-
Summary: Re-evaluate libevent SSL socket EOF semantics in libprocess  (was: 
Re-evaluate socket EOF semantics in libprocess)

> Re-evaluate libevent SSL socket EOF semantics in libprocess
> ---
>
> Key: MESOS-5999
> URL: https://issues.apache.org/jira/browse/MESOS-5999
> Project: Mesos
>  Issue Type: Bug
>  Components: libprocess
>Reporter: Greg Mann
>  Labels: mesosphere
>
> While debugging some issues related to libprocess 
> finalization/reinitialization, [~bmahler] pointed out that libprocess doesn't 
> strictly adhere to the expected behavior of Unix sockets after an EOF is 
> received. If a socket receives EOF, this means only that the writer on the 
> other end has closed the write end of its socket. However, the other end may 
> still be interested in reading. Libprocess currently treats a received EOF as 
> if {{shutdown()}} has been called on the socket, and both ends have been 
> closed for both reading and writing (see 
> [here|https://github.com/apache/mesos/blob/1.0.0/3rdparty/libprocess/src/libevent_ssl_socket.cpp#L349-L360]
>  and 
> [here|https://github.com/apache/mesos/blob/1.0.0/3rdparty/libprocess/src/process.cpp#L692-L697]).
> We should consider changing the EOF semantics of the {{Socket}} object to 
> more closely match those of Unix sockets.
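The half-close behavior described above can be sketched with a Unix socket pair; `half_close_demo` is a hypothetical helper name for illustration, not libprocess code:

```python
import socket

def half_close_demo():
    """Return what each side sees after a one-sided shutdown()."""
    a, b = socket.socketpair()
    a.shutdown(socket.SHUT_WR)     # 'a' closes only its write end
    eof = b.recv(16)               # 'b' observes EOF: recv() returns b''
    b.sendall(b"still alive")      # ...yet 'b' can still write,
    reply = a.recv(16)             # and 'a' can still read the reply.
    a.close()
    b.close()
    return eof, reply

print(half_close_demo())  # → (b'', b'still alive')
```

This is the distinction the ticket draws: EOF only means the peer's write end is gone, whereas libprocess currently reacts as if both directions were shut down.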



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-5028) Copy provisioner cannot replace directory with symlink

2016-08-05 Thread Gilbert Song (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15410349#comment-15410349
 ] 

Gilbert Song commented on MESOS-5028:
-

[~zhitao], I have been trying manually for a while. It seems this issue may 
not be straightforward, since the extra layer contains two things:
1. the whiteout file.
2. the symlink file.

Currently we get an error when copying, because we cannot use a non-directory 
to overwrite a directory. However, even if we could eventually overwrite it, 
the symlink would then be deleted by the corresponding whiteout file.

I have one solution for this: introducing extra whiteout filtering logic into 
each layer in the copy backend. It seems we may have to do that.
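A minimal sketch of that whiteout filtering, assuming AUFS-style `.wh.` markers as used in Docker layers; `apply_layer` is a hypothetical name, not the actual copy backend code:

```python
import os
import shutil

WHITEOUT_PREFIX = ".wh."  # AUFS-style deletion marker used in Docker layers

def apply_layer(layer_dir, rootfs):
    """Copy one layer into rootfs, filtering whiteout markers.

    Whiteouts refer to entries from *lower* layers, so they are applied
    first -- otherwise a marker such as '.wh.apt' would delete a sibling
    'apt' symlink shipped in the same layer.
    """
    # Pass 1: apply whiteouts (delete the shadowed entries, copy nothing).
    for dirpath, _, filenames in os.walk(layer_dir):
        rel = os.path.relpath(dirpath, layer_dir)
        target_dir = rootfs if rel == "." else os.path.join(rootfs, rel)
        for name in filenames:
            if name.startswith(WHITEOUT_PREFIX):
                victim = os.path.join(target_dir, name[len(WHITEOUT_PREFIX):])
                if os.path.isdir(victim) and not os.path.islink(victim):
                    shutil.rmtree(victim)
                elif os.path.lexists(victim):
                    os.remove(victim)

    # Pass 2: copy the layer's real content.
    for dirpath, _, filenames in os.walk(layer_dir):
        rel = os.path.relpath(dirpath, layer_dir)
        target_dir = rootfs if rel == "." else os.path.join(rootfs, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            if name.startswith(WHITEOUT_PREFIX):
                continue  # markers themselves are never copied
            dst = os.path.join(target_dir, name)
            if os.path.isdir(dst) and not os.path.islink(dst):
                shutil.rmtree(dst)  # non-directory replaces a directory
            elif os.path.lexists(dst):
                os.remove(dst)
            shutil.copy2(os.path.join(dirpath, name), dst,
                         follow_symlinks=False)
```

Applying whiteouts before copying is what keeps the layer's own symlink from being deleted by its sibling marker.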

> Copy provisioner cannot replace directory with symlink
> --
>
> Key: MESOS-5028
> URL: https://issues.apache.org/jira/browse/MESOS-5028
> Project: Mesos
>  Issue Type: Bug
>  Components: containerization
>Reporter: Zhitao Li
>Assignee: Gilbert Song
>
> I'm trying to play with the new image provisioner on our custom docker 
> images, but one of the layers failed to get copied, possibly due to a 
> dangling symlink.
> Error log with Glog_v=1:
> {quote}
> I0324 05:42:48.926678 15067 copy.cpp:127] Copying layer path 
> '/tmp/mesos/store/docker/layers/5df0888641196b88dcc1b97d04c74839f02a73b8a194a79e134426d6a8fcb0f1/rootfs'
>  to rootfs 
> '/var/lib/mesos/provisioner/containers/5f05be6c-c970-4539-aa64-fd0eef2ec7ae/backends/copy/rootfses/507173f3-e316-48a3-a96e-5fdea9ffe9f6'
> E0324 05:42:49.028506 15062 slave.cpp:3773] Container 
> '5f05be6c-c970-4539-aa64-fd0eef2ec7ae' for executor 'test' of framework 
> 75932a89-1514-4011-bafe-beb6a208bb2d-0004 failed to start: Collect failed: 
> Collect failed: Failed to copy layer: cp: cannot overwrite directory 
> ‘/var/lib/mesos/provisioner/containers/5f05be6c-c970-4539-aa64-fd0eef2ec7ae/backends/copy/rootfses/507173f3-e316-48a3-a96e-5fdea9ffe9f6/etc/apt’
>  with non-directory
> {quote}
> The content of 
> _/tmp/mesos/store/docker/layers/5df0888641196b88dcc1b97d04c74839f02a73b8a194a79e134426d6a8fcb0f1/rootfs/etc/apt_
>  points to a non-existent absolute path (I cannot provide the exact path, 
> but it is a result of us trying to mount apt keys into the docker container 
> at build time).
> I believe what happened is that we executed a script at build time, which 
> contains the equivalent of:
> {quote}
> rm -rf /etc/apt/* && ln -sf /build-mount-point/ /etc/apt
> {quote}
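The cp failure above is a general filesystem constraint rather than a cp quirk; a small sketch (hypothetical names) shows the same refusal at the rename(2) level:

```python
import os
import tempfile

def try_replace_dir_with_symlink(base):
    """Show that a directory cannot be overwritten by a non-directory."""
    dst = os.path.join(base, "apt")
    os.mkdir(dst)  # the existing directory in the rootfs
    src = os.path.join(base, "apt.new")
    os.symlink("/build-mount-point/", src)  # the layer's dangling symlink
    try:
        os.rename(src, dst)  # rename(2) fails with EISDIR, just like cp
        return "replaced"
    except IsADirectoryError:
        return "cannot overwrite directory with non-directory"

print(try_replace_dir_with_symlink(tempfile.mkdtemp()))
```

So any fix has to remove (or whiteout-filter) the existing directory before the symlink can take its place.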





[jira] [Commented] (MESOS-4049) Allow user to control behavior of partitioned agents/tasks

2016-08-05 Thread Vinod Kone (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15410286#comment-15410286
 ] 

Vinod Kone commented on MESOS-4049:
---

commit 5220f77582a14d4cdd0a907ba8af6e9db87d8ab7
Author: Neil Conway 
Date:   Fri Aug 5 16:41:41 2016 -0700

Future-proofed some slave removal tests.

These tests relied on the implementation detail that when an agent is
removed from the list of registered agents, the master sends a
ShutdownSlaveMessage to the agent. That will change in the future
(MESOS-4049). To prepare for this future planned behavior, adjust these
tests to be more robust by instead checking for the invocation of the
`slaveLost` scheduler callback.

Review: https://reviews.apache.org/r/50422/

commit 8a0b17a11560f482628e890094e83400fa805a80
Author: Neil Conway 
Date:   Fri Aug 5 16:41:35 2016 -0700

Cleaned up comments in fault tolerance tests.

Review: https://reviews.apache.org/r/50418/

commit 5de96fa4b3e603553dbae3f06aff6621b268a7be
Author: Neil Conway 
Date:   Fri Aug 5 16:41:28 2016 -0700

Improved consistency of test code for partitioning an agent.

Removed unnecessary `Clock::settle` calls: `Clock::settle` should
typically only be used when a test case does not have an easy way to
wait for a _specific_ event to occur. In this case, `Clock::settle` was
unnecessary because the test code immediately proceeded to `AWAIT_READY`
for a more specific event.

Also fixed up some whitespace.

Review: https://reviews.apache.org/r/50417/

commit 60dbd347b409c788776760a8270965d943b6806e
Author: Neil Conway 
Date:   Fri Aug 5 16:41:18 2016 -0700

Added more assertions to master code.

Review: https://reviews.apache.org/r/50416/
commit 29925658291be60bda7af7f83225d743e8d24870
Author: Neil Conway 
Date:   Fri Aug 5 16:41:10 2016 -0700

Added more expectations to TASK_LOST test cases.

Check the reason and source of TASK_LOST status updates, replaced
ASSERT_ with EXPECT_ in various places where EXPECT_ is more
appropriate.

Review: https://reviews.apache.org/r/50235/


> Allow user to control behavior of partitioned agents/tasks
> --
>
> Key: MESOS-4049
> URL: https://issues.apache.org/jira/browse/MESOS-4049
> Project: Mesos
>  Issue Type: Improvement
>  Components: master, slave
>Reporter: Neil Conway
>Assignee: Neil Conway
>  Labels: mesosphere
>
> At present, if an agent is partitioned away from the master, the master waits 
> for a period of time (see MESOS-4048) before deciding that the agent is dead. 
> Then it marks the agent as lost, sends {{TASK_LOST}} messages for all the 
> tasks running on the agent, and instructs the agent to shutdown.
> Although this behavior is desirable for some/many users, it is not ideal for 
> everyone. For example:
> * Some users might want to aggressively start a new replacement task (e.g., 
> after one or two ping timeouts are missed); then when the old copy of the 
> task comes back, they might want to make an intelligent decision about how to 
> reconcile this situation (e.g., kill old, kill new, allow both to continue 
> running).
> * Some frameworks might want different behavior from other frameworks, or to 
> treat some tasks differently from other tasks. For example, if a task has a 
> huge amount of state that would need to be regenerated to spin up another 
> instance, the user might want to wait longer before starting a new task to 
> increase the chance that the old task will reappear.
> To do this, we'd need to change task state so that a task can go from 
> {{RUNNING}} to a new state (say {{UNKNOWN}} or {{WANDERING}}), and then from 
> that state back to {{RUNNING}} (or perhaps we could keep the current 
> "mark-lost-after-timeout" behavior as an option, in which case {{UNKNOWN}} 
> could also transition to {{LOST}}). The agent would also keep its old 
> {{slaveId}} when it reconnects.
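Sketched as a transition table; the `UNKNOWN`/`WANDERING` names come from this description and are not real Mesos task states:

```python
# Proposed transitions, per the description above. "WANDERING" was the
# alternative name floated for "UNKNOWN"; "LOST" stays reachable only if
# the current mark-lost-after-timeout behavior is kept as an option.
ALLOWED_TRANSITIONS = {
    "RUNNING": {"UNKNOWN"},          # agent partitioned away
    "UNKNOWN": {"RUNNING", "LOST"},  # agent reconnects, or timeout fires
    "LOST": set(),                   # terminal
}

def can_transition(current, proposed):
    """Check whether a task-state transition is allowed in this sketch."""
    return proposed in ALLOWED_TRANSITIONS.get(current, set())
```

The key change relative to today is the `UNKNOWN -> RUNNING` edge, which lets a partitioned task reappear instead of being shut down.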





[jira] [Updated] (MESOS-5931) Support auto backend in Unified Containerizer.

2016-08-05 Thread Gilbert Song (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gilbert Song updated MESOS-5931:

Story Points: 3  (was: 5)

> Support auto backend in Unified Containerizer.
> --
>
> Key: MESOS-5931
> URL: https://issues.apache.org/jira/browse/MESOS-5931
> Project: Mesos
>  Issue Type: Improvement
>  Components: containerization
>Reporter: Gilbert Song
>Assignee: Gilbert Song
>  Labels: backend, containerizer, mesosphere
>
> Currently in the Unified Containerizer, the copy backend is selected by 
> default. This is not ideal, especially for production environments: copying 
> a huge container image from the store to the provisioner can take a long 
> time.
> Ideally, we should support an `auto backend`, which would 
> automatically/intelligently select the best backend for the image 
> provisioner if the user does not specify one via the agent flag.
> We should settle on a logic design first in this ticket, to determine how 
> we want to choose the right backend (e.g., overlayfs or aufs should be 
> preferred if available from the kernel).
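One possible shape for that selection logic, as a hedged sketch; `select_backend` and its preference order are only the heuristic this ticket suggests (union filesystems over plain copying), and the supported-filesystem set would come from e.g. /proc/filesystems:

```python
def select_backend(supported_filesystems, agent_flag=None):
    """Pick a provisioner backend when the agent flag does not name one.

    `supported_filesystems` is the set of filesystem names the kernel
    supports; preference order here is a sketch, not settled design.
    """
    if agent_flag is not None:
        return agent_flag  # an explicit flag always wins
    for backend, fs in [("overlay", "overlay"), ("aufs", "aufs")]:
        if fs in supported_filesystems:
            return backend
    return "copy"  # fallback: always works, but slow for huge images
```

For example, `select_backend({"overlay", "ext4"})` would pick `overlay`, while a kernel with neither union filesystem falls back to `copy`.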





[jira] [Commented] (MESOS-6000) Overlayfs backend cannot support the image with numerous layers.

2016-08-05 Thread Gilbert Song (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-6000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15410265#comment-15410265
 ] 

Gilbert Song commented on MESOS-6000:
-

Yes we can, but it seems to me we should fix this ASAP, since most images in 
production contain tons of custom changes. Users would hit this sooner or 
later.

> Overlayfs backend cannot support the image with numerous layers.
> 
>
> Key: MESOS-6000
> URL: https://issues.apache.org/jira/browse/MESOS-6000
> Project: Mesos
>  Issue Type: Bug
>  Components: containerization
> Environment: Ubuntu 15
> Or any os with kernel 4.0+
>Reporter: Gilbert Song
>  Labels: backend, containerizer, overlayfs
>
> This issue is exposed when testing the unified containerizer with the 
> overlayfs backend using any image with numerous layers (e.g., 38 layers). 
> It can be reproduced using this image: `gilbertsong/cirros:34` (for anyone 
> who wants to test it out).
> Here is the partial log:
> {noformat}
> I0805 21:50:02.631873 11136 provisioner.cpp:315] Provisioning image rootfs 
> '/tmp/provisioner/containers/36c69ade-69db-4de3-9cd4-18b9b9c99e73/backends/overlay/rootfses/ba255b76-8326-4611-beb5-002f202b52e0'
>  for container 36c69ade-69db-4de3-9cd4-18b9b9c99e73 using overlay backend
> I0805 21:50:02.632990 11138 overlay.cpp:156] Provisioning image rootfs with 
> overlayfs: 
> 'lowerdir=/tmp/mesos/store/docker/layers/0b3552c520cda8ec7b81c0245f62e14dfb5214b7dce4da70d4124c19b64c70b9/rootfs:/tmp/mesos/store/docker/layers/dcdb76907cb758920f4eaabc338a9bf229be790a184bdd1e963480a03a7eacfa/rootfs:/tmp/mesos/store/docker/layers/c562a889ec2700b07f1bfb00c8de7f35568420b62d1e8160962628fcb9852f32/rootfs:/tmp/mesos/store/docker/layers/e27aafe45078f82cd69baa397b72ecfb4e8778040bfd8241aa0f4189612f294e/rootfs:/tmp/mesos/store/docker/layers/f40f6d4dc7496d9936ba9c2c1aa5a28a0b8b08f58eaeeec7f17330926f0acd8f/rootfs:/tmp/mesos/store/docker/layers/4e73c54df43c79d944a7b9d365f73464e547a857ad723aae285f9803c506a99f/rootfs:/tmp/mesos/store/docker/layers/0381bc1361243e9e0adf522135e31d85edeb837948985d4a6cf37ba6af21f2c7/rootfs:/tmp/mesos/store/docker/layers/8c4a4d5185324d29d1e4b36d8178842f4bcfcc7cc264666ab1b355668adfc97f/rootfs:/tmp/mesos/store/docker/layers/56157927e47e4774f858d3706262dc2e5921be0e7d0ceb741645513746fdedea/rootfs:/tmp/mesos/store/docker/layers/630c68a1627d8f6582569cc008f9a06b893fa7894dc290635dd454b00e894873/rootfs:/tmp/mesos/store/docker/layers/82273458148226630bbea90cf12b72cdc867faf152049361d1e97c8a426ae009/rootfs:/tmp/mesos/store/docker/layers/7fb31183c817b9bc0db5697d70753df4b1bf8e1012cd8c834931b595d846ab54/rootfs:/tmp/mesos/store/docker/layers/31c4f23aaccfd222b73622bfef533b52912f19e7569a568f7d58d40f645bcd86/rootfs:/tmp/mesos/store/docker/layers/16896c1cea9f9c911668eef2ad0af8aa2db689c27127169880e1df75d5a9151b/rootfs:/tmp/mesos/store/docker/layers/8a9f03cff6171de90b2fe6e00d00b17993f8811814be4e91b0da1ae55dfa616d/rootfs:/tmp/mesos/store/docker/layers/5fb7fd9fb5b0fdde1bd2f8b071b23f8ae8c0a685056a40fd22dbe88f37a4fde9/rootfs:/tmp/mesos/store/docker/layers/64988a98c6a682fef16bd69e3d48cc49024d1c0f6526c4b21169fa3f81dc7d60/rootfs:/tmp/mesos/store/docker/layers/253759d741f48d5741b14f3e4d19ea165f326b15ec404fcc0d4741c274d0af29/rootfs:/tmp/mesos/store/docker/layers/5f2b648ae86db5bfc8f2b01739fd561325d91a7f905f6599032b78065ba929fa/rootfs:/tmp/mesos/store/docker/layers/
700018f2c4c21668e0935aae9edc09f0f5df72ca2e58c0cdf5d61313018f3528/rootfs:/tmp/mesos/store/docker/layers/99016394fafebd1dad47724121998aecf0782da93eedc9bd9d6d2af478a798a4/rootfs:/tmp/mesos/store/docker/layers/9a711ed91d6a74f0c4d5e7ea1e44c9e3d0e90e3083e889625eb765acddfd4ea6/rootfs:/tmp/mesos/store/docker/layers/d9c00b1f35232ab21f2ac182194acd381ec096dc8c25c4d40b2e84695e2d6b91/rootfs:/tmp/mesos/store/docker/layers/10e9d3ad1d49d649a63536a227b8f93e8dd8f0bcde1ab127f0c62da26ea09469/rootfs:/tmp/mesos/store/docker/layers/819293665a9f634bf2e149b2441ee82ddc74d38e7a6d0c90491bffe5e6b5ae22/rootfs:/tmp/mesos/store/docker/layers/a0ed5b96a63de8623f77e7107b888f2945fcf069dd4440f3cafd13de408a8fb9/rootfs:/tmp/mesos/store/docker/layers/2756be24c0982a13a523a5ce04535578c27f00fc3a77321dfdb537ea5d323470/rootfs:/tmp/mesos/store/docker/layers/b820bc0393598343b8f05e6e61b899e00ee1e72cfce9b70dd04d004794ca02a6/rootfs:/tmp/mesos/store/docker/layers/8245da6b1667e1b5aac028f6729620459595e7148340d4db6a9f912cda7523a1/rootfs:/tmp/mesos/store/docker/layers/87886e37285d0182cfb4f83dec9239ce6cc094e699a6de3c4507789ec6a80870/rootfs:/tmp/mesos/store/docker/layers/8568fa3ad8b47e7565a9833b2950d023cf82558b40a0508ed155ebe71e8fa8b2/rootfs:/tmp/mesos/store/docker/layers/98986dcc611643e2291913352f0f2df37ac5b068072b7f1d01ed87532cba4f23/rootfs:/tmp/mesos/store/docker/layers/b96b0a4229bbb38fc20da48f539c8473fa255fd42282d97ac4de071342c57c58/r

[jira] [Updated] (MESOS-6000) Overlayfs backend cannot support the image with numerous layers.

2016-08-05 Thread Gilbert Song (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-6000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gilbert Song updated MESOS-6000:

Description: 
This issue is exposed when testing the unified containerizer with the overlayfs 
backend using any image with numerous layers (e.g., 38 layers). It can be 
reproduced using this image: `gilbertsong/cirros:34` (for anyone who wants to 
test it out).

Here is the partial log:
{noformat}
I0805 21:50:02.631873 11136 provisioner.cpp:315] Provisioning image rootfs 
'/tmp/provisioner/containers/36c69ade-69db-4de3-9cd4-18b9b9c99e73/backends/overlay/rootfses/ba255b76-8326-4611-beb5-002f202b52e0'
 for container 36c69ade-69db-4de3-9cd4-18b9b9c99e73 using overlay backend
I0805 21:50:02.632990 11138 overlay.cpp:156] Provisioning image rootfs with 
overlayfs: 
'lowerdir=/tmp/mesos/store/docker/layers/0b3552c520cda8ec7b81c0245f62e14dfb5214b7dce4da70d4124c19b64c70b9/rootfs:/tmp/mesos/store/docker/layers/dcdb76907cb758920f4eaabc338a9bf229be790a184bdd1e963480a03a7eacfa/rootfs:/tmp/mesos/store/docker/layers/c562a889ec2700b07f1bfb00c8de7f35568420b62d1e8160962628fcb9852f32/rootfs:/tmp/mesos/store/docker/layers/e27aafe45078f82cd69baa397b72ecfb4e8778040bfd8241aa0f4189612f294e/rootfs:/tmp/mesos/store/docker/layers/f40f6d4dc7496d9936ba9c2c1aa5a28a0b8b08f58eaeeec7f17330926f0acd8f/rootfs:/tmp/mesos/store/docker/layers/4e73c54df43c79d944a7b9d365f73464e547a857ad723aae285f9803c506a99f/rootfs:/tmp/mesos/store/docker/layers/0381bc1361243e9e0adf522135e31d85edeb837948985d4a6cf37ba6af21f2c7/rootfs:/tmp/mesos/store/docker/layers/8c4a4d5185324d29d1e4b36d8178842f4bcfcc7cc264666ab1b355668adfc97f/rootfs:/tmp/mesos/store/docker/layers/56157927e47e4774f858d3706262dc2e5921be0e7d0ceb741645513746fdedea/rootfs:/tmp/mesos/store/docker/layers/630c68a1627d8f6582569cc008f9a06b893fa7894dc290635dd454b00e894873/rootfs:/tmp/mesos/store/docker/layers/82273458148226630bbea90cf12b72cdc867faf152049361d1e97c8a426ae009/rootfs:/tmp/mesos/store/docker/layers/7fb31183c817b9bc0db5697d70753df4b1bf8e1012cd8c834931b595d846ab54/rootfs:/tmp/mesos/store/docker/layers/31c4f23aaccfd222b73622bfef533b52912f19e7569a568f7d58d40f645bcd86/rootfs:/tmp/mesos/store/docker/layers/16896c1cea9f9c911668eef2ad0af8aa2db689c27127169880e1df75d5a9151b/rootfs:/tmp/mesos/store/docker/layers/8a9f03cff6171de90b2fe6e00d00b17993f8811814be4e91b0da1ae55dfa616d/rootfs:/tmp/mesos/store/docker/layers/5fb7fd9fb5b0fdde1bd2f8b071b23f8ae8c0a685056a40fd22dbe88f37a4fde9/rootfs:/tmp/mesos/store/docker/layers/64988a98c6a682fef16bd69e3d48cc49024d1c0f6526c4b21169fa3f81dc7d60/rootfs:/tmp/mesos/store/docker/layers/253759d741f48d5741b14f3e4d19ea165f326b15ec404fcc0d4741c274d0af29/rootfs:/tmp/mesos/store/docker/layers/5f2b648ae86db5bfc8f2b01739fd561325d91a7f905f6599032b78065ba929fa/rootfs:/tmp/mesos/store/docker/layers/70
0018f2c4c21668e0935aae9edc09f0f5df72ca2e58c0cdf5d61313018f3528/rootfs:/tmp/mesos/store/docker/layers/99016394fafebd1dad47724121998aecf0782da93eedc9bd9d6d2af478a798a4/rootfs:/tmp/mesos/store/docker/layers/9a711ed91d6a74f0c4d5e7ea1e44c9e3d0e90e3083e889625eb765acddfd4ea6/rootfs:/tmp/mesos/store/docker/layers/d9c00b1f35232ab21f2ac182194acd381ec096dc8c25c4d40b2e84695e2d6b91/rootfs:/tmp/mesos/store/docker/layers/10e9d3ad1d49d649a63536a227b8f93e8dd8f0bcde1ab127f0c62da26ea09469/rootfs:/tmp/mesos/store/docker/layers/819293665a9f634bf2e149b2441ee82ddc74d38e7a6d0c90491bffe5e6b5ae22/rootfs:/tmp/mesos/store/docker/layers/a0ed5b96a63de8623f77e7107b888f2945fcf069dd4440f3cafd13de408a8fb9/rootfs:/tmp/mesos/store/docker/layers/2756be24c0982a13a523a5ce04535578c27f00fc3a77321dfdb537ea5d323470/rootfs:/tmp/mesos/store/docker/layers/b820bc0393598343b8f05e6e61b899e00ee1e72cfce9b70dd04d004794ca02a6/rootfs:/tmp/mesos/store/docker/layers/8245da6b1667e1b5aac028f6729620459595e7148340d4db6a9f912cda7523a1/rootfs:/tmp/mesos/store/docker/layers/87886e37285d0182cfb4f83dec9239ce6cc094e699a6de3c4507789ec6a80870/rootfs:/tmp/mesos/store/docker/layers/8568fa3ad8b47e7565a9833b2950d023cf82558b40a0508ed155ebe71e8fa8b2/rootfs:/tmp/mesos/store/docker/layers/98986dcc611643e2291913352f0f2df37ac5b068072b7f1d01ed87532cba4f23/rootfs:/tmp/mesos/store/docker/layers/b96b0a4229bbb38fc20da48f539c8473fa255fd42282d97ac4de071342c57c58/rootfs:/tmp/mesos/store/docker/layers/2b9fd04b9d5a26be9cc150f408657c553ed9479a43ff60c0bbf8f586c3dfd1e9/rootfs:/tmp/mesos/store/docker/layers/0d27f8e693fb23b476ae409bd008492a92b355aa3ac10cf536dabd458758af55/rootfs:/tmp/mesos/store/docker/layers/500e7eced838c4822a111abdb64fce8e7f3c0ecaf3d47157331b0cd30ebac4dc/rootfs:/tmp/mesos/store/docker/layers/c42d375c72b4e709bc0eeda368591277fa73836dfd5597fe98e2524c8587536e/rootfs:/tmp/mesos/store/docker/layers/34fa5867b8b0888ea3b718df9ad2925b8f7f50b6583b7cbdfabd826bfe5c6de8/rootfs:/tmp/mesos/store/docker/layers/3690474eb5b4b26fdfbd89c6e159e8cc376ca76ef4803
2a30fa6aafd56337880/rootfs,upperdir=/tmp/provisioner/containers/36c69ade-69db-4de3-9cd4-18b9b9c99e73/backends/overlay/scratch/ba255b76-832

[jira] [Commented] (MESOS-6000) Overlayfs backend cannot support the image with numerous layers.

2016-08-05 Thread Zhitao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-6000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15410258#comment-15410258
 ] 

Zhitao Li commented on MESOS-6000:
--

Another suggestion: until this gets fixed, maybe we can detect these very long 
options and refuse up front to provision the image, with a clear message?
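One possible shape for such an upfront check; `check_overlay_options` is a hypothetical helper, and the one-page limit reflects the kernel's usual handling of the mount(2) options string:

```python
import os

def check_overlay_options(layer_rootfs_paths, upperdir, workdir):
    """Refuse up front if the overlayfs mount options cannot fit.

    The kernel copies the mount(2) data argument into a single page, so
    an options string near the page size (typically 4096 bytes) fails
    with an obscure error; failing early with a clear message is
    friendlier.
    """
    options = "lowerdir={},upperdir={},workdir={}".format(
        ":".join(layer_rootfs_paths), upperdir, workdir)
    page_size = os.sysconf("SC_PAGE_SIZE")
    if len(options) >= page_size:
        raise ValueError(
            "overlayfs mount options are {} bytes, exceeding the page "
            "size ({}); too many layers for the overlay backend".format(
                len(options), page_size))
    return options
```

With 38 layer paths of the length shown in the log above, the `lowerdir=` string alone comfortably exceeds a 4096-byte page, which is why the mount fails.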

> Overlayfs backend cannot support the image with numerous layers.
> 
>
> Key: MESOS-6000
> URL: https://issues.apache.org/jira/browse/MESOS-6000
> Project: Mesos
>  Issue Type: Bug
>  Components: containerization
> Environment: Ubuntu 15
> Or any os with kernel 4.0+
>Reporter: Gilbert Song
>  Labels: backend, containerizer, overlayfs
>
> This issue is exposed when testing the unified containerizer with the 
> overlayfs backend using any image with numerous layers (e.g., 38 layers). 
> It can be reproduced using this image: `gilbertsong/cirros:34` (for anyone 
> who wants to test it out).
> Here is the partial log:
> {noformat}
> I0805 21:50:02.631873 11136 provisioner.cpp:315] Provisioning image rootfs 
> '/tmp/provisioner/containers/36c69ade-69db-4de3-9cd4-18b9b9c99e73/backends/overlay/rootfses/ba255b76-8326-4611-beb5-002f202b52e0'
>  for container 36c69ade-69db-4de3-9cd4-18b9b9c99e73 using overlay backend
> I0805 21:50:02.632990 11138 overlay.cpp:156] Provisioning image rootfs with 
> overlayfs: 
> 'lowerdir=/tmp/mesos/store/docker/layers/0b3552c520cda8ec7b81c0245f62e14dfb5214b7dce4da70d4124c19b64c70b9/rootfs:/tmp/mesos/store/docker/layers/dcdb76907cb758920f4eaabc338a9bf229be790a184bdd1e963480a03a7eacfa/rootfs:/tmp/mesos/store/docker/layers/c562a889ec2700b07f1bfb00c8de7f35568420b62d1e8160962628fcb9852f32/rootfs:/tmp/mesos/store/docker/layers/e27aafe45078f82cd69baa397b72ecfb4e8778040bfd8241aa0f4189612f294e/rootfs:/tmp/mesos/store/docker/layers/f40f6d4dc7496d9936ba9c2c1aa5a28a0b8b08f58eaeeec7f17330926f0acd8f/rootfs:/tmp/mesos/store/docker/layers/4e73c54df43c79d944a7b9d365f73464e547a857ad723aae285f9803c506a99f/rootfs:/tmp/mesos/store/docker/layers/0381bc1361243e9e0adf522135e31d85edeb837948985d4a6cf37ba6af21f2c7/rootfs:/tmp/mesos/store/docker/layers/8c4a4d5185324d29d1e4b36d8178842f4bcfcc7cc264666ab1b355668adfc97f/rootfs:/tmp/mesos/store/docker/layers/56157927e47e4774f858d3706262dc2e5921be0e7d0ceb741645513746fdedea/rootfs:/tmp/mesos/store/docker/layers/630c68a1627d8f6582569cc008f9a06b893fa7894dc290635dd454b00e894873/rootfs:/tmp/mesos/store/docker/layers/82273458148226630bbea90cf12b72cdc867faf152049361d1e97c8a426ae009/rootfs:/tmp/mesos/store/docker/layers/7fb31183c817b9bc0db5697d70753df4b1bf8e1012cd8c834931b595d846ab54/rootfs:/tmp/mesos/store/docker/layers/31c4f23aaccfd222b73622bfef533b52912f19e7569a568f7d58d40f645bcd86/rootfs:/tmp/mesos/store/docker/layers/16896c1cea9f9c911668eef2ad0af8aa2db689c27127169880e1df75d5a9151b/rootfs:/tmp/mesos/store/docker/layers/8a9f03cff6171de90b2fe6e00d00b17993f8811814be4e91b0da1ae55dfa616d/rootfs:/tmp/mesos/store/docker/layers/5fb7fd9fb5b0fdde1bd2f8b071b23f8ae8c0a685056a40fd22dbe88f37a4fde9/rootfs:/tmp/mesos/store/docker/layers/64988a98c6a682fef16bd69e3d48cc49024d1c0f6526c4b21169fa3f81dc7d60/rootfs:/tmp/mesos/store/docker/layers/253759d741f48d5741b14f3e4d19ea165f326b15ec404fcc0d4741c274d0af29/rootfs:/tmp/mesos/store/docker/layers/5f2b648ae86db5bfc8f2b01739fd561325d91a7f905f6599032b78065ba929fa/rootfs:/tmp/mesos/store/docker/layers/
700018f2c4c21668e0935aae9edc09f0f5df72ca2e58c0cdf5d61313018f3528/rootfs:/tmp/mesos/store/docker/layers/99016394fafebd1dad47724121998aecf0782da93eedc9bd9d6d2af478a798a4/rootfs:/tmp/mesos/store/docker/layers/9a711ed91d6a74f0c4d5e7ea1e44c9e3d0e90e3083e889625eb765acddfd4ea6/rootfs:/tmp/mesos/store/docker/layers/d9c00b1f35232ab21f2ac182194acd381ec096dc8c25c4d40b2e84695e2d6b91/rootfs:/tmp/mesos/store/docker/layers/10e9d3ad1d49d649a63536a227b8f93e8dd8f0bcde1ab127f0c62da26ea09469/rootfs:/tmp/mesos/store/docker/layers/819293665a9f634bf2e149b2441ee82ddc74d38e7a6d0c90491bffe5e6b5ae22/rootfs:/tmp/mesos/store/docker/layers/a0ed5b96a63de8623f77e7107b888f2945fcf069dd4440f3cafd13de408a8fb9/rootfs:/tmp/mesos/store/docker/layers/2756be24c0982a13a523a5ce04535578c27f00fc3a77321dfdb537ea5d323470/rootfs:/tmp/mesos/store/docker/layers/b820bc0393598343b8f05e6e61b899e00ee1e72cfce9b70dd04d004794ca02a6/rootfs:/tmp/mesos/store/docker/layers/8245da6b1667e1b5aac028f6729620459595e7148340d4db6a9f912cda7523a1/rootfs:/tmp/mesos/store/docker/layers/87886e37285d0182cfb4f83dec9239ce6cc094e699a6de3c4507789ec6a80870/rootfs:/tmp/mesos/store/docker/layers/8568fa3ad8b47e7565a9833b2950d023cf82558b40a0508ed155ebe71e8fa8b2/rootfs:/tmp/mesos/store/docker/layers/98986dcc611643e2291913352f0f2df37ac5b068072b7f1d01ed87532cba4f23/rootfs:/tmp/mesos/store/docker/layers/b96b0a4229bbb38fc20da48f539c8473fa255fd42282d97ac4de071342c57c58/rootfs:/tmp/mesos/stor

[jira] [Created] (MESOS-6002) The whiteout file cannot be removed correctly using aufs backend.

2016-08-05 Thread Gilbert Song (JIRA)
Gilbert Song created MESOS-6002:
---

 Summary: The whiteout file cannot be removed correctly using aufs 
backend.
 Key: MESOS-6002
 URL: https://issues.apache.org/jira/browse/MESOS-6002
 Project: Mesos
  Issue Type: Bug
  Components: containerization
 Environment: Ubuntu 14, Ubuntu 12
Or any os with aufs module
Reporter: Gilbert Song


The whiteout file is not removed correctly when using the aufs backend in the 
unified containerizer. It can be verified by running this unit test with the 
aufs backend manually specified.

{noformat}
[20:11:24] : [Step 10/10] [ RUN  ] 
ProvisionerDockerPullerTest.ROOT_INTERNET_CURL_Whiteout
[20:11:24]W: [Step 10/10] I0805 20:11:24.986734 24295 cluster.cpp:155] 
Creating default 'local' authorizer
[20:11:25]W: [Step 10/10] I0805 20:11:25.001153 24295 leveldb.cpp:174] 
Opened db in 14.308627ms
[20:11:25]W: [Step 10/10] I0805 20:11:25.003731 24295 leveldb.cpp:181] 
Compacted db in 2.558329ms
[20:11:25]W: [Step 10/10] I0805 20:11:25.003749 24295 leveldb.cpp:196] 
Created db iterator in 3086ns
[20:11:25]W: [Step 10/10] I0805 20:11:25.003754 24295 leveldb.cpp:202] 
Seeked to beginning of db in 595ns
[20:11:25]W: [Step 10/10] I0805 20:11:25.003758 24295 leveldb.cpp:271] 
Iterated through 0 keys in the db in 314ns
[20:11:25]W: [Step 10/10] I0805 20:11:25.003769 24295 replica.cpp:776] 
Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
[20:11:25]W: [Step 10/10] I0805 20:11:25.004086 24315 recover.cpp:451] 
Starting replica recovery
[20:11:25]W: [Step 10/10] I0805 20:11:25.004251 24312 recover.cpp:477] 
Replica is in EMPTY status
[20:11:25]W: [Step 10/10] I0805 20:11:25.004546 24314 replica.cpp:673] 
Replica in EMPTY status received a broadcasted recover request from 
__req_res__(5640)@172.30.2.105:36006
[20:11:25]W: [Step 10/10] I0805 20:11:25.004607 24312 recover.cpp:197] 
Received a recover response from a replica in EMPTY status
[20:11:25]W: [Step 10/10] I0805 20:11:25.004762 24313 recover.cpp:568] 
Updating replica status to STARTING
[20:11:25]W: [Step 10/10] I0805 20:11:25.004776 24314 master.cpp:375] 
Master 21665992-d47e-402f-a00c-6f8fab613019 (ip-172-30-2-105.mesosphere.io) 
started on 172.30.2.105:36006
[20:11:25]W: [Step 10/10] I0805 20:11:25.004787 24314 master.cpp:377] Flags 
at startup: --acls="" --agent_ping_timeout="15secs" 
--agent_reregister_timeout="10mins" --allocation_interval="1secs" 
--allocator="HierarchicalDRF" --authenticate_agents="true" 
--authenticate_frameworks="true" --authenticate_http_frameworks="true" 
--authenticate_http_readonly="true" --authenticate_http_readwrite="true" 
--authenticators="crammd5" --authorizers="local" 
--credentials="/tmp/0z753P/credentials" --framework_sorter="drf" --help="false" 
--hostname_lookup="true" --http_authenticators="basic" 
--http_framework_authenticators="basic" --initialize_driver_logging="true" 
--log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" 
--max_agent_ping_timeouts="5" --max_completed_frameworks="50" 
--max_completed_tasks_per_framework="1000" --quiet="false" 
--recovery_agent_removal_limit="100%" --registry="replicated_log" 
--registry_fetch_timeout="1mins" --registry_store_timeout="100secs" 
--registry_strict="true" --root_submissions="true" --user_sorter="drf" 
--version="false" --webui_dir="/usr/local/share/mesos/webui" 
--work_dir="/tmp/0z753P/master" --zk_session_timeout="10secs"
[20:11:25]W: [Step 10/10] I0805 20:11:25.004920 24314 master.cpp:427] 
Master only allowing authenticated frameworks to register
[20:11:25]W: [Step 10/10] I0805 20:11:25.004930 24314 master.cpp:441] 
Master only allowing authenticated agents to register
[20:11:25]W: [Step 10/10] I0805 20:11:25.004935 24314 master.cpp:454] 
Master only allowing authenticated HTTP frameworks to register
[20:11:25]W: [Step 10/10] I0805 20:11:25.004942 24314 credentials.hpp:37] 
Loading credentials for authentication from '/tmp/0z753P/credentials'
[20:11:25]W: [Step 10/10] I0805 20:11:25.005018 24314 master.cpp:499] Using 
default 'crammd5' authenticator
[20:11:25]W: [Step 10/10] I0805 20:11:25.005101 24314 http.cpp:883] Using 
default 'basic' HTTP authenticator for realm 'mesos-master-readonly'
[20:11:25]W: [Step 10/10] I0805 20:11:25.005152 24314 http.cpp:883] Using 
default 'basic' HTTP authenticator for realm 'mesos-master-readwrite'
[20:11:25]W: [Step 10/10] I0805 20:11:25.005192 24314 http.cpp:883] Using 
default 'basic' HTTP authenticator for realm 'mesos-master-scheduler'
[20:11:25]W: [Step 10/10] I0805 20:11:25.005230 24314 master.cpp:579] 
Authorization enabled
[20:11:25]W: [Step 10/10] I0805 20:11:25.005297 24315 hierarchical.cpp:151] 
Initialized hierarchical allocator process
[20:11:25]W: [Step 10/10] I0805 20:11:25.005312 24312 
whitelist_watcher.cpp:77] No whitelist given
[20:11:25

[jira] [Created] (MESOS-6001) Aufs backend cannot support the image with numerous layers.

2016-08-05 Thread Gilbert Song (JIRA)
Gilbert Song created MESOS-6001:
---

 Summary: Aufs backend cannot support the image with numerous 
layers.
 Key: MESOS-6001
 URL: https://issues.apache.org/jira/browse/MESOS-6001
 Project: Mesos
  Issue Type: Bug
  Components: containerization
 Environment: Ubuntu 14, Ubuntu 12
Or any other os with aufs module
Reporter: Gilbert Song


This issue was exposed in the unit test 
`ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller` by manually specifying 
the `aufs` backend. Most likely, mounting aufs with these specific options is 
limited by the mount option string length.

{noformat}
[20:13:07] : [Step 10/10] [ RUN  ] 
DockerRuntimeIsolatorTest.ROOT_CURL_INTERNET_DockerDefaultEntryptRegistryPuller
[20:13:07]W: [Step 10/10] I0805 20:13:07.615844 23416 cluster.cpp:155] 
Creating default 'local' authorizer
[20:13:07]W: [Step 10/10] I0805 20:13:07.624106 23416 leveldb.cpp:174] 
Opened db in 8.148813ms
[20:13:07]W: [Step 10/10] I0805 20:13:07.627252 23416 leveldb.cpp:181] 
Compacted db in 3.126629ms
[20:13:07]W: [Step 10/10] I0805 20:13:07.627275 23416 leveldb.cpp:196] 
Created db iterator in 4410ns
[20:13:07]W: [Step 10/10] I0805 20:13:07.627282 23416 leveldb.cpp:202] 
Seeked to beginning of db in 763ns
[20:13:07]W: [Step 10/10] I0805 20:13:07.627287 23416 leveldb.cpp:271] 
Iterated through 0 keys in the db in 491ns
[20:13:07]W: [Step 10/10] I0805 20:13:07.627301 23416 replica.cpp:776] 
Replica recovered with log positions 0 -> 0 with 1 holes and 0 unlearned
[20:13:07]W: [Step 10/10] I0805 20:13:07.627563 23434 recover.cpp:451] 
Starting replica recovery
[20:13:07]W: [Step 10/10] I0805 20:13:07.627800 23437 recover.cpp:477] 
Replica is in EMPTY status
[20:13:07]W: [Step 10/10] I0805 20:13:07.628113 23431 replica.cpp:673] 
Replica in EMPTY status received a broadcasted recover request from 
__req_res__(5852)@172.30.2.138:44256
[20:13:07]W: [Step 10/10] I0805 20:13:07.628243 23430 recover.cpp:197] 
Received a recover response from a replica in EMPTY status
[20:13:07]W: [Step 10/10] I0805 20:13:07.628365 23437 recover.cpp:568] 
Updating replica status to STARTING
[20:13:07]W: [Step 10/10] I0805 20:13:07.628744 23432 master.cpp:375] 
Master dd755a55-0dd1-4d2d-9a49-812a666015cb (ip-172-30-2-138.mesosphere.io) 
started on 172.30.2.138:44256
[20:13:07]W: [Step 10/10] I0805 20:13:07.628758 23432 master.cpp:377] Flags 
at startup: --acls="" --agent_ping_timeout="15secs" 
--agent_reregister_timeout="10mins" --allocation_interval="1secs" 
--allocator="HierarchicalDRF" --authenticate_agents="true" 
--authenticate_frameworks="true" --authenticate_http_frameworks="true" 
--authenticate_http_readonly="true" --authenticate_http_readwrite="true" 
--authenticators="crammd5" --authorizers="local" 
--credentials="/tmp/OZHDIQ/credentials" --framework_sorter="drf" --help="false" 
--hostname_lookup="true" --http_authenticators="basic" 
--http_framework_authenticators="basic" --initialize_driver_logging="true" 
--log_auto_initialize="true" --logbufsecs="0" --logging_level="INFO" 
--max_agent_ping_timeouts="5" --max_completed_frameworks="50" 
--max_completed_tasks_per_framework="1000" --quiet="false" 
--recovery_agent_removal_limit="100%" --registry="replicated_log" 
--registry_fetch_timeout="1mins" --registry_store_timeout="100secs" 
--registry_strict="true" --root_submissions="true" --user_sorter="drf" 
--version="false" --webui_dir="/usr/local/share/mesos/webui" 
--work_dir="/tmp/OZHDIQ/master" --zk_session_timeout="10secs"
[20:13:07]W: [Step 10/10] I0805 20:13:07.628893 23432 master.cpp:427] 
Master only allowing authenticated frameworks to register
[20:13:07]W: [Step 10/10] I0805 20:13:07.628900 23432 master.cpp:441] 
Master only allowing authenticated agents to register
[20:13:07]W: [Step 10/10] I0805 20:13:07.628902 23432 master.cpp:454] 
Master only allowing authenticated HTTP frameworks to register
[20:13:07]W: [Step 10/10] I0805 20:13:07.628906 23432 credentials.hpp:37] 
Loading credentials for authentication from '/tmp/OZHDIQ/credentials'
[20:13:07]W: [Step 10/10] I0805 20:13:07.628999 23432 master.cpp:499] Using 
default 'crammd5' authenticator
[20:13:07]W: [Step 10/10] I0805 20:13:07.629041 23432 http.cpp:883] Using 
default 'basic' HTTP authenticator for realm 'mesos-master-readonly'
[20:13:07]W: [Step 10/10] I0805 20:13:07.629114 23432 http.cpp:883] Using 
default 'basic' HTTP authenticator for realm 'mesos-master-readwrite'
[20:13:07]W: [Step 10/10] I0805 20:13:07.629166 23432 http.cpp:883] Using 
default 'basic' HTTP authenticator for realm 'mesos-master-scheduler'
[20:13:07]W: [Step 10/10] I0805 20:13:07.629231 23432 master.cpp:579] 
Authorization enabled
[20:13:07]W: [Step 10/10] I0805 20:13:07.629290 23434 
whitelist_watcher.cpp:77] No whitelist given
[20:13:07]W: [Step 10/10] I0805 20:13:07.629302 2343

[jira] [Created] (MESOS-6000) Overlayfs backend cannot support the image with numerous layers.

2016-08-05 Thread Gilbert Song (JIRA)
Gilbert Song created MESOS-6000:
---

 Summary: Overlayfs backend cannot support the image with numerous 
layers.
 Key: MESOS-6000
 URL: https://issues.apache.org/jira/browse/MESOS-6000
 Project: Mesos
  Issue Type: Bug
  Components: containerization
 Environment: Ubuntu 15
Or any os with kernel 4.0+
Reporter: Gilbert Song


This issue is exposed when testing the unified containerizer with the overlayfs 
backend using any image with numerous layers (e.g., 38 layers). It can be 
reproduced using the image `gilbertsong/cirros:34` (for anyone who wants to test it out).
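
The mount very likely fails because mount(2) copies its options string into a 
single page, so a `lowerdir` listing dozens of layer paths overflows it. A quick 
back-of-the-envelope check (the page size and the path shape below are 
assumptions based on the log, not measured limits):

```python
# Sketch: mount(2) options are limited to roughly one page, so an
# overlayfs 'lowerdir' listing dozens of layer paths can exceed it.
PAGE_SIZE = 4096  # assumed option-string limit

def lowerdir_option(layer_paths):
    """Build the overlayfs lowerdir mount option from layer rootfs paths."""
    return "lowerdir=" + ":".join(layer_paths)

# Store paths look like /tmp/mesos/store/docker/layers/<64-hex>/rootfs,
# i.e. ~102 bytes each; a few dozen of them exceed the page.
paths = ["/tmp/mesos/store/docker/layers/%064x/rootfs" % i for i in range(45)]
assert len(lowerdir_option(paths[:10])) < PAGE_SIZE  # a handful of layers fits
assert len(lowerdir_option(paths)) > PAGE_SIZE       # dozens do not
```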

Here is the partial log:
{noformat}
I0805 21:50:02.631873 11136 provisioner.cpp:315] Provisioning image rootfs 
'/tmp/provisioner/containers/36c69ade-69db-4de3-9cd4-18b9b9c99e73/backends/overlay/rootfses/ba255b76-8326-4611-beb5-002f202b52e0'
 for container 36c69ade-69db-4de3-9cd4-18b9b9c99e73 using overlay backend
I0805 21:50:02.632990 11138 overlay.cpp:156] Provisioning image rootfs with 
overlayfs: 
'lowerdir=/tmp/mesos/store/docker/layers/0b3552c520cda8ec7b81c0245f62e14dfb5214b7dce4da70d4124c19b64c70b9/rootfs:/tmp/mesos/store/docker/layers/dcdb76907cb758920f4eaabc338a9bf229be790a184bdd1e963480a03a7eacfa/rootfs:/tmp/mesos/store/docker/layers/c562a889ec2700b07f1bfb00c8de7f35568420b62d1e8160962628fcb9852f32/rootfs:/tmp/mesos/store/docker/layers/e27aafe45078f82cd69baa397b72ecfb4e8778040bfd8241aa0f4189612f294e/rootfs:/tmp/mesos/store/docker/layers/f40f6d4dc7496d9936ba9c2c1aa5a28a0b8b08f58eaeeec7f17330926f0acd8f/rootfs:/tmp/mesos/store/docker/layers/4e73c54df43c79d944a7b9d365f73464e547a857ad723aae285f9803c506a99f/rootfs:/tmp/mesos/store/docker/layers/0381bc1361243e9e0adf522135e31d85edeb837948985d4a6cf37ba6af21f2c7/rootfs:/tmp/mesos/store/docker/layers/8c4a4d5185324d29d1e4b36d8178842f4bcfcc7cc264666ab1b355668adfc97f/rootfs:/tmp/mesos/store/docker/layers/56157927e47e4774f858d3706262dc2e5921be0e7d0ceb741645513746fdedea/rootfs:/tmp/mesos/store/docker/layers/630c68a1627d8f6582569cc008f9a06b893fa7894dc290635dd454b00e894873/rootfs:/tmp/mesos/store/docker/layers/82273458148226630bbea90cf12b72cdc867faf152049361d1e97c8a426ae009/rootfs:/tmp/mesos/store/docker/layers/7fb31183c817b9bc0db5697d70753df4b1bf8e1012cd8c834931b595d846ab54/rootfs:/tmp/mesos/store/docker/layers/31c4f23aaccfd222b73622bfef533b52912f19e7569a568f7d58d40f645bcd86/rootfs:/tmp/mesos/store/docker/layers/16896c1cea9f9c911668eef2ad0af8aa2db689c27127169880e1df75d5a9151b/rootfs:/tmp/mesos/store/docker/layers/8a9f03cff6171de90b2fe6e00d00b17993f8811814be4e91b0da1ae55dfa616d/rootfs:/tmp/mesos/store/docker/layers/5fb7fd9fb5b0fdde1bd2f8b071b23f8ae8c0a685056a40fd22dbe88f37a4fde9/rootfs:/tmp/mesos/store/docker/layers/64988a98c6a682fef16bd69e3d48cc49024d1c0f6526c4b21169fa3f81dc7d60/rootfs:/tmp/mesos/store/docker/layers/253759d741f48d5741b14f3e4d19ea165f326b15ec404fcc0d4741c274d0af29/rootfs:/tmp/mesos/store/docker/layers/5f2b648ae86db5bfc8f2b01739fd561325d91a7f905f6599032b78065ba929fa/rootfs:/tmp/mesos/store/docker/layers/70
0018f2c4c21668e0935aae9edc09f0f5df72ca2e58c0cdf5d61313018f3528/rootfs:/tmp/mesos/store/docker/layers/99016394fafebd1dad47724121998aecf0782da93eedc9bd9d6d2af478a798a4/rootfs:/tmp/mesos/store/docker/layers/9a711ed91d6a74f0c4d5e7ea1e44c9e3d0e90e3083e889625eb765acddfd4ea6/rootfs:/tmp/mesos/store/docker/layers/d9c00b1f35232ab21f2ac182194acd381ec096dc8c25c4d40b2e84695e2d6b91/rootfs:/tmp/mesos/store/docker/layers/10e9d3ad1d49d649a63536a227b8f93e8dd8f0bcde1ab127f0c62da26ea09469/rootfs:/tmp/mesos/store/docker/layers/819293665a9f634bf2e149b2441ee82ddc74d38e7a6d0c90491bffe5e6b5ae22/rootfs:/tmp/mesos/store/docker/layers/a0ed5b96a63de8623f77e7107b888f2945fcf069dd4440f3cafd13de408a8fb9/rootfs:/tmp/mesos/store/docker/layers/2756be24c0982a13a523a5ce04535578c27f00fc3a77321dfdb537ea5d323470/rootfs:/tmp/mesos/store/docker/layers/b820bc0393598343b8f05e6e61b899e00ee1e72cfce9b70dd04d004794ca02a6/rootfs:/tmp/mesos/store/docker/layers/8245da6b1667e1b5aac028f6729620459595e7148340d4db6a9f912cda7523a1/rootfs:/tmp/mesos/store/docker/layers/87886e37285d0182cfb4f83dec9239ce6cc094e699a6de3c4507789ec6a80870/rootfs:/tmp/mesos/store/docker/layers/8568fa3ad8b47e7565a9833b2950d023cf82558b40a0508ed155ebe71e8fa8b2/rootfs:/tmp/mesos/store/docker/layers/98986dcc611643e2291913352f0f2df37ac5b068072b7f1d01ed87532cba4f23/rootfs:/tmp/mesos/store/docker/layers/b96b0a4229bbb38fc20da48f539c8473fa255fd42282d97ac4de071342c57c58/rootfs:/tmp/mesos/store/docker/layers/2b9fd04b9d5a26be9cc150f408657c553ed9479a43ff60c0bbf8f586c3dfd1e9/rootfs:/tmp/mesos/store/docker/layers/0d27f8e693fb23b476ae409bd008492a92b355aa3ac10cf536dabd458758af55/rootfs:/tmp/mesos/store/docker/layers/500e7eced838c4822a111abdb64fce8e7f3c0ecaf3d47157331b0cd30ebac4dc/rootfs:/tmp/mesos/store/docker/layers/c42d375c72b4e709bc0eeda368591277fa73836dfd5597fe98e2524c8587536e/rootfs:/tmp/mesos/store/docker/layers/34fa5867b8b0888ea3b718df9ad2925b8f7f50b6583b7cb

[jira] [Created] (MESOS-5999) Re-evaluate socket EOF semantics in libprocess

2016-08-05 Thread Greg Mann (JIRA)
Greg Mann created MESOS-5999:


 Summary: Re-evaluate socket EOF semantics in libprocess
 Key: MESOS-5999
 URL: https://issues.apache.org/jira/browse/MESOS-5999
 Project: Mesos
  Issue Type: Bug
  Components: libprocess
Reporter: Greg Mann


While debugging some issues related to libprocess 
finalization/reinitialization, [~bmahler] pointed out that libprocess doesn't 
strictly adhere to the expected behavior of Unix sockets after an EOF is 
received. If a socket receives EOF, this means only that the writer on the 
other end has closed the write end of its socket. However, the other end may 
still be interested in reading. Libprocess currently treats a received EOF as 
if {{shutdown()}} has been called on the socket, and both ends have been closed 
for both reading and writing (see 
[here|https://github.com/apache/mesos/blob/1.0.0/3rdparty/libprocess/src/libevent_ssl_socket.cpp#L349-L360]
 and 
[here|https://github.com/apache/mesos/blob/1.0.0/3rdparty/libprocess/src/process.cpp#L692-L697]).

We should consider changing the EOF semantics of the {{Socket}} object to more 
closely match those of Unix sockets.
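
The Unix half-close behavior described above can be demonstrated with a socket 
pair (a minimal Python sketch, not libprocess code):

```python
import socket

# A connected pair of Unix-domain sockets.
a, b = socket.socketpair()

# Half-close: 'a' shuts down only its write end (shutdown(fd, SHUT_WR)).
a.shutdown(socket.SHUT_WR)

# 'b' now reads EOF (an empty byte string)...
assert b.recv(16) == b""

# ...but the connection is only half closed: 'b' may still write
# and 'a' may still read.
b.sendall(b"reply")
assert a.recv(16) == b"reply"
```

A `Socket` matching these semantics would keep its read/write ends independent 
instead of treating a received EOF as a full shutdown.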



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (MESOS-5998) FINISHED task shown as Active in the UI

2016-08-05 Thread Greg Mann (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15410131#comment-15410131
 ] 

Greg Mann commented on MESOS-5998:
--

[~mgummelt], click on the "More" drop-down under the issue's title; you should 
see an "Attach Files" option there.

> FINISHED task shown as Active in the UI
> ---
>
> Key: MESOS-5998
> URL: https://issues.apache.org/jira/browse/MESOS-5998
> Project: Mesos
>  Issue Type: Bug
>  Components: webui
>Affects Versions: 1.0.0
>Reporter: Michael Gummelt
>
> http://mgummelt-mesos.s3.amazonaws.com/ui_screenshot.png





[jira] [Commented] (MESOS-5998) FINISHED task shown as Active in the UI

2016-08-05 Thread Michael Gummelt (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15410056#comment-15410056
 ] 

Michael Gummelt commented on MESOS-5998:


http://mgummelt-mesos.s3.amazonaws.com/ui_screenshot.png

> FINISHED task shown as Active in the UI
> ---
>
> Key: MESOS-5998
> URL: https://issues.apache.org/jira/browse/MESOS-5998
> Project: Mesos
>  Issue Type: Bug
>  Components: webui
>Affects Versions: 1.0.0
>Reporter: Michael Gummelt
>






[jira] [Issue Comment Deleted] (MESOS-5998) FINISHED task shown as Active in the UI

2016-08-05 Thread Michael Gummelt (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Gummelt updated MESOS-5998:
---
Comment: was deleted

(was: http://mgummelt-mesos.s3.amazonaws.com/ui_screenshot.png)

> FINISHED task shown as Active in the UI
> ---
>
> Key: MESOS-5998
> URL: https://issues.apache.org/jira/browse/MESOS-5998
> Project: Mesos
>  Issue Type: Bug
>  Components: webui
>Affects Versions: 1.0.0
>Reporter: Michael Gummelt
>
> http://mgummelt-mesos.s3.amazonaws.com/ui_screenshot.png





[jira] [Commented] (MESOS-5998) FINISHED task shown as Active in the UI

2016-08-05 Thread Michael Gummelt (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15410057#comment-15410057
 ] 

Michael Gummelt commented on MESOS-5998:


Can I add attachments to this JIRA?  I don't see how.

> FINISHED task shown as Active in the UI
> ---
>
> Key: MESOS-5998
> URL: https://issues.apache.org/jira/browse/MESOS-5998
> Project: Mesos
>  Issue Type: Bug
>  Components: webui
>Affects Versions: 1.0.0
>Reporter: Michael Gummelt
>
> http://mgummelt-mesos.s3.amazonaws.com/ui_screenshot.png





[jira] [Updated] (MESOS-5998) FINISHED task shown as Active in the UI

2016-08-05 Thread Michael Gummelt (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Gummelt updated MESOS-5998:
---
Description: http://mgummelt-mesos.s3.amazonaws.com/ui_screenshot.png

> FINISHED task shown as Active in the UI
> ---
>
> Key: MESOS-5998
> URL: https://issues.apache.org/jira/browse/MESOS-5998
> Project: Mesos
>  Issue Type: Bug
>  Components: webui
>Affects Versions: 1.0.0
>Reporter: Michael Gummelt
>
> http://mgummelt-mesos.s3.amazonaws.com/ui_screenshot.png





[jira] [Created] (MESOS-5998) FINISHED task shown as Active in the UI

2016-08-05 Thread Michael Gummelt (JIRA)
Michael Gummelt created MESOS-5998:
--

 Summary: FINISHED task shown as Active in the UI
 Key: MESOS-5998
 URL: https://issues.apache.org/jira/browse/MESOS-5998
 Project: Mesos
  Issue Type: Bug
  Components: webui
Affects Versions: 1.0.0
Reporter: Michael Gummelt








[jira] [Issue Comment Deleted] (MESOS-5028) Copy provisioner cannot replace directory with symlink

2016-08-05 Thread Zhitao Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhitao Li updated MESOS-5028:
-
Comment: was deleted

(was: I think I got the two locations of `ln -sf` wrong in the original 
description. I'm fixing it now.)

> Copy provisioner cannot replace directory with symlink
> --
>
> Key: MESOS-5028
> URL: https://issues.apache.org/jira/browse/MESOS-5028
> Project: Mesos
>  Issue Type: Bug
>  Components: containerization
>Reporter: Zhitao Li
>Assignee: Gilbert Song
>
> I'm trying to play with the new image provisioner on our custom docker 
> images, but one of the layers failed to get copied, possibly due to a dangling 
> symlink.
> Error log with Glog_v=1:
> {quote}
> I0324 05:42:48.926678 15067 copy.cpp:127] Copying layer path 
> '/tmp/mesos/store/docker/layers/5df0888641196b88dcc1b97d04c74839f02a73b8a194a79e134426d6a8fcb0f1/rootfs'
>  to rootfs 
> '/var/lib/mesos/provisioner/containers/5f05be6c-c970-4539-aa64-fd0eef2ec7ae/backends/copy/rootfses/507173f3-e316-48a3-a96e-5fdea9ffe9f6'
> E0324 05:42:49.028506 15062 slave.cpp:3773] Container 
> '5f05be6c-c970-4539-aa64-fd0eef2ec7ae' for executor 'test' of framework 
> 75932a89-1514-4011-bafe-beb6a208bb2d-0004 failed to start: Collect failed: 
> Collect failed: Failed to copy layer: cp: cannot overwrite directory 
> ‘/var/lib/mesos/provisioner/containers/5f05be6c-c970-4539-aa64-fd0eef2ec7ae/backends/copy/rootfses/507173f3-e316-48a3-a96e-5fdea9ffe9f6/etc/apt’
>  with non-directory
> {quote}
> Content of 
> _/tmp/mesos/store/docker/layers/5df0888641196b88dcc1b97d04c74839f02a73b8a194a79e134426d6a8fcb0f1/rootfs/etc/apt_
>  points to a non-existing absolute path (cannot provide exact path but it's a 
> result of us trying to mount apt keys into docker container at build time).
> I believe what happened is that we executed a script at build time, which 
> contains the equivalent of:
> {quote}
> rm -rf /etc/apt/* && ln -sf /build-mount-point/ /etc/apt
> {quote}





[jira] [Commented] (MESOS-5028) Copy provisioner cannot replace directory with symlink

2016-08-05 Thread Zhitao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15410024#comment-15410024
 ] 

Zhitao Li commented on MESOS-5028:
--

I think I got the two locations of `ln -sf` wrong in the original description. 
I'm fixing it now.

> Copy provisioner cannot replace directory with symlink
> --
>
> Key: MESOS-5028
> URL: https://issues.apache.org/jira/browse/MESOS-5028
> Project: Mesos
>  Issue Type: Bug
>  Components: containerization
>Reporter: Zhitao Li
>Assignee: Gilbert Song
>
> I'm trying to play with the new image provisioner on our custom docker 
> images, but one of the layers failed to get copied, possibly due to a dangling 
> symlink.
> Error log with Glog_v=1:
> {quote}
> I0324 05:42:48.926678 15067 copy.cpp:127] Copying layer path 
> '/tmp/mesos/store/docker/layers/5df0888641196b88dcc1b97d04c74839f02a73b8a194a79e134426d6a8fcb0f1/rootfs'
>  to rootfs 
> '/var/lib/mesos/provisioner/containers/5f05be6c-c970-4539-aa64-fd0eef2ec7ae/backends/copy/rootfses/507173f3-e316-48a3-a96e-5fdea9ffe9f6'
> E0324 05:42:49.028506 15062 slave.cpp:3773] Container 
> '5f05be6c-c970-4539-aa64-fd0eef2ec7ae' for executor 'test' of framework 
> 75932a89-1514-4011-bafe-beb6a208bb2d-0004 failed to start: Collect failed: 
> Collect failed: Failed to copy layer: cp: cannot overwrite directory 
> ‘/var/lib/mesos/provisioner/containers/5f05be6c-c970-4539-aa64-fd0eef2ec7ae/backends/copy/rootfses/507173f3-e316-48a3-a96e-5fdea9ffe9f6/etc/apt’
>  with non-directory
> {quote}
> Content of 
> _/tmp/mesos/store/docker/layers/5df0888641196b88dcc1b97d04c74839f02a73b8a194a79e134426d6a8fcb0f1/rootfs/etc/apt_
>  points to a non-existing absolute path (cannot provide exact path but it's a 
> result of us trying to mount apt keys into docker container at build time).
> I believe what happened is that we executed a script at build time, which 
> contains the equivalent of:
> {quote}
> rm -rf /etc/apt/* && ln -sf /build-mount-point/ /etc/apt
> {quote}





[jira] [Commented] (MESOS-5028) Copy provisioner cannot replace directory with symlink

2016-08-05 Thread Zhitao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15410004#comment-15410004
 ] 

Zhitao Li commented on MESOS-5028:
--

The error looks like this:

{quote}
E0805 20:03:32.337234 72361 slave.cpp:4029] Container 
'a5633e96-e9b5-4d29-a55e-fd316c28943b' for executor 'test2' of framework 
662655e7-1b0a-4873-9307-f908fb96bc00- failed to start: Collect failed: 
Failed to copy layer: cp: cannot overwrite directory 
‘/var/lib/mesos/provisioner/containers/a5633e96-e9b5-4d29-a55e-fd316c28943b/backends/copy/rootfses/b3286fd7-1c43-406e-85d7-170cb385a480/etc/cirros’
 with non-directory
{quote}

> Copy provisioner cannot replace directory with symlink
> --
>
> Key: MESOS-5028
> URL: https://issues.apache.org/jira/browse/MESOS-5028
> Project: Mesos
>  Issue Type: Bug
>  Components: containerization
>Reporter: Zhitao Li
>Assignee: Gilbert Song
>
> I'm trying to play with the new image provisioner on our custom docker 
> images, but one of the layers failed to get copied, possibly due to a dangling 
> symlink.
> Error log with Glog_v=1:
> {quote}
> I0324 05:42:48.926678 15067 copy.cpp:127] Copying layer path 
> '/tmp/mesos/store/docker/layers/5df0888641196b88dcc1b97d04c74839f02a73b8a194a79e134426d6a8fcb0f1/rootfs'
>  to rootfs 
> '/var/lib/mesos/provisioner/containers/5f05be6c-c970-4539-aa64-fd0eef2ec7ae/backends/copy/rootfses/507173f3-e316-48a3-a96e-5fdea9ffe9f6'
> E0324 05:42:49.028506 15062 slave.cpp:3773] Container 
> '5f05be6c-c970-4539-aa64-fd0eef2ec7ae' for executor 'test' of framework 
> 75932a89-1514-4011-bafe-beb6a208bb2d-0004 failed to start: Collect failed: 
> Collect failed: Failed to copy layer: cp: cannot overwrite directory 
> ‘/var/lib/mesos/provisioner/containers/5f05be6c-c970-4539-aa64-fd0eef2ec7ae/backends/copy/rootfses/507173f3-e316-48a3-a96e-5fdea9ffe9f6/etc/apt’
>  with non-directory
> {quote}
> Content of 
> _/tmp/mesos/store/docker/layers/5df0888641196b88dcc1b97d04c74839f02a73b8a194a79e134426d6a8fcb0f1/rootfs/etc/apt_
>  points to a non-existing absolute path (cannot provide exact path but it's a 
> result of us trying to mount apt keys into docker container at build time).
> I believe what happened is that we executed a script at build time, which 
> contains the equivalent of:
> {quote}
> rm -rf /etc/apt/* && ln -sf /build-mount-point/ /etc/apt
> {quote}





[jira] [Commented] (MESOS-5028) Copy provisioner cannot replace directory with symlink

2016-08-05 Thread Zhitao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409992#comment-15409992
 ] 

Zhitao Li commented on MESOS-5028:
--

[~gilbert], I managed to reproduce with this Dockerfile:

```
FROM cirros
RUN rm -rf /etc/cirros && ln -sf /tmp /etc/cirros
```
This image does not provision with the copy backend on my machine.
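
The failure mode can be reproduced and worked around in plain Python. This is a 
sketch of the fix a copy backend would need (not Mesos's actual implementation): 
when the layer carries a symlink where the rootfs has a directory, remove the 
directory before recreating the link, which is the step plain `cp -a` refuses.

```python
import os
import shutil
import tempfile

def copy_layer(layer, rootfs):
    """Copy a layer onto rootfs, letting a symlink in the layer replace
    an existing directory -- the case where `cp` fails with
    'cannot overwrite directory ... with non-directory'."""
    for src_dir, dirs, files in os.walk(layer):
        rel = os.path.relpath(src_dir, layer)
        dst_dir = rootfs if rel == "." else os.path.join(rootfs, rel)
        os.makedirs(dst_dir, exist_ok=True)
        # Dangling symlinks (like one to an unmounted build path) show
        # up in `files`; symlinks to real directories would need the
        # same treatment in `dirs`.
        for name in files:
            src = os.path.join(src_dir, name)
            dst = os.path.join(dst_dir, name)
            if os.path.islink(src):
                # The fix: drop whatever sits at dst (even a whole
                # directory) before recreating the layer's symlink.
                if os.path.isdir(dst) and not os.path.islink(dst):
                    shutil.rmtree(dst)
                elif os.path.lexists(dst):
                    os.remove(dst)
                os.symlink(os.readlink(src), dst)
            else:
                shutil.copy2(src, dst)

# Reproduce the scenario: rootfs has /etc/apt as a directory, the
# layer replaces it with a dangling symlink.
tmp = tempfile.mkdtemp()
rootfs = os.path.join(tmp, "rootfs")
layer = os.path.join(tmp, "layer")
os.makedirs(os.path.join(rootfs, "etc", "apt"))
os.makedirs(os.path.join(layer, "etc"))
os.symlink("/build-mount-point/", os.path.join(layer, "etc", "apt"))

copy_layer(layer, rootfs)
assert os.path.islink(os.path.join(rootfs, "etc", "apt"))
```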

> Copy provisioner cannot replace directory with symlink
> --
>
> Key: MESOS-5028
> URL: https://issues.apache.org/jira/browse/MESOS-5028
> Project: Mesos
>  Issue Type: Bug
>  Components: containerization
>Reporter: Zhitao Li
>Assignee: Gilbert Song
>
> I'm trying to play with the new image provisioner on our custom docker 
> images, but one of the layers failed to get copied, possibly due to a dangling 
> symlink.
> Error log with Glog_v=1:
> {quote}
> I0324 05:42:48.926678 15067 copy.cpp:127] Copying layer path 
> '/tmp/mesos/store/docker/layers/5df0888641196b88dcc1b97d04c74839f02a73b8a194a79e134426d6a8fcb0f1/rootfs'
>  to rootfs 
> '/var/lib/mesos/provisioner/containers/5f05be6c-c970-4539-aa64-fd0eef2ec7ae/backends/copy/rootfses/507173f3-e316-48a3-a96e-5fdea9ffe9f6'
> E0324 05:42:49.028506 15062 slave.cpp:3773] Container 
> '5f05be6c-c970-4539-aa64-fd0eef2ec7ae' for executor 'test' of framework 
> 75932a89-1514-4011-bafe-beb6a208bb2d-0004 failed to start: Collect failed: 
> Collect failed: Failed to copy layer: cp: cannot overwrite directory 
> ‘/var/lib/mesos/provisioner/containers/5f05be6c-c970-4539-aa64-fd0eef2ec7ae/backends/copy/rootfses/507173f3-e316-48a3-a96e-5fdea9ffe9f6/etc/apt’
>  with non-directory
> {quote}
> Content of 
> _/tmp/mesos/store/docker/layers/5df0888641196b88dcc1b97d04c74839f02a73b8a194a79e134426d6a8fcb0f1/rootfs/etc/apt_
>  points to a non-existing absolute path (cannot provide exact path but it's a 
> result of us trying to mount apt keys into docker container at build time).
> I believe what happened is that we executed a script at build time, which 
> contains the equivalent of:
> {quote}
> rm -rf /etc/apt/* && ln -sf /build-mount-point/ /etc/apt
> {quote}





[jira] [Commented] (MESOS-5929) Total cluster resources on master Mesos UI should have better spacing.

2016-08-05 Thread Charles Allen (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409987#comment-15409987
 ] 

Charles Allen commented on MESOS-5929:
--

Thanks!

> Total cluster resources on master Mesos UI should have better spacing.
> --
>
> Key: MESOS-5929
> URL: https://issues.apache.org/jira/browse/MESOS-5929
> Project: Mesos
>  Issue Type: Wish
>  Components: webui
>Affects Versions: 0.28.2
>Reporter: Charles Allen
>Assignee: Charles Allen
> Fix For: 1.1.0
>
> Attachments: Screen Shot 2016-07-29 at 9.45.25 AM.png
>
>
> The display of total cluster resources formats oddly even when there are 
> only a few terabytes of memory and disk across a cluster. I'll try to attach 
> a screenshot shortly.
> The ask is that the data be presented more cleanly.
> One approach could be to scale the numbers to appropriate units.
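
The scaling suggestion above, sketched in Python (the web UI itself is 
JavaScript; the function name and binary unit boundaries are assumptions):

```python
def humanize(mb):
    """Scale a resource quantity (given in MB) to a readable unit --
    one possible approach to the scaling suggested above."""
    units = ["MB", "GB", "TB", "PB"]
    value = float(mb)
    for unit in units:
        # Stop once the value is small enough, or we run out of units.
        if value < 1024 or unit == units[-1]:
            return "%.1f %s" % (value, unit)
        value /= 1024

assert humanize(512) == "512.0 MB"
assert humanize(3 * 1024 * 1024) == "3.0 TB"
```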





[jira] [Commented] (MESOS-4823) Implement port forwarding in `network/cni` isolator

2016-08-05 Thread Avinash Sridharan (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409942#comment-15409942
 ] 

Avinash Sridharan commented on MESOS-4823:
--

[~leecalcote] the `network/cni` isolator is used only within the 
`MesosContainerizer`, which is a container runtime of its own. In short, if you 
are using the `network/cni` isolator, you wouldn't be using it with a container 
runtime outside Mesos. 

> Implement port forwarding in `network/cni` isolator
> ---
>
> Key: MESOS-4823
> URL: https://issues.apache.org/jira/browse/MESOS-4823
> Project: Mesos
>  Issue Type: Task
>  Components: containerization
> Environment: linux
>Reporter: Avinash Sridharan
>Assignee: Avinash Sridharan
>Priority: Critical
>  Labels: mesosphere
>
> Most Docker and appc images wish to expose the ports their micro-services 
> listen on to the outside world. When containers run on bridged (or ptp) 
> networking, this can be achieved by installing port forwarding rules on the 
> agent (using iptables). This can be done in the `network/cni` isolator. 
> The reason we would like this functionality implemented in the 
> `network/cni` isolator, and not in a CNI plugin, is that the specifications 
> currently do not support specifying port forwarding rules. Further, to 
> install these rules the isolator needs two pieces of information: the exposed 
> ports and the IP address associated with the container. Both are available 
> to the isolator.
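
As a sketch, the iptables rules described in the issue might be built like this 
(illustrative only; the rule shapes and the helper name are assumptions, not 
Mesos's actual implementation):

```python
def port_forward_rules(agent_ip, host_port, container_ip, container_port):
    """Build iptables commands forwarding a host port to a container."""
    return [
        # NAT inbound traffic on the agent's port to the container.
        ["iptables", "-t", "nat", "-A", "PREROUTING",
         "-p", "tcp", "-d", agent_ip, "--dport", str(host_port),
         "-j", "DNAT",
         "--to-destination", "%s:%d" % (container_ip, container_port)],
        # Let the rewritten traffic be forwarded to the container.
        ["iptables", "-A", "FORWARD", "-p", "tcp",
         "-d", container_ip, "--dport", str(container_port),
         "-j", "ACCEPT"],
    ]

rules = port_forward_rules("172.30.2.138", 31000, "172.16.0.2", 80)
assert rules[0][-1] == "172.16.0.2:80"
```

Note how both inputs the isolator needs (exposed ports and the container's IP) 
appear in the rules, which is why the isolator rather than the CNI plugin was 
proposed to install them.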





[jira] [Commented] (MESOS-4823) Implement port forwarding in `network/cni` isolator

2016-08-05 Thread Avinash Sridharan (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-4823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409943#comment-15409943
 ] 

Avinash Sridharan commented on MESOS-4823:
--

[~edua...@plumgrid.com] good point. We are actually coming around to this idea 
as well. I am going to go ahead and close this JIRA and send out a proposal on 
implementing port forwarding as a CNI plugin shortly. 

> Implement port forwarding in `network/cni` isolator
> ---
>
> Key: MESOS-4823
> URL: https://issues.apache.org/jira/browse/MESOS-4823
> Project: Mesos
>  Issue Type: Task
>  Components: containerization
> Environment: linux
>Reporter: Avinash Sridharan
>Assignee: Avinash Sridharan
>Priority: Critical
>  Labels: mesosphere
>
> Most Docker and appc images wish to expose the ports their micro-services 
> listen on to the outside world. When containers run on bridged (or ptp) 
> networking, this can be achieved by installing port forwarding rules on the 
> agent (using iptables). This can be done in the `network/cni` isolator. 
> The reason we would like this functionality implemented in the 
> `network/cni` isolator, and not in a CNI plugin, is that the specifications 
> currently do not support specifying port forwarding rules. Further, to 
> install these rules the isolator needs two pieces of information: the exposed 
> ports and the IP address associated with the container. Both are available 
> to the isolator.





[jira] [Assigned] (MESOS-5929) Total cluster resources on master Mesos UI should have better spacing.

2016-08-05 Thread Charles Allen (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Allen reassigned MESOS-5929:


Assignee: Charles Allen

> Total cluster resources on master Mesos UI should have better spacing.
> --
>
> Key: MESOS-5929
> URL: https://issues.apache.org/jira/browse/MESOS-5929
> Project: Mesos
>  Issue Type: Wish
>  Components: webui
>Affects Versions: 0.28.2
>Reporter: Charles Allen
>Assignee: Charles Allen
> Attachments: Screen Shot 2016-07-29 at 9.45.25 AM.png
>
>
> The display of total cluster resources formats oddly even when there are 
> only a few terabytes of memory and disk across a cluster. I'll try to attach 
> a screenshot shortly.
> The ask is that the data be presented more cleanly.
> One approach could be to scale the numbers to appropriate units.





[jira] [Updated] (MESOS-5995) Protobuf JSON deserialisation does not accept numbers formated as strings

2016-08-05 Thread Joseph Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Wu updated MESOS-5995:
-
Shepherd: Joseph Wu
Story Points: 1

> Protobuf JSON deserialisation does not accept numbers formated as strings
> -
>
> Key: MESOS-5995
> URL: https://issues.apache.org/jira/browse/MESOS-5995
> Project: Mesos
>  Issue Type: Bug
>  Components: HTTP API
>Affects Versions: 1.0.0
>Reporter: Tomasz Janiszewski
>Assignee: Tomasz Janiszewski
>Priority: Minor
>
> Proto2 does not specify JSON mappings, but 
> [Proto3|https://developers.google.com/protocol-buffers/docs/proto3#json] does, 
> and it recommends mapping 64-bit numbers as strings. Unfortunately, Mesos does 
> not accept strings in place of uint64 and returns 400 Bad Request:
> {quote}
> Request error Failed to convert JSON into Call protobuf: Not expecting a JSON 
> string for field 'value'.
> {quote}
> Is this on purpose, or is this a bug?
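
The lenient behavior the report asks for can be sketched in a few lines of 
Python (a sketch of the coercion, not Mesos's actual C++ parser; the helper 
name is made up):

```python
def parse_uint64(value):
    """Accept a uint64 from JSON as either a number or a proto3-style string."""
    if isinstance(value, str):
        # proto3's JSON mapping emits 64-bit integers as strings.
        value = int(value)
    if isinstance(value, bool) or not isinstance(value, int) \
            or not (0 <= value < 2 ** 64):
        raise ValueError("not a uint64: %r" % (value,))
    return value

assert parse_uint64(42) == 42
# Values above 2^53 survive only as strings in standard JSON numbers,
# which is why proto3 recommends the string encoding.
assert parse_uint64("9007199254740993") == 9007199254740993
```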





[jira] [Commented] (MESOS-5991) Support running docker daemon inside a container using unified containerizer.

2016-08-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/MESOS-5991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409722#comment-15409722
 ] 

Stéphane Cottin commented on MESOS-5991:


It runs as a simple marathon task, without any specific configuration.

The only trick was to have the docker folder as a mounted volume, formatted 
with ext4.
I can't make it run on xfs, even on an external volume.
It seems related to overlayfs; I don't know if nested volumes are possible.

The following isolators are activated:
namespaces/pid,cgroups/cpu,cgroups/mem,filesystem/linux,docker/runtime,network/cni,docker/volume

kernel 4.6 from debian jessie backports.


> Support running docker daemon inside a container using unified containerizer.
> -
>
> Key: MESOS-5991
> URL: https://issues.apache.org/jira/browse/MESOS-5991
> Project: Mesos
>  Issue Type: Epic
>Reporter: Jie Yu
>
> The goal is to develop the necessary pieces in the unified containerizer so 
> that a framework can launch a full-fledged docker daemon in a container.
> This will be useful for frameworks like Jenkins. The Jenkins job can still 
> use the docker CLI to do builds (e.g., `docker build`, `docker push`), but 
> we don't have to install the docker daemon on the host anymore.
> It looks like LXD already supports this and is pretty stable for some users. 
> We should investigate what features are missing in the unified containerizer 
> to match what LXD has. Will track all the 
> dependencies in this ticket.
> https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
> Cgroups and user namespaces support are definitely missing pieces.





[jira] [Updated] (MESOS-5930) Orphan tasks can show up as running after they have finished.

2016-08-05 Thread Anand Mazumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anand Mazumdar updated MESOS-5930:
--
  Sprint: Mesosphere Sprint 40
Story Points: 3
  Labels: mesosphere  (was: )

> Orphan tasks can show up as running after they have finished.
> -
>
> Key: MESOS-5930
> URL: https://issues.apache.org/jira/browse/MESOS-5930
> Project: Mesos
>  Issue Type: Bug
>  Components: master
>Affects Versions: 1.0.0
>Reporter: Lukas Loesche
>Assignee: Anand Mazumdar
>  Labels: mesosphere
> Fix For: 1.0.1, 1.1.0
>
> Attachments: Screen Shot 2016-07-29 at 19.23.49.png, Screen Shot 
> 2016-07-29 at 19.24.03.png, orphan-running.txt
>
>
> On my cluster I have 111 Orphan Tasks, of which some are RUNNING, some are 
> FINISHED, and some are FAILED. When I open the task details for a FINISHED 
> task, the details page shows a state of TASK_FINISHED, and likewise when I 
> open a FAILED task the details page shows TASK_FAILED.
> However when I open the details for the RUNNING tasks they all have a task 
> state of TASK_FINISHED. None of them is in state TASK_RUNNING.





[jira] [Commented] (MESOS-5991) Support running docker daemon inside a container using unified containerizer.

2016-08-05 Thread Jie Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409697#comment-15409697
 ] 

Jie Yu commented on MESOS-5991:
---

I see, interesting! I'll do some experiments as well. So presumably it requires 
'root' permission?

> Support running docker daemon inside a container using unified containerizer.
> -
>
> Key: MESOS-5991
> URL: https://issues.apache.org/jira/browse/MESOS-5991
> Project: Mesos
>  Issue Type: Epic
>Reporter: Jie Yu
>
> The goal is to develop the necessary pieces in the unified containerizer so that 
> a framework can launch a full-fledged docker daemon in a container.
> This will be useful for frameworks like Jenkins. The Jenkins job can still 
> use the docker CLI to do builds (e.g., `docker build`, `docker push`), but we 
> no longer have to install the docker daemon on the host.
> It looks like LXD already supports this and is pretty stable for some users. We 
> should do some investigation to see which features are missing in the unified 
> containerizer to match what LXD has. Will track all the 
> dependencies in this ticket.
> https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
> Cgroups and user namespaces support are definitely missing pieces.





[jira] [Commented] (MESOS-5991) Support running docker daemon inside a container using unified containerizer.

2016-08-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/MESOS-5991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409690#comment-15409690
 ] 

Stéphane Cottin commented on MESOS-5991:


Bundled inside the image; docker is not installed on the host.

> Support running docker daemon inside a container using unified containerizer.
> -
>
> Key: MESOS-5991
> URL: https://issues.apache.org/jira/browse/MESOS-5991
> Project: Mesos
>  Issue Type: Epic
>Reporter: Jie Yu
>
> The goal is to develop the necessary pieces in the unified containerizer so that 
> a framework can launch a full-fledged docker daemon in a container.
> This will be useful for frameworks like Jenkins. The Jenkins job can still 
> use the docker CLI to do builds (e.g., `docker build`, `docker push`), but we 
> no longer have to install the docker daemon on the host.
> It looks like LXD already supports this and is pretty stable for some users. We 
> should do some investigation to see which features are missing in the unified 
> containerizer to match what LXD has. Will track all the 
> dependencies in this ticket.
> https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
> Cgroups and user namespaces support are definitely missing pieces.





[jira] [Commented] (MESOS-5991) Support running docker daemon inside a container using unified containerizer.

2016-08-05 Thread Jie Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409686#comment-15409686
 ] 

Jie Yu commented on MESOS-5991:
---

Just to clarify: does "killercentury/jenkins-dind" bundle the docker daemon inside 
the image, or just use the daemon on the host?

If the former, I'd be surprised if it were stable, because multiple images that 
bundle a docker daemon will certainly conflict with each other (competing on 
cgroups).

> Support running docker daemon inside a container using unified containerizer.
> -
>
> Key: MESOS-5991
> URL: https://issues.apache.org/jira/browse/MESOS-5991
> Project: Mesos
>  Issue Type: Epic
>Reporter: Jie Yu
>
> The goal is to develop the necessary pieces in the unified containerizer so that 
> a framework can launch a full-fledged docker daemon in a container.
> This will be useful for frameworks like Jenkins. The Jenkins job can still 
> use the docker CLI to do builds (e.g., `docker build`, `docker push`), but we 
> no longer have to install the docker daemon on the host.
> It looks like LXD already supports this and is pretty stable for some users. We 
> should do some investigation to see which features are missing in the unified 
> containerizer to match what LXD has. Will track all the 
> dependencies in this ticket.
> https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
> Cgroups and user namespaces support are definitely missing pieces.





[jira] [Commented] (MESOS-5991) Support running docker daemon inside a container using unified containerizer.

2016-08-05 Thread Qian Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409645#comment-15409645
 ] 

Qian Zhang commented on MESOS-5991:
---

Thanks [~kaalh], so it looks like "docker build" and "docker push" can work 
normally when the Docker daemon is running in a unified container.

[~jieyu], can you please elaborate on what the specific issues are when running 
the Docker daemon in a unified container? I'd like to try to reproduce them in my 
test env.

And I have tried to run the Docker daemon in an LXD container; it does not seem 
as stable as we thought. I just followed the steps in this link 
https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/ to launch an LXD 
container and install the Docker daemon in it; then I could run Docker containers 
in it, and everything was good. But after I reconfigured the LXD container as a 
privileged container and restarted it, I found I could not run Docker 
containers in it anymore:
{code}
root@docker:~# docker run -it busybox /bin/sh
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox

8ddc19f16526: Pull complete
Digest: sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6
Status: Downloaded newer image for busybox:latest
docker: Error response from daemon: Cannot start container 
91de8306d177670453d0831b830807516b0863c13c8a6f5325a32fde6baa0835: [10] System 
error: write 
/sys/fs/cgroup/devices/docker/91de8306d177670453d0831b830807516b0863c13c8a6f5325a32fde6baa0835/devices.allow:
 operation not permitted.
{code}

And when I changed the LXD container back to unprivileged and restarted it, 
this time I found the Docker daemon could not even be started:
{code}
Aug 05 09:44:40 docker systemd[1]: Starting Docker Application Container Engine...
Aug 05 09:44:40 docker docker[327]: time="2016-08-05T09:44:40.805938409Z" 
level=error msg="[graphdriver] prior storage driver \"aufs\" failed: driver not 
supported"
Aug 05 09:44:40 docker docker[327]: time="2016-08-05T09:44:40.806319580Z" 
level=fatal msg="Error starting daemon: error initializing graphdriver: driver 
not supported"
Aug 05 09:44:40 docker systemd[1]: docker.service: Main process exited, 
code=exited, status=1/FAILURE
Aug 05 09:44:40 docker systemd[1]: Failed to start Docker Application Container 
Engine.
Aug 05 09:44:40 docker systemd[1]: docker.service: Unit entered failed state.
Aug 05 09:44:40 docker systemd[1]: docker.service: Failed with result 
'exit-code'.
{code}

> Support running docker daemon inside a container using unified containerizer.
> -
>
> Key: MESOS-5991
> URL: https://issues.apache.org/jira/browse/MESOS-5991
> Project: Mesos
>  Issue Type: Epic
>Reporter: Jie Yu
>
> The goal is to develop the necessary pieces in the unified containerizer so that 
> a framework can launch a full-fledged docker daemon in a container.
> This will be useful for frameworks like Jenkins. The Jenkins job can still 
> use the docker CLI to do builds (e.g., `docker build`, `docker push`), but we 
> no longer have to install the docker daemon on the host.
> It looks like LXD already supports this and is pretty stable for some users. We 
> should do some investigation to see which features are missing in the unified 
> containerizer to match what LXD has. Will track all the 
> dependencies in this ticket.
> https://www.stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/
> Cgroups and user namespaces support are definitely missing pieces.





[jira] [Created] (MESOS-5997) Fail HTTPS health checks if the task cert can't be validated

2016-08-05 Thread JIRA
Gastón Kleiman created MESOS-5997:
-

 Summary: Fail HTTPS health checks if the task cert can't be 
validated
 Key: MESOS-5997
 URL: https://issues.apache.org/jira/browse/MESOS-5997
 Project: Mesos
  Issue Type: Improvement
Reporter: Gastón Kleiman
Priority: Minor


Marathon doesn't validate the task's cert when performing an HTTPS health check.

We should, however, consider whether it makes sense to add an option to make the 
Mesos health check process validate it.
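As a concrete illustration of what "validate the cert" would mean for such a health check, here is a minimal standalone sketch in Python (a hypothetical checker using only the standard library, not Mesos or Marathon code):

```python
import ssl
import urllib.request

def https_health_check(url, validate_cert=True, timeout=5.0):
    """Return True if `url` answers an HTTPS GET with a 2xx/3xx status.

    With validate_cert=True, the server certificate must chain to a
    trusted CA and match the hostname; with False, validation is
    skipped (roughly Marathon's current behavior per this ticket).
    """
    ctx = ssl.create_default_context()
    if not validate_cert:
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # accept any certificate
    try:
        with urllib.request.urlopen(url, timeout=timeout, context=ctx) as resp:
            return 200 <= resp.status < 400
    except OSError:  # covers DNS errors, TLS failures, HTTP errors, timeouts
        return False
```

A task serving a self-signed certificate would then pass the check only with `validate_cert=False`, which is exactly the trade-off the proposed option would expose.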





[jira] [Commented] (MESOS-5995) Protobuf JSON deserialisation does not accept numbers formated as strings

2016-08-05 Thread Tomasz Janiszewski (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409522#comment-15409522
 ] 

Tomasz Janiszewski commented on MESOS-5995:
---

Review: https://reviews.apache.org/r/50851/

> Protobuf JSON deserialisation does not accept numbers formated as strings
> -
>
> Key: MESOS-5995
> URL: https://issues.apache.org/jira/browse/MESOS-5995
> Project: Mesos
>  Issue Type: Bug
>  Components: HTTP API
>Affects Versions: 1.0.0
>Reporter: Tomasz Janiszewski
>Assignee: Tomasz Janiszewski
>Priority: Minor
>
> Proto2 does not specify JSON mappings, but 
> [Proto3|https://developers.google.com/protocol-buffers/docs/proto3#json] does, 
> and it recommends mapping 64-bit numbers to strings. Unfortunately, Mesos does 
> not accept strings in place of uint64 fields and returns 400 Bad Request:
> {quote}
> Request error Failed to convert JSON into Call protobuf: Not expecting a JSON 
> string for field 'value'.
> {quote}
> Is this on purpose, or is this a bug?





[jira] [Updated] (MESOS-5996) Windows mesos-containerizer crashes

2016-08-05 Thread Lior Zeno (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lior Zeno updated MESOS-5996:
-
Description: 
I've been trying to run a Mesos cluster with a Windows agent. However, I can't 
run a task on Windows, since the container always fails with the following 
message: "failed to parse the command flag", followed by a JSON blob. I don't 
have the exact message right now, but I'll update the ticket with it as soon as 
I have it. The JSON did not have a command key and did not contain the command 
I was trying to run (notepad).

I followed the instructions in the getting started section and cloned the 
following repository: https://git-wip-us.apache.org/repos/asf/mesos.git. I did 
not use the 1.0 release tarball since it does not include the bootstrap batch 
script for Windows.

Steps to reproduce:
# Start a Mesos cluster with 4 nodes (3 Ubuntu 14.04 LTS nodes and 1 Windows 
Server 2012 R2 node).
# Submit an application via Marathon, using a hostname constraint, with 
"notepad" as the command.


  was:
I've been trying to run a Mesos cluster with a Windows agent. However, I can't 
run a task on Windows, since the container always fails with the following 
message: "failed to parse the command flag", followed by a JSON blob. I don't 
have the exact message right now, but I'll update the ticket with it as soon as 
I have it. The JSON did not have a command key and did not contain the command 
I was trying to run (notepad).

I followed the instructions in the getting started section and cloned the 
following repository: https://git-wip-us.apache.org/repos/asf/mesos.git. I did 
not use 1.0 since it did not include the bootstrap batch script for Windows.

Steps to reproduce:
# Start a Mesos cluster with 4 nodes (3 Ubuntu 14.04 LTS nodes and 1 Windows 
Server 2012 R2 node).
# Submit an application via Marathon, using a hostname constraint, with 
"notepad" as the command.



> Windows mesos-containerizer crashes
> ---
>
> Key: MESOS-5996
> URL: https://issues.apache.org/jira/browse/MESOS-5996
> Project: Mesos
>  Issue Type: Bug
>  Components: containerization
> Environment: Windows Server 2012 R2
> Marathon 1.2.0 RC6
>Reporter: Lior Zeno
>  Labels: windows
>
> I've been trying to run a Mesos cluster with a Windows agent. However, I 
> can't run a task on Windows, since the container always fails with the 
> following message: "failed to parse the command flag", followed by a JSON 
> blob. I don't have the exact message right now, but I'll update the ticket 
> with it as soon as I have it. The JSON did not have a command key and did 
> not contain the command I was trying to run (notepad).
> I followed the instructions in the getting started section and cloned the 
> following repository: https://git-wip-us.apache.org/repos/asf/mesos.git. I 
> did not use the 1.0 release tarball since it does not include the bootstrap 
> batch script for Windows.
> Steps to reproduce:
> # Start a Mesos cluster with 4 nodes (3 Ubuntu 14.04 LTS nodes and 1 Windows 
> Server 2012 R2 node).
> # Submit an application via Marathon, using a hostname constraint, with 
> "notepad" as the command.





[jira] [Updated] (MESOS-5996) Windows mesos-containerizer crashes

2016-08-05 Thread Lior Zeno (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lior Zeno updated MESOS-5996:
-
Description: 
I've been trying to run a Mesos cluster with a Windows agent. However, I can't 
run a task on Windows, since the container always fails with the following 
message: "failed to parse the command flag", followed by a JSON blob. I don't 
have the exact message right now, but I'll update the ticket with it as soon as 
I have it. The JSON did not have a command key and did not contain the command 
I was trying to run (notepad).

I followed the instructions in the getting started section and cloned the 
following repository: https://git-wip-us.apache.org/repos/asf/mesos.git. I did 
not use 1.0 since it did not include the bootstrap batch script for Windows.

Steps to reproduce:
# Start a Mesos cluster with 4 nodes (3 Ubuntu 14.04 LTS nodes and 1 Windows 
Server 2012 R2 node).
# Submit an application via Marathon, using a hostname constraint, with 
"notepad" as the command.


  was:
I've been trying to run a Mesos cluster with a Windows agent. However, I can't 
run a task on Windows, since the container always fails with the following 
message: "failed to parse the command flag", followed by a JSON blob. I don't 
have the exact message right now, but I'll update the ticket with it as soon as 
I have it. 

The JSON did not have a command key and did not contain the command I was 
trying to run (notepad).

Steps to reproduce:
# Start a Mesos cluster with 4 nodes (3 Ubuntu 14.04 LTS nodes and 1 Windows 
Server 2012 R2 node).
# Submit an application via Marathon, using a hostname constraint, with 
"notepad" as the command.



> Windows mesos-containerizer crashes
> ---
>
> Key: MESOS-5996
> URL: https://issues.apache.org/jira/browse/MESOS-5996
> Project: Mesos
>  Issue Type: Bug
>  Components: containerization
> Environment: Windows Server 2012 R2
> Marathon 1.2.0 RC6
>Reporter: Lior Zeno
>  Labels: windows
>
> I've been trying to run a Mesos cluster with a Windows agent. However, I 
> can't run a task on Windows, since the container always fails with the 
> following message: "failed to parse the command flag", followed by a JSON 
> blob. I don't have the exact message right now, but I'll update the ticket 
> with it as soon as I have it. The JSON did not have a command key and did 
> not contain the command I was trying to run (notepad).
> I followed the instructions in the getting started section and cloned the 
> following repository: https://git-wip-us.apache.org/repos/asf/mesos.git. I 
> did not use 1.0 since it did not include the bootstrap batch script for 
> Windows.
> Steps to reproduce:
> # Start a Mesos cluster with 4 nodes (3 Ubuntu 14.04 LTS nodes and 1 Windows 
> Server 2012 R2 node).
> # Submit an application via Marathon, using a hostname constraint, with 
> "notepad" as the command.





[jira] [Created] (MESOS-5996) Windows mesos-containerizer crashes

2016-08-05 Thread Lior Zeno (JIRA)
Lior Zeno created MESOS-5996:


 Summary: Windows mesos-containerizer crashes
 Key: MESOS-5996
 URL: https://issues.apache.org/jira/browse/MESOS-5996
 Project: Mesos
  Issue Type: Bug
  Components: containerization
 Environment: Windows Server 2012 R2
Marathon 1.2.0 RC6
Reporter: Lior Zeno


I've been trying to run a Mesos cluster with a Windows agent. However, I can't 
run a task on Windows, since the container always fails with the following 
message: "failed to parse the command flag", followed by a JSON blob. I don't 
have the exact message right now, but I'll update the ticket with it as soon as 
I have it. 

The JSON did not have a command key and did not contain the command I was 
trying to run (notepad).

Steps to reproduce:
# Start a Mesos cluster with 4 nodes (3 Ubuntu 14.04 LTS nodes and 1 Windows 
Server 2012 R2 node).
# Submit an application via Marathon, using a hostname constraint, with 
"notepad" as the command.






[jira] [Commented] (MESOS-5828) Modularize Network in replicated_log

2016-08-05 Thread Jay Guo (JIRA)

[ 
https://issues.apache.org/jira/browse/MESOS-5828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409176#comment-15409176
 ] 

Jay Guo commented on MESOS-5828:


Updated patch chain summary:

||Reviews||Summary||
|https://reviews.apache.org/r/50837|Fixed minor code style.|
|https://reviews.apache.org/r/50491|Added PIDGroup to libprocess.|
|https://reviews.apache.org/r/50492|Switched replicated log to use PIDGroup.|
|https://reviews.apache.org/r/50490|Separated ZooKeeper PIDGroup implementation into its own cpp/hpp.|
|https://reviews.apache.org/r/50493|Added `base` to PIDGroup.|
|https://reviews.apache.org/r/50494|Removed `base` from ZooKeeperPIDGroup.|
|https://reviews.apache.org/r/50495|Added PIDGroup module struct.|
|https://reviews.apache.org/r/50496|Added static `createPIDGroup` method to LogProcess.|
|https://reviews.apache.org/r/50497|Added new constructors in Log and LogProcess.|
|https://reviews.apache.org/r/50498|Added --pid_group flag in master.|
|https://reviews.apache.org/r/50499|Added logic in master/main.cpp to use pid_group module.|
|https://reviews.apache.org/r/50838|Updated modules documentation to reflect PIDGroup module.|

> Modularize Network in replicated_log
> 
>
> Key: MESOS-5828
> URL: https://issues.apache.org/jira/browse/MESOS-5828
> Project: Mesos
>  Issue Type: Bug
>  Components: replicated log
>Reporter: Jay Guo
>Assignee: Jay Guo
>
> Currently replicated_log relies on ZooKeeper for coordinator election. This 
> is done through the network abstraction _ZookeeperNetwork_. We need to 
> modularize this part in order to enable replicated_log when using master 
> contender/detector modules.





[jira] [Updated] (MESOS-5995) Protobuf JSON deserialisation does not accept numbers formated with strings

2016-08-05 Thread Tomasz Janiszewski (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomasz Janiszewski updated MESOS-5995:
--
Summary: Protobuf JSON deserialisation does not accept numbers formated 
with strings  (was: Protobuf JSON deserialisation does not accept int formated 
with strings)

> Protobuf JSON deserialisation does not accept numbers formated with strings
> ---
>
> Key: MESOS-5995
> URL: https://issues.apache.org/jira/browse/MESOS-5995
> Project: Mesos
>  Issue Type: Bug
>  Components: HTTP API
>Affects Versions: 1.0.0
>Reporter: Tomasz Janiszewski
>Assignee: Tomasz Janiszewski
>Priority: Minor
>
> Proto2 does not specify JSON mappings, but 
> [Proto3|https://developers.google.com/protocol-buffers/docs/proto3#json] does, 
> and it recommends mapping 64-bit numbers to strings. Unfortunately, Mesos does 
> not accept strings in place of uint64 fields and returns 400 Bad Request:
> {quote}
> Request error Failed to convert JSON into Call protobuf: Not expecting a JSON 
> string for field 'value'.
> {quote}
> Is this on purpose, or is this a bug?





[jira] [Updated] (MESOS-5995) Protobuf JSON deserialisation does not accept numbers formated as strings

2016-08-05 Thread Tomasz Janiszewski (JIRA)

 [ 
https://issues.apache.org/jira/browse/MESOS-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomasz Janiszewski updated MESOS-5995:
--
Summary: Protobuf JSON deserialisation does not accept numbers formated as 
strings  (was: Protobuf JSON deserialisation does not accept numbers formated 
with strings)

> Protobuf JSON deserialisation does not accept numbers formated as strings
> -
>
> Key: MESOS-5995
> URL: https://issues.apache.org/jira/browse/MESOS-5995
> Project: Mesos
>  Issue Type: Bug
>  Components: HTTP API
>Affects Versions: 1.0.0
>Reporter: Tomasz Janiszewski
>Assignee: Tomasz Janiszewski
>Priority: Minor
>
> Proto2 does not specify JSON mappings, but 
> [Proto3|https://developers.google.com/protocol-buffers/docs/proto3#json] does, 
> and it recommends mapping 64-bit numbers to strings. Unfortunately, Mesos does 
> not accept strings in place of uint64 fields and returns 400 Bad Request:
> {quote}
> Request error Failed to convert JSON into Call protobuf: Not expecting a JSON 
> string for field 'value'.
> {quote}
> Is this on purpose, or is this a bug?





[jira] [Created] (MESOS-5995) Protobuf JSON deserialisation does not accept int formated with strings

2016-08-05 Thread Tomasz Janiszewski (JIRA)
Tomasz Janiszewski created MESOS-5995:
-

 Summary: Protobuf JSON deserialisation does not accept int 
formated with strings
 Key: MESOS-5995
 URL: https://issues.apache.org/jira/browse/MESOS-5995
 Project: Mesos
  Issue Type: Bug
  Components: HTTP API
Affects Versions: 1.0.0
Reporter: Tomasz Janiszewski
Assignee: Tomasz Janiszewski
Priority: Minor


Proto2 does not specify JSON mappings, but 
[Proto3|https://developers.google.com/protocol-buffers/docs/proto3#json] does, 
and it recommends mapping 64-bit numbers to strings. Unfortunately, Mesos does 
not accept strings in place of uint64 fields and returns 400 Bad Request:
{quote}
Request error Failed to convert JSON into Call protobuf: Not expecting a JSON 
string for field 'value'.
{quote}
Is this on purpose, or is this a bug?
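For reference, the proto3 JSON mapping in question can be sketched with a small Python helper (illustrative only; `parse_uint64` is a hypothetical function, not the Mesos/stout parser):

```python
import json

UINT64_MAX = 2**64 - 1

def parse_uint64(value):
    """Accept a uint64 field as a JSON number or, per the proto3 JSON
    mapping, as a decimal string (the case Mesos currently rejects)."""
    if isinstance(value, bool):
        raise ValueError("booleans are not uint64 values")
    if isinstance(value, str):
        value = int(value, 10)  # raises ValueError on non-numeric strings
    if not isinstance(value, int):
        raise ValueError("expected an integer or a numeric string")
    if not 0 <= value <= UINT64_MAX:
        raise ValueError("out of uint64 range")
    return value

# The same field spelled both ways should deserialize identically.
# 9007199254740993 = 2^53 + 1 is not exactly representable as a double,
# which is why proto3 recommends the string form for 64-bit values.
as_number = json.loads('{"value": 9007199254740993}')["value"]
as_string = json.loads('{"value": "9007199254740993"}')["value"]
assert parse_uint64(as_number) == parse_uint64(as_string) == 9007199254740993
```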


