[GitHub] wkcn commented on issue #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-07 Thread GitBox
wkcn commented on issue #9939: add multi proposal operator (cpu version) and 
fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#issuecomment-371060094
 
 
   @CodingCat I will read it. Thank you!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wkcn commented on issue #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-07 Thread GitBox
wkcn commented on issue #9939: add multi proposal operator (cpu version) and 
fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#issuecomment-371059934
 
 
   @pengzhao-intel Thank you!




[GitHub] wkcn commented on issue #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-06 Thread GitBox
wkcn commented on issue #9939: add multi proposal operator (cpu version) and 
fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#issuecomment-370730412
 
 
   @marcoabreu Thank you:) I will commit it soon.




[GitHub] wkcn commented on issue #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-05 Thread GitBox
wkcn commented on issue #9939: add multi proposal operator (cpu version) and 
fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#issuecomment-370681494
 
 
   @marcoabreu 
   Hello! Could you please retrigger the test?
   It seems that test_operator_gpu.test_deconv has a problem.
   I have also removed the 'USE_STABLE_SORT_FOR_PROPOSAL' build flag.
   
   Thank you!




[GitHub] wkcn commented on issue #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-05 Thread GitBox
wkcn commented on issue #9939: add multi proposal operator (cpu version) and 
fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#issuecomment-370285916
 
 
   @pengzhao-intel @xinyu-intel 
   Thank you! I will have a try.
   The performance table:
   
   name|time (ms)
   --|--
   BBoxTransformInv|268
   IoUTransformInv|not used
   FilterBox|22
   CopyScore|18
   ReverseArgsort (unstable sort)|7303
   ReorderProposals|338
   nms (calculate area)|286
   nms (calculate nms)|7547
   allocate memory for workspace|1
   copy anchor to workspace_proposal|0
   enumerate all shifted anchors|9
   copy workspace_proposals_base to workspace_proposals|162
   assign foreground scores for each anchor|45
   prepare output|3
   Total|16002
   
   Using a stable sort to sort the anchors (ReverseArgsort) adds about 3000 ms.
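   Per-stage timings like the table above can be gathered with a small accumulator. The operator itself is C++, so this is only a sketch of the bookkeeping in Python; the stage names and workloads here are stand-ins, not the operator's real code:
   
   ```python
   import time
   from collections import defaultdict
   from contextlib import contextmanager
   
   timings = defaultdict(float)  # stage name -> accumulated milliseconds
   
   @contextmanager
   def stage(name):
       start = time.perf_counter()
       try:
           yield
       finally:
           timings[name] += (time.perf_counter() - start) * 1000.0
   
   with stage("CopyScore"):
       scores = [0.3, 0.9, 0.1] * 1000  # dummy workload
   
   with stage("ReverseArgsort"):
       # descending argsort, like sorting anchors by score
       order = sorted(range(len(scores)), key=lambda i: -scores[i])
   
   timings["Total"] = sum(timings.values())
   ```
   
   Each `with stage(...)` block adds its wall-clock cost to the named row, which is how a per-stage table like the one above can be produced.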
   
   




[GitHub] wkcn commented on issue #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-05 Thread GitBox
wkcn commented on issue #9939: add multi proposal operator (cpu version) and 
fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#issuecomment-370447850
 
 
   @marcoabreu 
   The reason for adding the USE_STABLE_SORT_FOR_PROPOSAL flag is the cpu/gpu consistency test.
   
   The Proposal operator (CPU) uses an unstable sort to order the anchors by score, whereas the Proposal operator (GPU) uses a stable sort. This leads to different outputs even when the code is correct.
   For performance reasons, I couldn't replace the unstable sort with a stable sort in Proposal (CPU).
   So for the cpu/gpu consistency test, I think a build flag is necessary to test these operators.
   
   Is there a better solution for testing cpu/gpu consistency of Proposal? Thank you!
   
   I will modify the test case and change the build settings back.
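   The tie-breaking difference between a stable and an unstable sort can be seen with NumPy's argsort (a sketch, not the operator's actual C++/Thrust code): with equal scores, `kind='stable'` preserves the original index order, while an unstable kind is free to permute it.
   
   ```python
   import numpy as np
   
   scores = np.array([0.9, 0.5, 0.9, 0.5, 0.9], dtype=np.float32)
   
   # Stable descending argsort: tied scores keep their original relative order.
   order_stable = np.argsort(-scores, kind='stable')
   assert order_stable.tolist() == [0, 2, 4, 1, 3]
   
   # An unstable kind (e.g. 'quicksort') may order the tied indices differently,
   # so downstream NMS can see the anchors in a different order.
   order_unstable = np.argsort(-scores, kind='quicksort')
   assert sorted(order_unstable.tolist()) == [0, 1, 2, 3, 4]
   ```
   
   With real anchor scores, ties (or near-ties within float precision) are common, which is why the two implementations can emit different proposal orders even when both are correct.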




[GitHub] wkcn commented on issue #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-04 Thread GitBox
wkcn commented on issue #9939: add multi proposal operator (cpu version) and 
fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#issuecomment-370320488
 
 
   For the cpu/gpu consistency test, it's interesting that the number of valid anchors in the CPU implementation and the GPU implementation may differ.
   The reason is that floating-point precision differs between the CPU and the GPU.
   The size of some anchors may lie right at the margin of the minimal valid anchor size or the overlap threshold.
   
   The margin of the minimal valid anchor:
   https://github.com/wkcn/incubator-mxnet/blob/add_multi_proposal_cpu_version/src/operator/contrib/multi_proposal.cc#L141
   https://github.com/wkcn/incubator-mxnet/blob/add_multi_proposal_cpu_version/src/operator/contrib/multi_proposal.cc#L159
   
   The overlap threshold:
   https://github.com/wkcn/incubator-mxnet/blob/add_multi_proposal_cpu_version/src/operator/contrib/multi_proposal.cc#L273
   
   I want to create test samples that avoid these margins.
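   A minimal illustration of the margin problem (the values are hypothetical, not taken from the operator): an overlap that sits just below the NMS threshold in float64 rounds to exactly the threshold in float32, flipping the suppression decision.
   
   ```python
   import numpy as np
   
   nms_threshold = 0.7
   iou = 0.69999999  # an overlap value right at the margin (hypothetical)
   
   # In float64 the comparison keeps the box: the overlap is below the threshold.
   assert not (iou >= nms_threshold)
   
   # Cast to float32, both values round to the same representable number,
   # so the comparison flips and the box would be suppressed instead.
   iou32 = np.float32(iou)
   thr32 = np.float32(nms_threshold)
   assert iou32 == thr32
   assert iou32 >= thr32
   ```
   
   A test sample that avoids such margins keeps every comparison well away from the threshold, so CPU and GPU make the same keep/suppress decisions.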
   




[GitHub] wkcn commented on issue #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-03 Thread GitBox
wkcn commented on issue #9939: add multi proposal operator (cpu version) and 
fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#issuecomment-369813766
 
 
   @pengzhao-intel Thank you! I will have a try.
   
   ~~I used `#pragma omp parallel for` on each for-loop in Multi Proposal (CPU 
implementation),
   but the performance didn't improve.
   Maybe each loop does too little computation.~~




[GitHub] wkcn commented on issue #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-03 Thread GitBox
wkcn commented on issue #9939: add multi proposal operator (cpu version) and 
fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#issuecomment-370133350
 
 
   @pengzhao-intel Here is the testing code.
   https://gist.github.com/wkcn/4a09c142bc9886b45b5a23461bbe4733
   
   I found that I had made a mistake: I didn't call `nd.waitall()` when measuring the performance.
   Without `nd.waitall()`, the computation does not actually execute because of lazy evaluation.
   
   performance|CPU (no omp)|CPU (omp)|GPU
   -|---|---|-
   Time (s)|33.899|12.432|4.435
   
   However, setting the environment variables `MXNET_OMP_MAX_THREADS` or `OMP_NUM_THREADS` may hurt performance.
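   The pitfall generalizes to any asynchronous engine: enqueueing work returns immediately, so the timer must wait for completion before stopping. A sketch with a plain Python thread pool standing in for MXNet's async executor, where `fut.result()` plays the role of `nd.waitall()`:
   
   ```python
   import time
   from concurrent.futures import ThreadPoolExecutor
   
   def kernel():
       time.sleep(0.2)  # stand-in for the real computation
   
   with ThreadPoolExecutor(max_workers=1) as pool:
       start = time.perf_counter()
       fut = pool.submit(kernel)              # returns immediately, work runs in background
       t_without_wait = time.perf_counter() - start
       fut.result()                           # like nd.waitall(): block until done
       t_with_wait = time.perf_counter() - start
   
   # Stopping the timer before waiting measures almost nothing.
   assert t_without_wait < t_with_wait
   assert t_with_wait >= 0.2
   ```
   
   This is why the earlier benchmark looked unrealistically fast until `nd.waitall()` was added.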
   





[GitHub] wkcn commented on issue #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-02 Thread GitBox
wkcn commented on issue #9939: add multi proposal operator (cpu version) and 
fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#issuecomment-369773050
 
 
   @piiswrong Yes, I will add it.
   I found that the Proposal op (CPU implementation) uses an **unstable sort** (`std::sort`):
   https://github.com/apache/incubator-mxnet/blob/master/src/operator/contrib/proposal.cc#L195
   
   while the Proposal op (GPU implementation) uses a **stable sort** (`thrust::stable_sort_by_key`):
   https://github.com/apache/incubator-mxnet/blob/master/src/operator/contrib/proposal.cu#L517
   
   The computation precision also differs between CPU and GPU, which causes NMS to select different boxes on the two devices.
   
   When I replace `std::sort` with `std::stable_sort` in the CPU implementation, the cpu/gpu consistency test passes.
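   Why the sort order matters for NMS can be sketched in a few lines (a simplified greedy NMS, not the operator's code): when two boxes have equal scores, whichever one an unstable sort emits first suppresses a different set of neighbors.
   
   ```python
   def iou(a, b):
       # boxes are (x1, y1, x2, y2)
       ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
       ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
       inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
       area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
       union = area(a) + area(b) - inter
       return inter / union if union else 0.0
   
   def greedy_nms(boxes, thr=0.5):
       # boxes are assumed to be already ordered by score (descending)
       keep = []
       for i, b in enumerate(boxes):
           if all(iou(b, boxes[j]) <= thr for j in keep):
               keep.append(i)
       return keep
   
   # A and B have equal scores, so an unstable sort may emit them in either order.
   A, B, C = (0, 0, 10, 10), (2, 0, 12, 10), (5, 0, 15, 10)
   
   print(greedy_nms([A, B, C]))  # [0, 2]: A suppresses B, C survives
   print(greedy_nms([B, A, C]))  # [0]: B suppresses both A and C
   ```
   
   The two orderings keep different boxes, so CPU and GPU outputs diverge even though both implementations are individually correct.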
   




[GitHub] wkcn commented on issue #9939: add multi proposal operator (cpu version) and fix the bug in proposal op (gpu version)

2018-03-01 Thread GitBox
wkcn commented on issue #9939: add multi proposal operator (cpu version) and 
fix the bug in proposal op (gpu version)
URL: https://github.com/apache/incubator-mxnet/pull/9939#issuecomment-369797499
 
 
   I wrote a cpu/gpu consistency test for Proposal and MultiProposal.
   I found that there is a difference between the CPU output and the GPU output of mx.nd.contrib.Proposal.
   
   It seems that the index order of the Non-Maximum Suppression result may differ between the CPU implementation and the GPU implementation.
   Another problem is that the condition `num_to_keep < rpn_post_nms_top_n` may need to be added at
   https://github.com/apache/incubator-mxnet/blob/master/src/operator/contrib/proposal.cu#L341
   reference: https://github.com/apache/incubator-mxnet/blob/master/src/operator/contrib/proposal.cc#L235
   
   Here is the cpu/gpu consistency test.
   ```python
   import mxnet as mx
   import numpy as np
   
   # @with_seed()
   def test_multi_proposal_op():
       # parameters
       feature_stride = 16
       scales = (8, 16, 32)
       ratios = (0.5, 1, 2)
       rpn_pre_nms_top_n = 12000
       rpn_post_nms_top_n = 2000
       threshold = 0.7
       rpn_min_size = 16
   
       feat_len = 14
       H, W = feat_len, feat_len
       num_anchors = len(scales) * len(ratios)
       count_anchors = H * W * num_anchors
   
       def get_new_data(batch_size, ctx):
           '''
           cls_prob: (batch_size, 2 * num_anchors, H, W)
           bbox_pred: (batch_size, 4 * num_anchors, H, W)
           im_info: (batch_size, 3)
           '''
           cls_prob = mx.nd.empty((batch_size, 2 * num_anchors, H, W), dtype = np.float32, ctx = ctx)
           bbox_pred = mx.nd.empty((batch_size, 4 * num_anchors, H, W), dtype = np.float32, ctx = ctx)
           im_info = mx.nd.empty((batch_size, 3), dtype = np.float32, ctx = ctx)
   
           cls_prob = mx.nd.array(np.random.random(cls_prob.shape), ctx = ctx)
           bbox_pred = mx.nd.array(np.random.random(bbox_pred.shape), ctx = ctx)
   
           for i in range(batch_size):
               im_size = np.random.randint(100, feat_len * feature_stride, size = (2,))
               im_scale = np.random.randint(70, 100) / 100.0
               im_info[i, :] = [im_size[0], im_size[1], im_scale]
           return cls_prob, bbox_pred, im_info
   
       def check_proposal_consistency(op, batch_size):
           '''
           op is mx.nd.contrib.Proposal or mx.nd.contrib.MultiProposal
           '''
           cls_prob, bbox_pred, im_info = get_new_data(batch_size, mx.cpu(0))
           rois_cpu, score_cpu = op(
               cls_score = cls_prob,
               bbox_pred = bbox_pred,
               im_info = im_info,
               feature_stride = feature_stride,
               scales = scales,
               ratios = ratios,
               rpn_pre_nms_top_n = rpn_pre_nms_top_n,
               rpn_post_nms_top_n = rpn_post_nms_top_n,
               threshold = threshold,
               rpn_min_size = rpn_min_size, output_score = True)
   
           gpu_ctx = mx.gpu(0)
   
           # copy data to gpu from cpu
           cls_prob_gpu = cls_prob.as_in_context(gpu_ctx)
           bbox_pred_gpu = bbox_pred.as_in_context(gpu_ctx)
           im_info_gpu = im_info.as_in_context(gpu_ctx)
   
           rois_gpu, score_gpu = op(
               cls_score = cls_prob_gpu,
               bbox_pred = bbox_pred_gpu,
               im_info = im_info_gpu,
               feature_stride = feature_stride,
               scales = scales,
               ratios = ratios,
               rpn_pre_nms_top_n = rpn_pre_nms_top_n,
               rpn_post_nms_top_n = rpn_post_nms_top_n,
               threshold = threshold,
               rpn_min_size = rpn_min_size, output_score = True)
   
           print (rois_cpu.asnumpy(), rois_gpu.asnumpy())
           assert np.allclose(rois_cpu.asnumpy(), rois_gpu.asnumpy())
           assert np.allclose(score_cpu.asnumpy(), score_gpu.asnumpy())
   
       check_proposal_consistency(mx.nd.contrib.Proposal, 1)
       check_proposal_consistency(mx.nd.contrib.MultiProposal, 20)
   
   test_multi_proposal_op()
   print ("test ok")
   ```
   



