Unless the MXNet team has fixed this, I would suggest working around the issue by generating a synthetic dataset from both the "positive" and the "negative" samples, as follows. Keep the annotated images as they are, crop some of the annotated boxes out of them, and embed each crop into a negative image, say one crop per negative image at a random position. Everything outside the crop will then be treated as background, which is probably what you want to achieve.
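A minimal sketch of that copy-paste step, assuming numpy arrays in HWC layout and boxes given as `(x, y, w, h)`; the function name `embed_crop` and its signature are my own, not an MXNet API:

```python
import numpy as np

def embed_crop(negative_img, annotated_img, box, rng=None):
    """Cut the region `box` = (x, y, w, h) out of `annotated_img` and paste it
    into a copy of `negative_img` at a random position.

    Returns the synthetic image and the box of the pasted crop in the
    synthetic image's coordinates, to be used as its only annotation.
    """
    rng = rng or np.random.default_rng()
    x, y, w, h = box
    crop = annotated_img[y:y + h, x:x + w]

    out = negative_img.copy()
    H, W = out.shape[:2]
    # Random top-left corner such that the crop fits entirely inside the image.
    nx = int(rng.integers(0, W - w + 1))
    ny = int(rng.integers(0, H - h + 1))
    out[ny:ny + h, nx:nx + w] = crop
    return out, (nx, ny, w, h)
```

Everything outside the returned box is then genuine negative-image content, so the detector sees it as background during training.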





---
[Visit Topic](https://discuss.mxnet.apache.org/t/how-to-use-an-image-without-targets-negative-image-as-a-train-image-to-train-a-detection-net/3978/4)