[GitHub] tz-hmc commented on issue #8591: How do I make a siamese network with pretrained models (esp. keeping the weights the same?)

2017-11-10 Thread GitBox
tz-hmc commented on issue #8591: How do I make a siamese network with 
pretrained models (esp. keeping the weights the same?)
URL: 
https://github.com/apache/incubator-mxnet/issues/8591#issuecomment-343438692
 
 
   Wow. Thank you so much. Alright, this gives me a lot to think about.
   I'm really grateful for your help, thanks a ton.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tz-hmc commented on issue #8591: How do I make a siamese network with pretrained models (esp. keeping the weights the same?)

2017-11-09 Thread GitBox
tz-hmc commented on issue #8591: How do I make a siamese network with 
pretrained models (esp. keeping the weights the same?)
URL: 
https://github.com/apache/incubator-mxnet/issues/8591#issuecomment-343367913
 
 
   Hey again,
   
   I tried something like this, but I still have a lot of questions:
   
   ```
   import mxnet as mx

   # Load the same pretrained symbol and parameters twice, once per branch
   sym1, arg_params, aux_params = get_model()
   sym2, arg_params, aux_params = get_model()

   mod1 = mx.mod.Module(symbol=sym1, context=mx.cpu(), label_names=None)
   mod2 = mx.mod.Module(symbol=sym2, context=mx.cpu(), label_names=None)

   # Bind and initialise mod1 first; mod2 is then bound against it so the two
   # executors share memory (shared_module needs an already-bound module)
   mod1.bind(for_training=True, data_shapes=[('data', (1, 3, 224, 224))],
             label_shapes=mod1._label_shapes)
   mod1.set_params(arg_params, aux_params, allow_missing=True)
   mod2.bind(for_training=True, shared_module=mod1,
             data_shapes=[('data', (1, 3, 224, 224))],
             label_shapes=mod2._label_shapes)
   mod2.set_params(arg_params, aux_params, allow_missing=True)

   # Take the flattened feature output of each branch and concatenate them
   out1 = sym1.get_internals()['flatten0_output']
   out2 = sym2.get_internals()['flatten0_output']
   siamese_out = mx.sym.Concat(out1, out2, dim=0)

   # Example stacked network after it
   fc1  = mx.symbol.FullyConnected(data=siamese_out, name='fc1', num_hidden=128)
   act1 = mx.symbol.Activation(data=fc1, name='relu1', act_type="relu")
   fc2  = mx.symbol.FullyConnected(data=act1, name='fc2', num_hidden=64)
   act2 = mx.symbol.Activation(data=fc2, name='relu2', act_type="relu")
   fc3  = mx.symbol.FullyConnected(data=act2, name='fc3', num_hidden=num_classes)
   mlp  = mx.symbol.SoftmaxOutput(data=fc3, name='softmax')

   # Module over the full stacked symbol (previously this was bound to fc1 via
   # an undefined fe_mod); the new fc layers are not in arg_params, so
   # allow_missing is needed and they would still have to be initialised
   mod3 = mx.mod.Module(symbol=mlp, context=mx.cpu(), label_names=None)
   mod3.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
   mod3.set_params(arg_params, aux_params, allow_missing=True)
   ```
   
   I only want the first part of this network (layers attached to mod2 & mod1) 
to be shared. Would something like this work & still backpropagate errors 
appropriately when fitted?
   
   Having to run mod.fit on each part of the network could be inconvenient. Is 
there a way around this? 
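
   To make the "way around this" question concrete, here is the kind of single-symbol setup I'm imagining, sketched with a toy one-layer feature extractor in place of the pretrained model (names like `shared_fc1`, `data1`/`data2` and the shapes are just made up for illustration). Because both branches reuse the same weight Variables, one Module over the whole graph should be trainable with a single fit call:

   ```
   import mxnet as mx

   # Explicit weight Variables: using them in both branches ties the weights
   w1 = mx.sym.Variable('shared_fc1_weight')
   b1 = mx.sym.Variable('shared_fc1_bias')

   def branch(data, name):
       fc = mx.sym.FullyConnected(data=data, weight=w1, bias=b1,
                                  num_hidden=256, name=name)
       return mx.sym.Activation(data=fc, act_type='relu')

   data1 = mx.sym.Variable('data1')
   data2 = mx.sym.Variable('data2')
   feat = mx.sym.Concat(branch(data1, 'fc_a'), branch(data2, 'fc_b'), dim=1)

   # Small head on top of the concatenated features
   fc_out = mx.sym.FullyConnected(data=feat, num_hidden=2, name='fc_out')
   net = mx.sym.SoftmaxOutput(data=fc_out, name='softmax')

   # One Module over the whole graph, trained in one go
   mod = mx.mod.Module(net, data_names=['data1', 'data2'],
                       label_names=['softmax_label'], context=mx.cpu())
   mod.bind(data_shapes=[('data1', (8, 128)), ('data2', (8, 128))],
            label_shapes=[('softmax_label', (8,))])
   mod.init_params(initializer=mx.init.Xavier())
   ```

   What I can't see is how to get the two loaded pretrained symbols into this shape, since both copies from get_model() expect the same 'data' input name.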


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tz-hmc commented on issue #8591: How do I make a siamese network with pretrained models (esp. keeping the weights the same?)

2017-11-09 Thread GitBox
tz-hmc commented on issue #8591: How do I make a siamese network with 
pretrained models (esp. keeping the weights the same?)
URL: 
https://github.com/apache/incubator-mxnet/issues/8591#issuecomment-343325480
 
 
   I want to run these identical networks in parallel though (Siamese network style), concatenate the output layers, and then feed them into a contrastive loss layer. I think shared_module bind will not allow me to share the weights?
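
   For context, this is roughly the loss I have in mind, written with plain symbols and MakeLoss rather than any built-in layer (just a sketch; `emb1`/`emb2` stand in for the two branch outputs and the margin is arbitrary):

   ```
   import mxnet as mx

   # Placeholder embeddings for the two branches and a 0/1 similarity label
   emb1 = mx.sym.Variable('emb1')        # shape (batch, feat)
   emb2 = mx.sym.Variable('emb2')        # shape (batch, feat)
   label = mx.sym.Variable('sim_label')  # 1 = similar pair, 0 = dissimilar

   margin = 1.0
   # Euclidean distance between the paired embeddings
   dist = mx.sym.sqrt(mx.sym.sum(mx.sym.square(emb1 - emb2), axis=1) + 1e-6)
   # Contrastive loss: pull similar pairs together, push dissimilar ones past the margin
   hinge = mx.sym.Activation(data=(margin - dist), act_type='relu')
   loss = label * mx.sym.square(dist) + (1 - label) * mx.sym.square(hinge)
   contrastive = mx.sym.MakeLoss(0.5 * loss, name='contrastive')
   ```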


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tz-hmc commented on issue #8591: How do I make a siamese network with pretrained models (esp. keeping the weights the same?)

2017-11-08 Thread GitBox
tz-hmc commented on issue #8591: How do I make a siamese network with 
pretrained models (esp. keeping the weights the same?)
URL: 
https://github.com/apache/incubator-mxnet/issues/8591#issuecomment-342970864
 
 
   Wow, I didn't know that API existed. I had a lot of trouble trying to make it work with the Module API, but the Gluon API looks super promising, thanks for sharing :)
   
   However, even though I'll definitely test Gluon out, do you know how I would do this with the Module API?
   
   Can I extract each layer's functionality somehow and set the weights to the same variable as its identical layer in the other network? If it's too big a hassle, I guess I would use Gluon, though all the other code I have uses the Module API.
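
   For what it's worth, the sharing part in Gluon seems almost trivial, which is why it's tempting. Here's the toy sketch I have in mind, with a made-up two-layer `body` standing in for the pretrained feature extractor:

   ```
   import mxnet as mx
   from mxnet.gluon import nn

   # Toy stand-in for the pretrained feature extractor; applying the same
   # Block to both inputs means the weights are shared by construction
   body = nn.HybridSequential()
   body.add(nn.Dense(256, activation='relu'))
   body.add(nn.Dense(128))
   body.initialize(mx.init.Xavier())

   x1 = mx.nd.random.uniform(shape=(4, 512))
   x2 = mx.nd.random.uniform(shape=(4, 512))
   emb1 = body(x1)  # both calls go through the exact same parameters
   emb2 = body(x2)
   ```

   But I'd still like to know the Module API equivalent.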
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

