michallenc commented on PR #17340:
URL: https://github.com/apache/nuttx/pull/17340#issuecomment-3592646765

   > > @pkarashchenko it's a possibility, but I think we shouldn't give more
space to the bootloader unless it's really necessary - and there are options we
can disable without affecting its functionality. Reducing the scratch size
means more copying and erasing in that area, and thus more flash wear. The
current size of 128 kB also has the advantage that the bootloader fits into the
flash's first sector.
   > 
   > Ok, then we can disable some functionality. I tried to compile the current
configuration with the current MCUboot hash and saw that the resulting binary is
close to 128K (it uses 99.6% of the allocated flash). Taking wear leveling into
consideration is good, but what if the next MCUboot version adds some code and
the resulting image overflows the allocated size by 100 bytes? There are not
many things left to disable. And having NSH in the bootloader is sometimes
useful; at least in my case I was adding a command to display version
information or to initiate an OTA update. The SAME70 512 kB flash variant is
not very usable in terms of MCUboot and was there for demo purposes, so we
could even drop that configuration and keep only the options with bigger flash.
I'm just looking for a solution that is usable and scalable. The bigger-flash
SAMv7 options still have a sufficient scratch area. I do not have any
objections to the current change if it is needed to unblock the merge of the
other change, but I think we need a better, more scalable solution.
   
   Oh, I expected we would be around 60 kB with these changes and would have a
lot of space in reserve. Ok then, that changes things, and I suppose we can
reserve more space for the bootloader.
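
   For reference, a minimal sketch of the kind of layout we are talking about.
The numbers are illustrative only (assuming one of the bigger-flash variants
with 2 MB); the actual partition table in the board configs may differ. It
mainly shows why the 128 kB bootloader region is convenient: it matches the
erase sector granularity, and growing it shifts every other region.

   ```c
   /* Hypothetical MCUboot-style flash layout, for illustration only. */

   #define FLASH_SIZE            (2048 * 1024)  /* assumed bigger-flash variant */
   #define BOOTLOADER_SIZE       (128 * 1024)   /* fits in the first erase sector */
   #define SCRATCH_SIZE          (128 * 1024)   /* smaller scratch => more erase cycles per swap */
   #define SLOT_SIZE             ((FLASH_SIZE - BOOTLOADER_SIZE - SCRATCH_SIZE) / 2)

   #define BOOTLOADER_OFFSET     (0)
   #define PRIMARY_SLOT_OFFSET   (BOOTLOADER_OFFSET + BOOTLOADER_SIZE)
   #define SECONDARY_SLOT_OFFSET (PRIMARY_SLOT_OFFSET + SLOT_SIZE)
   #define SCRATCH_OFFSET        (SECONDARY_SLOT_OFFSET + SLOT_SIZE)
   ```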


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]