I was trying to package my PyTorch code with PyInstaller using the command below, and some of the PyTorch packages emit deprecation warnings during the build. Does anyone know an alternate approach, or any changes to the command, to properly pack those packages? I'm including the deprecation log below.

(venv) PS D:\ai> pyinstaller --onefile --windowed --name AIQualityCheckerv2 
--hidden-import=torchvision.models --hidden-import=ultralytics 
--hidden-import=watchdog.observers --hidden-import=watchdog.events 
--collect-all ultralytics applicationv11.py
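For reference, the same build can also be expressed as a .spec file, which makes it easier to exclude the optional torch.utils.tensorboard submodule that triggers the ModuleNotFoundError in the log. This is only a sketch based on PyInstaller's default spec template; the `excludes` entry is my addition, not something the original command had:

```python
# AIQualityCheckerv2.spec -- sketch of the equivalent one-file, windowed build.
# The 'excludes' list is an addition: it skips torch.utils.tensorboard, since
# tensorboard is not installed in this venv and is optional for inference.
from PyInstaller.utils.hooks import collect_all

# Equivalent of --collect-all ultralytics
datas, binaries, hiddenimports = collect_all('ultralytics')
hiddenimports += ['torchvision.models', 'watchdog.observers', 'watchdog.events']

a = Analysis(
    ['applicationv11.py'],
    binaries=binaries,
    datas=datas,
    hiddenimports=hiddenimports,
    excludes=['torch.utils.tensorboard', 'tensorboard'],
)
pyz = PYZ(a.pure)
exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.datas,
    name='AIQualityCheckerv2',
    console=False,  # equivalent of --windowed
)
```

Build it with `pyinstaller AIQualityCheckerv2.spec`. Note the FutureWarning/DeprecationWarning lines in the log come from importing torch's submodules at analysis time; they don't by themselves fail the build.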

32219 INFO: Processing standard module hook 'hook-torch.py' from 'D:\ai\venv\Lib\site-packages\_pyinstaller_hooks_contrib\stdhooks'
1501 WARNING: Failed to collect submodules for 'torch.utils.tensorboard' because importing 'torch.utils.tensorboard' raised: ModuleNotFoundError: No module named 'tensorboard'
W1031 20:12:52.144000 19208 Lib\site-packages\torch\distributed\elastic\multiprocessing\redirects.py:29] NOTE: Redirects are currently not supported in Windows or MacOs.
D:\ai\venv\Lib\site-packages\torch\distributed\algorithms\ddp_comm_hooks\__init__.py:54: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in enum.member() if you want to preserve the old behavior
  ALLREDUCE = partial(_ddp_comm_hook_wrapper, comm_hook=default.allreduce_hook)
D:\ai\venv\Lib\site-packages\torch\distributed\algorithms\ddp_comm_hooks\__init__.py:55: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in enum.member() if you want to preserve the old behavior
  FP16_COMPRESS = partial(
D:\ai\venv\Lib\site-packages\torch\distributed\algorithms\ddp_comm_hooks\__init__.py:58: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in enum.member() if you want to preserve the old behavior
  BF16_COMPRESS = partial(
D:\ai\venv\Lib\site-packages\torch\distributed\algorithms\ddp_comm_hooks\__init__.py:61: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in enum.member() if you want to preserve the old behavior
  QUANTIZE_PER_TENSOR = partial(
D:\ai\venv\Lib\site-packages\torch\distributed\algorithms\ddp_comm_hooks\__init__.py:64: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in enum.member() if you want to preserve the old behavior
  QUANTIZE_PER_CHANNEL = partial(
D:\ai\venv\Lib\site-packages\torch\distributed\algorithms\ddp_comm_hooks\__init__.py:67: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in enum.member() if you want to preserve the old behavior
  POWER_SGD = partial(
D:\ai\venv\Lib\site-packages\torch\distributed\algorithms\ddp_comm_hooks\__init__.py:74: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in enum.member() if you want to preserve the old behavior
  POWER_SGD_RANK2 = partial(
D:\ai\venv\Lib\site-packages\torch\distributed\algorithms\ddp_comm_hooks\__init__.py:80: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in enum.member() if you want to preserve the old behavior
  BATCHED_POWER_SGD = partial(
D:\ai\venv\Lib\site-packages\torch\distributed\algorithms\ddp_comm_hooks\__init__.py:85: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in enum.member() if you want to preserve the old behavior
  BATCHED_POWER_SGD_RANK2 = partial(
D:\ai\venv\Lib\site-packages\torch\distributed\algorithms\ddp_comm_hooks\__init__.py:90: FutureWarning: functools.partial will be a method descriptor in future Python versions; wrap it in enum.member() if you want to preserve the old behavior
  NOOP = partial(
D:\ai\venv\Lib\site-packages\PyInstaller\utils\hooks\__init__.py:665: DeprecationWarning: torch.distributed._sharding_spec will be deprecated, use torch.distributed._shard.sharding_spec instead
  __import__(name)
D:\ai\venv\Lib\site-packages\PyInstaller\utils\hooks\__init__.py:665: DeprecationWarning: torch.distributed._sharded_tensor will be deprecated, use torch.distributed._shard.sharded_tensor instead
  __import__(name)
D:\ai\venv\Lib\site-packages\PyInstaller\utils\hooks\__init__.py:665: DeprecationWarning: torch.distributed._shard.checkpoint will be deprecated, use torch.distributed.checkpoint instead
  __import__(name)
52587 INFO: hook-torch: this torch build does not depend on MKL...
54695 INFO: Processing standard module hook 'hook-matplotlib.py' from 'D:\ai\venv\Lib\site-packages\PyInstaller\hooks'
55239 INFO: Processing pre-safe-import-module hook 'hook-packaging.py' from 'D:\ai\venv\Lib\site-packages\PyInstaller\hooks\pre_safe_import_module'
55427 INFO: Processing pre-safe-import-module hook 'hook-gi.py' from 'D:\ai\venv\Lib\site-packages\PyInstaller\hooks\pre_safe_import_module'
55758 INFO: Processing standard module hook 'hook-matplotlib.backend_bases.py' from 'D:\ai\venv\Lib\site-packages\PyInstaller\hooks'

-- 
You received this message because you are subscribed to the Google Groups 
"PyInstaller" group.
To view this discussion visit 
https://groups.google.com/d/msgid/pyinstaller/34f2ad4b-6eb5-4b4b-808f-1496c7cee4d3n%40googlegroups.com.