Hi folks,

Recap:
The modern practice of AI has blurred the boundary between code and data,
which leads to ambiguity in interpreting the definition of open source,
as well as the respective licenses. Such ambiguous interpretation
in fact deviates from and violates the spirit of free software.

Several years ago I pointed out this issue on -devel, and eventually drafted
ML-Policy [2].
Then OSI formally recognized this issue last year, and invited me to
contribute some thoughts at their Deep Dive: AI event. Now the final report
is available here: [1]
It is a summary of discussions among people from various fields.

You may have tried ChatGPT recently -- this field develops rapidly, and some
of the state-of-the-art AIs can be astonishing if you have never tried
anything like them before.
If some monopolistic proprietary AGI (artificial general intelligence)
emerges in the future, I personally fear its potential capability for evil.
This resembles a part of the history of free software over the last decades.

Anyway, from the Debian side, we at least know that we should be careful
when dealing with AI software.

[1] 
https://deepdive.opensource.org/wp-content/uploads/2023/02/Deep-Dive-AI-final-report.pdf
[2] https://salsa.debian.org/deeplearning-team/ml-policy
