I am not a trained mathematician.
My question: would Gödel's incompleteness theorem not set a limit to what 
can be proven 'automatically'? 

syle...@gmail.com wrote on Monday, August 19, 2024 at 02:34:15 UTC+2:

> AI achieves silver-medal standard solving International Mathematical 
> Olympiad problems - Google DeepMind 
> <https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/>
>
> Recently, Google announced that their AI models, AlphaProof and 
> AlphaGeometry, reached a silver-medal standard on IMO problems. Their 
> system is a hybrid of neural and symbolic components and uses the proof 
> assistant Lean as a backend, which guarantees that the proofs can be 
> verified automatically. 
> ChatGPT has had many problems with hallucinating proof steps, which keeps 
> humans busy verifying its results and understanding the steps, so 
> expressing proofs as formal proof statements is a gain.
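>
> For readers who have not seen Lean, a formal proof statement is just a 
> machine-checkable term. A toy Lean 4 example (my own minimal sketch, not 
> taken from AlphaProof) looks like this: 
>
>     -- A formally stated theorem; the Lean kernel checks the proof term.
>     theorem add_comm_example (a b : Nat) : a + b = b + a :=
>       Nat.add_comm a b
>
> Once such a statement type-checks, no human verification of the 
> individual steps is needed.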
>
> I think this research reinforces that functions like solve, simplify, or 
> integrate are losing the competition, because a lot of them are written 
> with heuristics that will not win against AI, and we also have concerns 
> that the code around them is getting messy.
>
> I think that if we want to avoid losing that competition and make 
> symbolic computation work collaboratively with AI systems, it should 
> focus on solving only a few 'formal' problems with 100% precision and 
> speed. 
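>
> As a rough illustration of the distinction (a sketch of current SymPy 
> behaviour, not a proposal for a specific API): 
>
>     from sympy import symbols, integrate, factor
>
>     x = symbols('x')
>
>     # Heuristic layer: integrate() tries pattern matching, pieces of the
>     # Risch algorithm, Meijer G functions, ...; when nothing applies it
>     # gives up and returns an unevaluated Integral.
>     print(integrate(x**x, x))    # Integral(x**x, x)
>
>     # 'Formal' layer: factorization over the rationals is a complete,
>     # deterministic algorithm that always returns the exact answer.
>     print(factor(x**4 - 1))      # (x - 1)*(x + 1)*(x**2 + 1)
>
> The first kind of functionality is where an AI model can plausibly do 
> better; the second kind is where a symbolic system can still guarantee 
> correctness and speed.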
>
> I have already noticed that there is research connecting Wu's method to 
> AlphaGeometry: 
> [2404.06405] Wu's Method can Boost Symbolic AI to Rival Silver Medalists 
> and AlphaGeometry to Outperform Gold Medalists at IMO Geometry (arxiv.org) 
> <https://arxiv.org/abs/2404.06405>
> Although symbolic systems may no longer be a competitive solution to 
> general math problems, such 'formal' symbolic systems can still be 
> valuable. (I have also heard that AlphaGeometry2 uses Wu's method, but I 
> am still trying to verify the sources.)
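>
> The algebraic idea behind that line of work can be sketched in SymPy: 
> translate the geometric hypotheses and the conclusion into polynomials 
> and check that the conclusion lies in the ideal generated by the 
> hypotheses. (The sketch below uses a Groebner-basis membership test 
> rather than Wu's characteristic-set pseudo-division, but the flavour is 
> the same.) 
>
>     from sympy import symbols, groebner
>
>     x, y = symbols('x y')
>
>     # Thales' theorem: C = (x, y) on the circle with diameter AB,
>     # where A = (-1, 0) and B = (1, 0), sees AB under a right angle.
>     hypotheses = [x**2 + y**2 - 1]        # C lies on the unit circle
>     conclusion = (x + 1)*(x - 1) + y*y    # dot(C - A, C - B) = 0
>
>     G = groebner(hypotheses, x, y)
>     # True: the conclusion follows algebraically from the hypotheses.
>     print(G.contains(conclusion))
>
> This kind of check is exactly the sort of 'formal' problem that a 
> symbolic system can answer with full precision and speed.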
>
> I also think that such advances in AI systems raise concerns about 
> software engineering careers and the educational system, which may be 
> interesting for some readers of this forum.
>
> For example, math exams may become pointless in the future, even for 
> identifying and training good scientists or engineers, because trained AI 
> models can already beat the IMO. I think that in the AI age, education 
> should change so that it no longer means sitting through boring and 
> repetitive drills, which do not even reflect the capabilities of future 
> engineers or scientists.
>
> Also, I notice that software engineering is changing, because AI models 
> can complete a lot of code, their precision is improving, and people are 
> getting better at prompting. 
> This also seems to devalue code-sharing efforts in open source 
> communities, because code can now be generated rather than shared.
>
