Wrong answers from AI can be challenged. You can ask the AI where its references come from. You can suggest alternatives and get a revised response. This isn't just a better Google. It is useful to me: work with the AI and together you can work out things that are difficult on your own.

Lennie
-----Original Message-----
From: IBM Mainframe Discussion List <[email protected]> On Behalf Of Andrew Rowley
Sent: 25 May 2025 23:56
To: [email protected]
Subject: Re: RTFM

On 26/05/2025 3:31 am, Lennie Bradshaw wrote:
> I have found AI to be really useful in negotiating the wealth of IBM
> documentation. I can ask AI a question that is pretty detailed and get a
> meaningful response. It is not always right, but I can then discuss it and
> refine it with the AI and find the correct answer.

My attempts to use AI for mainframe-related questions have yielded 100% wrong answers. Some have been quite convincing and taken some research to show they were wrong, but still 100% wrong. For non-mainframe topics I would put the rate at about 50%. If there are a lot of examples of something out there, it may generate a good result. But there is no actual intelligence in AI.

What worries me most is what happens when knowledge changes, e.g. when a new feature is developed that makes old information obsolete. How does that get incorporated into the AI model? Where does the training data come from, and how does the model verify its reliability?

--
Andrew Rowley
Black Hill Software

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
----------------------------------------------------------------------
