Something weird here: why is it so hard to have a deterministic program capable of checking a proof or anything math-related? Isn't math super deterministic in a way that natural language is not? From first principles, it should be possible to do this without an LLM verifier.
I think that mathematical proofs, as they are actually written, rely on natural language and on a large amount of implicit shared knowledge. They are not formalized in the Principia Mathematica sense, and they are even further from the syntax required by modern theorem provers. Even the most rigorous proofs, such as those in Bourbaki, are not directly translatable into a fully formal system.
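To make the gap concrete, here is a minimal, hypothetical Lean 4 sketch (my own example, not from the paper) of what the one-line informal claim "the sum of two even numbers is even" looks like once every definition and witness a human reader would leave implicit is spelled out:

    -- Hypothetical sketch: a fully explicit formalization of the informal claim
    -- "the sum of two even numbers is even"; evenness is defined by hand here.
    def IsEven (n : Nat) : Prop := ∃ k, n = 2 * k

    theorem even_add_even {a b : Nat} (ha : IsEven a) (hb : IsEven b) :
        IsEven (a + b) :=
      match ha, hb with
      | ⟨x, hx⟩, ⟨y, hy⟩ =>
        -- witness: a + b = 2 * (x + y), checked by rewriting with distributivity
        ⟨x + y, by rw [hx, hy, Nat.mul_add]⟩

And that is a statement with essentially no shared background at all; real published proofs lean on far more.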
Math can be super deterministic but is often difficult to compute with, because of concepts like reasoning by induction. I personally had to unlearn and rebuild my computation-based understanding of math to 'get' pure math. Another example is set building: in pure math you often don't need to compute the members of a set, you just need to agree that some members meet the criteria. How many there are, or how many things fall outside the set, often isn't meaningful; you accept it and move on with the proof. From a computing perspective this can be difficult to put together.
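That last point can be made concrete with a tiny, hypothetical Lean 4 sketch (my own illustration, not from the paper): classical logic lets a proof conclude that some element satisfies a property without ever computing which one, which is exactly the step that is awkward from a 'just run it' computational view.

    -- Hypothetical sketch: non-constructive existence, no extra imports needed.
    -- From "it is not the case that every n fails P" we conclude "some n
    -- satisfies P" by contradiction, without ever exhibiting a witness.
    example (P : Nat → Prop) (h : ¬ ∀ n, ¬ P n) : ∃ n, P n :=
      Classical.byContradiction fun hne =>
        h (fun n hp => hne ⟨n, hp⟩)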
I haven’t read the paper yet, but I’d imagine the issue is converting the natural language generated by the reasoner into a form where a formal verifier can be applied.
Such a high-performance program could indeed be superior, if it existed (this area is very underdeveloped; there is no well-established, widely deployed solution that can handle a large domain) and if mathematics were actually formalized in that program's DSL, which also hasn't happened yet.
It's cool, but I genuinely cannot fathom why they are targeting natural language proofs instead of a proof assistant.
Natural language is a lot more, well, readable than, say, Lean. With Lean you get a lot less intuition about, and understanding of, what the model is attempting to do in the first place.
So it's designed for informal proofs, and it "verifies" based on a rubric-fitting function and human interaction, is that right?
What's the use case for a system like this?
Exciting stuff from a fantastic team.